Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of an individual vortex generator or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
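The core idea of a vane source-term model can be sketched as follows. This is an illustrative stand-in, not the OVERFLOW implementation: the calibration constant `c_vg`, the lift-direction construction, and the quadratic force scaling are assumptions, chosen only to show how a side force that adapts to the local flow might be added to the momentum and energy equations.

```python
import numpy as np

def vg_source_terms(u, rho, vane_area, cell_volume, alpha, span_dir, c_vg=1.0):
    """Per-cell momentum (N/m^3) and energy (W/m^3) sources for one vane.

    u         : local velocity vector (m/s)
    rho       : local density (kg/m^3)
    alpha     : vane angle of incidence (rad)
    span_dir  : unit vector along the vane span (hypothetical parameter)
    """
    speed = np.linalg.norm(u)
    if speed == 0.0:
        return np.zeros(3), 0.0
    u_hat = u / speed
    # side-force direction: perpendicular to the local flow, normal to the span
    lift_dir = np.cross(span_dir, u_hat)
    lift_dir /= np.linalg.norm(lift_dir)
    # magnitude scales with local dynamic pressure, vane area, and incidence
    f = c_vg * rho * speed**2 * vane_area * np.sin(alpha) * lift_dir / cell_volume
    # energy source is the work done by the force; it vanishes analytically
    # because the force is perpendicular to the flow, but is kept for generality
    e = float(np.dot(f, u))
    return f, e
```

Because the force direction is built perpendicular to the local velocity, the strength automatically follows the local flow, which is the behavior the abstract describes.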
Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2005-01-01
A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well when compared to a two-dimensional flat plate simulation that used a steady mass flow boundary condition to represent the micro jet. The model was also compared to two three-dimensional flat plate cases that used a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of the velocity distribution were made before and after the jet, and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of an individual steady micro jet or of several jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
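Adding a jet's mass flow and momentum to the governing equations amounts to volumetric source terms in the cell(s) where the jet acts. A minimal sketch, not the OVERFLOW code: the kinetic-energy form of the energy source and the single-cell smearing are assumptions.

```python
import numpy as np

def jet_source_terms(mdot, v_jet, jet_dir, cell_volume):
    """Per-unit-volume sources for a steady jet smeared over one grid cell.

    mdot        : jet mass flow rate (kg/s)
    v_jet       : jet exit velocity magnitude (m/s)
    jet_dir     : jet direction vector (normalized internally)
    cell_volume : volume of the receiving cell (m^3)
    """
    d = np.asarray(jet_dir, dtype=float)
    d /= np.linalg.norm(d)
    s_mass = mdot / cell_volume                      # kg/(m^3 s), continuity
    s_mom = mdot * v_jet * d / cell_volume           # N/m^3, momentum
    s_energy = 0.5 * mdot * v_jet**2 / cell_volume   # W/m^3, kinetic energy flux
    return s_mass, s_mom, s_energy
```

Integrating `s_mass` over the cell volume recovers the prescribed mass flow, which is the consistency check such a model must satisfy.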
High-order scheme for the source-sink term in a one-dimensional water temperature model
Jing, Zheng; Kang, Ling
2017-01-01
The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
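The two-step splitting can be sketched for a vertical 1D temperature profile. This is a simplified illustration: a first-order explicit update stands in for the paper's high-order undetermined-coefficient source scheme, and fixed (Dirichlet) boundary temperatures are assumed.

```python
import numpy as np

def step_split(T, dt, dz, kappa, source):
    """One operator-split step of dT/dt = kappa * d2T/dz2 + source.

    Step 1: source-sink update (explicit here; the paper uses a high-order
            undetermined-coefficient scheme).
    Step 2: Crank-Nicolson for the diffusion term, boundary values held fixed.
    """
    n = len(T)
    T = T + dt * source                      # step 1: source-sink term
    r = kappa * dt / (2.0 * dz**2)
    A = np.eye(n) * (1.0 + 2.0 * r)          # implicit (left) operator
    B = np.eye(n) * (1.0 - 2.0 * r)          # explicit (right) operator
    for i in range(1, n):
        A[i, i-1] = A[i-1, i] = -r
        B[i, i-1] = B[i-1, i] = r
    for i in (0, n - 1):                     # Dirichlet rows: identity
        A[i, :] = 0.0; A[i, i] = 1.0
        B[i, :] = 0.0; B[i, i] = 1.0
    return np.linalg.solve(A, B @ T)
```

A uniform profile with zero source is a steady state of this scheme, and a uniform source raises the whole column by `dt * source` per step, which gives a quick sanity check.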
NASA Astrophysics Data System (ADS)
Nagai, Haruyasu; Terada, Hiroaki; Tsuduki, Katsunori; Katata, Genki; Ota, Masakazu; Furuno, Akiko; Akari, Shusaku
2017-09-01
In order to assess the radiological dose to the public resulting from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in Japan, especially for the early phase of the accident when no measured data are available for that purpose, the spatial and temporal distribution of radioactive materials in the environment are reconstructed by computer simulations. In this study, by refining the source term of radioactive materials discharged into the atmosphere and modifying the atmospheric transport, dispersion and deposition model (ATDM), the atmospheric dispersion simulation of radioactive materials is improved. Then, a database of spatiotemporal distribution of radioactive materials in the air and on the ground surface is developed from the output of the simulation. This database is used in other studies for the dose assessment by coupling with the behavioral pattern of evacuees from the FDNPS accident. By the improvement of the ATDM simulation to use a new meteorological model and sophisticated deposition scheme, the ATDM simulations reproduced well the 137Cs and 131I deposition patterns. For the better reproducibility of dispersion processes, further refinement of the source term was carried out by optimizing it to the improved ATDM simulation by using new monitoring data.
NASA Astrophysics Data System (ADS)
Guinot, Vincent
2017-11-01
The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. 9 different geometries are considered. 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model, (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for, and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on the high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.
NASA Astrophysics Data System (ADS)
Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George
2017-09-01
Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.
NASA Astrophysics Data System (ADS)
Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold
2013-09-01
The viability of a spectrally averaging model to perform radiative transfer calculations in the infrared including scattering by atmospheric particles is examined for the application of infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm-1) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source for the differences between the cloud simulations of both models is the cloud edge sampling. Furthermore we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.
A simple mass-conserved level set method for simulation of multiphase flows
NASA Astrophysics Data System (ADS)
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee the overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that mass is well conserved by the present method.
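The paper derives the compensating source term analytically from the continuity equation; the mass-conservation idea itself can be illustrated with a cruder global variant, sketched below under assumptions of our own: a uniform shift of the level set function, found by bisection, that restores the target enclosed area on a 2D grid.

```python
import numpy as np

def restore_mass(phi, target_area, dx):
    """Shift phi uniformly so that the area of the region {phi < 0} matches
    target_area. Bisection on the shift c; area is measured by cell counting.

    Illustrative only: the paper's source term is local and analytic, whereas
    this global shift merely demonstrates the mass-correction principle.
    """
    lo, hi = phi.min(), phi.max()
    for _ in range(60):
        c = 0.5 * (lo + hi)
        area = np.count_nonzero(phi < c) * dx * dx
        if area < target_area:
            lo = c          # need a larger enclosed region: shift further
        else:
            hi = c
    return phi - 0.5 * (lo + hi)
```

On a grid the recovered area is quantized at the cell level, so the correction is accurate to within roughly one row of cells.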
Further development of a global pollution model for CO, CH4, and CH2O
NASA Technical Reports Server (NTRS)
Peters, L. K.
1975-01-01
Global tropospheric pollution models are developed that describe the transport and the physical and chemical processes occurring between the principal sources and sinks of CH4 and CO. Results are given of long term static chemical kinetic computer simulations and preliminary short term dynamic simulations.
JAMSS: proteomics mass spectrometry simulation in Java.
Smith, Rob; Prince, John T
2015-03-01
Countless proteomics data processing algorithms have been proposed, yet few have been critically evaluated due to lack of labeled data (data with known identities and quantities). Although labeling techniques exist, they are limited in terms of confidence and accuracy. In silico simulators have recently been used to create complex data with known identities and quantities. We propose Java Mass Spectrometry Simulator (JAMSS): a fast, self-contained in silico simulator capable of generating simulated MS and LC-MS runs while providing meta information on the provenance of each generated signal. JAMSS improves upon previous in silico simulators in terms of its ease of installation, minimal parameters, graphical user interface, multithreading capability, retention time shift model and reproducibility. The simulator writes mzML 1.1.0 output. It is open source software licensed under the GPLv3. The software and source are available at https://github.com/optimusmoose/JAMSS.
High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves
NASA Technical Reports Server (NTRS)
Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.
2012-01-01
In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.
Modeling Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2011-01-01
A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S
2012-03-01
In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data.
Effect of source location and listener location on ILD cues in a reverberant room
NASA Astrophysics Data System (ADS)
Ihlefeld, Antje; Shinn-Cunningham, Barbara G.
2004-05-01
Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.]
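The short-term ILD computation described above can be sketched in a few lines. This is a broadband simplification of the study's analysis: the third-octave band filtering is omitted, and the window length and the small floor added to each power estimate are assumptions.

```python
import numpy as np

def short_term_ild(left, right, fs, win_ms=20.0):
    """Short-term ILD in dB over consecutive non-overlapping windows.

    left, right : ear signals (same sampling rate fs, in Hz)
    Returns one ILD value (10*log10 of the left/right power ratio) per window.
    """
    n = int(fs * win_ms / 1000.0)
    ilds = []
    for start in range(0, min(len(left), len(right)) - n + 1, n):
        pl = np.mean(left[start:start + n] ** 2) + 1e-20   # avoid log(0)
        pr = np.mean(right[start:start + n] ** 2) + 1e-20
        ilds.append(10.0 * np.log10(pl / pr))
    return np.array(ilds)
```

The mean and variance of the returned array are exactly the two statistics the abstract argues a listener would need to accumulate.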
A New Unsteady Model for Dense Cloud Cavitation in Cryogenic Fluids
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Ahuja, Vineet
2005-01-01
Contents include the following: Background on thermal effects in cavitation. Physical properties of hydrogen. Multi-phase cavitation with thermal effect. Solution procedure. Cavitation model overview. Cavitation source terms. New cavitation model. Source term for bubble growth. One-equation LES model. Unsteady ogive simulations: liquid nitrogen. Unsteady incompressible flow in a pipe. Time-averaged cavity length for NACA15 flowfield.
Sansalone, John; Raje, Saurabh; Kertesz, Ruben; Maccarone, Kerrilynn; Seltzer, Karl; Siminari, Michele; Simms, Peter; Wood, Brandon
2013-12-01
The built environs alter hydrology and water resource chemistry. Florida is subject to nutrient criteria and is promulgating "no-net-load-increase" criteria for runoff and constituents (nutrients and particulate matter, PM). With such criteria, green infrastructure, hydrologic restoration, indirect reuse and source control are potential design solutions. The study simulates runoff and constituent load control through urban source area re-design to provide long-term "no-net-load-increases". A long-term continuous simulation of pre- and post-development response for an existing surface parking facility is quantified. Retrofits include a biofiltration area reactor (BAR) for hydrologic and denitrification control. A linear infiltration reactor (LIR) of cementitious permeable pavement (CPP) provides infiltration, adsorption and filtration. Pavement cleaning provided source control. Simulation of climate and source area data indicates re-design achieves "no-net-load-increases" at lower costs compared to standard construction. The retrofit system yields lower cost per nutrient load treated compared to Best Management Practices (BMPs).
NASA Technical Reports Server (NTRS)
Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter
2015-01-01
Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, source parameters such as injection altitude, eruption time, and duration are often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects such as MERRA. The AeroCOM inventory provides an eruption's daily SO2 flux and plume top altitude, yet an eruption can be very short lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with observations, while for other eruptions, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations. In other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but shows very different dispersal rates. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back trajectory methods have been developed that use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, and the results are compared to the corresponding entries in the AeroCOM volcanic emission inventory. The nature of these mixed results is discussed with respect to the source term estimates.
Simulations of cold electroweak baryogenesis: dependence on the source of CP-violation
NASA Astrophysics Data System (ADS)
Mou, Zong-Gang; Saffin, Paul M.; Tranberg, Anders
2018-05-01
We compute the baryon asymmetry created in a tachyonic electroweak symmetry breaking transition, focusing on the dependence on the source of effective CP-violation. Earlier simulations of Cold Electroweak Baryogenesis have almost exclusively considered a very specific CP-violating term explicitly biasing Chern-Simons number. We compare four different dimension six, scalar-gauge CP-violating terms, involving both the Higgs field and another dynamical scalar coupled to SU(2) or U(1) gauge fields. We find that for sensible values of parameters, all implementations can generate a baryon asymmetry consistent with observations, showing that baryogenesis is a generic outcome of a fast tachyonic electroweak transition.
Effect of Loss on Multiplexed Single-Photon Sources (Open Access Publisher’s Version)
2015-04-28
lossy components on near- and long-term experimental goals, we simulate the multiplexed sources when used for many-photon state generation under various...efficient integer factorization and digital quantum simulation [7, 8], which relies critically on the development of a high-performance, on-demand photon...(SPDC) or spontaneous four-wave mixing: parametric processes which use a pump laser in a nonlinear material to spontaneously generate photon pairs
A large eddy simulation scheme for turbulent reacting flows
NASA Technical Reports Server (NTRS)
Gao, Feng
1993-01-01
The recent development of the dynamic subgrid-scale (SGS) model has provided a consistent method for generating localized turbulent mixing models and has opened up great possibilities for applying the large eddy simulation (LES) technique to real world problems. Given the fact that the direct numerical simulation (DNS) cannot solve for engineering flow problems in the foreseeable future (Reynolds 1989), the LES is certainly an attractive alternative. It seems only natural to bring this new development in SGS modeling to bear on reacting flows. The major stumbling block for introducing LES to reacting flow problems has been the proper modeling of the reaction source terms. Various models have been proposed, but none of them has a wide range of applicability. For example, some of the models in combustion have been based on the flamelet assumption which is only valid for relatively fast reactions. Some other models have neglected the effects of chemical reactions on the turbulent mixing time scale, which is certainly not valid for fast and non-isothermal reactions. The probability density function (PDF) method can be usefully employed to deal with the modeling of the reaction source terms. In order to fit into the framework of LES, a new PDF, the large eddy PDF (LEPDF), is introduced. This PDF provides an accurate representation for the filtered chemical source terms and can be readily calculated in the simulations. The details of this scheme are described.
Modeling Vortex Generators in the Wind-US Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counter-rotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
Numerical simulation of hydrothermal circulation in the Cascade Range, north-central Oregon
Ingebritsen, S.E.; Paulson, K.M.
1990-01-01
Alternate conceptual models to explain near-surface heat-flow observations in the central Oregon Cascade Range involve (1) an extensive mid-crustal magmatic heat source underlying both the Quaternary arc and adjacent older rocks or (2) a narrower deep heat source which is flanked by a relatively shallow conductive heat-flow anomaly caused by regional ground-water flow (the lateral-flow model). Relative to the mid-crustal heat source model, the lateral-flow model suggests a more limited geothermal resource base, but a better-defined exploration target. We simulated ground-water flow and heat transport through two cross sections trending west from the Cascade range crest in order to explore the implications of the two models. The thermal input for the alternate conceptual models was simulated by varying the width and intensity of a basal heat-flow anomaly and, in some cases, by introducing shallower heat sources beneath the Quaternary arc. Near-surface observations in the Breitenbush Hot Springs area are most readily explained in terms of lateral heat transport by regional ground-water flow; however, the deep thermal structure still cannot be uniquely inferred. The sparser thermal data set from the McKenzie River area can be explained either in terms of deep regional ground-water flow or in terms of a conduction-dominated system, with ground-water flow essentially confined to Quaternary rocks and fault zones.
ESPC Coupled Global Prediction System
2014-09-30
active, and cloud-nucleating aerosols into NAVGEM for use in long-term simulations and forecasts and for use in the full coupled system. APPROACH...cloud-nucleating aerosols into NAVGEM for use in long-term simulations and forecasts for ESPC applications. We are relying on approaches, findings...function. For sea salt we follow NAAPS and use a source that depends on ocean surface winds and relative humidity. In lieu of the relevant
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on Wigner-Ville distribution (WVD) and Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative value of the auto WVD of the sources is fully considered. Then after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found out exactly with the proposed approach no matter how many active sources there are as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is made and finally the numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with the existing ones.
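The WVD underlying this approach is straightforward to compute for a short discrete signal. A minimal pseudo-WVD sketch (not the paper's algorithm): row n is the FFT over lag m of the instantaneous autocorrelation x[n+m]·conj(x[n-m]), so a tone at normalized frequency f0 appears at the doubled bin 2·f0·N.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution W[n, k] of a complex signal x.

    For each time index n, the available lags are limited by the signal edges
    (hence "pseudo"). The lag sequence is Hermitian, so each FFT row is real.
    """
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)            # lags that stay inside the signal
        corr = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            corr[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(corr).real
    return W
```

Summing a row over frequency recovers N times the instantaneous power, the time-marginal property that makes the WVD attractive for TF-point extraction.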
A new traffic model with a lane-changing viscosity term
NASA Astrophysics Data System (ADS)
Ko, Hung-Tang; Liu, Xiao-He; Guo, Ming-Min; Wu, Zheng
2015-09-01
In this paper, a new continuum traffic flow model is proposed, with a lane-changing source term in the continuity equation and a lane-changing viscosity term in the acceleration equation. Based on previous literature, the source term addresses the impact of speed difference and density difference between adjacent lanes, which provides better precision for free lane-changing simulation; the viscosity term turns lane-changing behavior into a “force” that may influence speed distribution. Using a flux-splitting scheme for the model discretization, two cases are investigated numerically. The case under a homogeneous initial condition shows that the numerical results by our model agree well with the analytical ones; the case with a small initial disturbance shows that our model can simulate the evolution of perturbation, including propagation, dissipation, cluster effect and stop-and-go phenomenon. Project supported by the National Natural Science Foundation of China (Grant Nos. 11002035 and 11372147) and Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment (Grant No. CURE 14024).
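The continuity-with-source part of such a model can be sketched with a simple scheme. This is not the paper's flux-splitting discretization: a Lax-Friedrichs step on a periodic road is substituted here purely to illustrate how a lane-changing source S enters the density update, and the viscosity term in the acceleration equation is omitted.

```python
import numpy as np

def lf_step(rho, v, S, dt, dx):
    """One Lax-Friedrichs step of rho_t + (rho*v)_x = S, periodic boundaries.

    rho : lane density (vehicles per unit length)
    v   : local speed
    S   : lane-changing source (net vehicle inflow per unit length and time)
    """
    q = rho * v                                            # flux
    rho_avg = 0.5 * (np.roll(rho, 1) + np.roll(rho, -1))   # LF averaging
    dq = (np.roll(q, -1) - np.roll(q, 1)) / (2.0 * dx)     # central flux gradient
    return rho_avg - dt * dq + dt * S
```

With uniform density and speed, the flux gradient vanishes and only the source changes the density, matching the homogeneous analytical case mentioned in the abstract.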
Coarse Grid CFD for underresolved simulation
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.
2010-11-01
CFD simulation of the complete reactor core of a nuclear power plant requires computational resources so large that this brute-force approach has not yet been pursued. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, so additional volumetric source terms are required to model viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf
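The tabulated-correlation idea reduces to a lookup: a fully resolved reference simulation fills a table mapping a local flow parameter to a volumetric source, which the coarse solver interpolates on the fly. The table values below are invented placeholders, not data from the cited work:

```python
import numpy as np

# hypothetical table from a fully resolved reference simulation:
# local Reynolds number -> volumetric momentum sink (loss correlation)
re_table = np.array([1e3, 5e3, 1e4, 5e4, 1e5])
src_table = np.array([0.08, 0.045, 0.035, 0.022, 0.018])

def volumetric_source(re_local):
    """Look up the sub-grid source term by interpolating the table."""
    return np.interp(re_local, re_table, src_table)

s_lo = volumetric_source(1e3)     # at a table node
s_mid = volumetric_source(7.5e3)  # linearly interpolated between nodes
```

In a coarse-grid solver this lookup would be evaluated per cell and added to the momentum equation as a volumetric sink.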
Observation-based source terms in the third-generation wave model WAVEWATCH
NASA Astrophysics Data System (ADS)
Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.
2015-12-01
Measurements collected during the AUSWEX field campaign at Lake George (Australia) resulted in new insights into the processes of wind-wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The new nonlinear wind input term accounts for the dependence of growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature an inherent breaking term, a cumulative dissipation term, and a term due to production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms as implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement in growth curves as well as in integral and spectral parameters in both the simulations and the hindcasts.
NASA Astrophysics Data System (ADS)
Gica, E.
2016-12-01
The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by the NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50 km × 100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and lengths of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root-mean-square error are used to compare tide gauge data and the simulated tsunami time series. Results of this study will help SIFT users determine whether the simulated tide gauge tsunami time series from a specific tsunami source solution lies within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
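The inversion step can be illustrated as a least-squares fit of unit-source coefficients to an observed buoy time series; the waveforms below are synthetic stand-ins, not actual propagation-database entries:

```python
import numpy as np

# hypothetical pre-computed unit-source time series at a DART buoy
# (columns of G = response of one unit source each)
t = np.linspace(0.0, 10.0, 200)
G = np.column_stack([np.sin(t), np.sin(2 * t), np.cos(t)])

true_coef = np.array([2.0, 0.5, 1.0])   # "true" source combination
obs = G @ true_coef                      # synthetic observed waveform

# least-squares inversion for the unit-source coefficients
coef, *_ = np.linalg.lstsq(G, obs, rcond=None)

# RMSE between observed and reconstructed series, as used to rank solutions
pred = G @ coef
rmse = np.sqrt(np.mean((obs - pred) ** 2))
```

Ranking candidate source combinations by this RMSE against a tide gauge record mirrors the comparison described in the abstract.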
IMPROVEMENTS IN THE THERMAL NEUTRON CALIBRATION UNIT, TNF2, AT LNMRI/IRD.
Astuto, A; Fernandes, S S; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T
2018-02-21
The standard thermal neutron flux unit, TNF2, at the Brazilian National Ionizing Radiation Metrology Laboratory was rebuilt. Fluence is still achieved by moderating four 241Am-Be sources of 0.6 TBq each. The facility was re-simulated and redesigned with a graphite core surrounded by paraffin-loaded graphite blocks. Simulations with the MCNPX code were performed for different geometric arrangements of moderator materials and neutron sources. The resulting neutron fluence quality was evaluated in terms of intensity, spectrum and cadmium ratio. After this step, the system was assembled based on the simulation results, and measurements were performed both with equipment available at LNMRI/IRD and with simulated instruments. This work focuses on the characterization of a central chamber point and of external points around the TNF2 in terms of neutron spectrum, fluence and ambient dose equivalent, H*(10). The system was validated with spectrum, fluence and H*(10) measurements to ensure traceability.
NASA Astrophysics Data System (ADS)
Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming
2017-05-01
Numerical models solving the full 2-D shallow water equations (SWEs) have been increasingly used to simulate overland flows and better understand the transient flow dynamics of flash floods in a catchment. However, key challenges remain unresolved in the development of fully dynamic overland flow models, related to (1) the difficulty of maintaining numerical stability and accuracy in the limit of vanishing water depth and (2) inaccurate estimation of velocities and discharges on slopes caused by the strong nonlinearity of the friction terms. This paper aims to tackle these challenges and presents a new numerical scheme for accurately and efficiently modeling large-scale transient overland flows over complex terrain. The proposed scheme features a novel surface reconstruction method (SRM) to correctly compute slope source terms and maintain numerical stability at small water depths, and a new implicit discretization method to handle the highly nonlinear friction terms. The resulting shallow water overland flow model is first validated against analytical and experimental test cases and then applied to simulate a hypothetical rainfall event in the 42 km2 Haltwhistle Burn catchment, UK.
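The friction-term difficulty can be illustrated with a standard point-implicit Manning update, a common stabilized form that stays bounded as the depth vanishes (this is a generic sketch, not necessarily the paper's new scheme):

```python
def implicit_friction(q_star, h, dt, n=0.03, g=9.81, eps=1e-10):
    """Point-implicit Manning friction update.

    q_star : provisional unit discharge after advection/slope terms
    h      : water depth
    n      : Manning roughness (illustrative value)

    The friction source S_f = -g n^2 q|q| / h^(7/3) is linearized
    about q_star and treated implicitly, so the update damps q
    instead of blowing up when h -> 0.
    """
    denom = 1.0 + dt * g * n**2 * abs(q_star) / max(h, eps) ** (7.0 / 3.0)
    return q_star / denom

q_deep = implicit_friction(1.0, 1.0, 0.1)      # deep water: mild damping
q_thin = implicit_friction(1.0, 1e-6, 0.1)     # thin film: discharge -> 0
```

An explicit treatment of the same term would require vanishingly small time steps as h decreases; the implicit form removes that restriction.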
Performance evaluation of WAVEWATCH III model in the Persian Gulf using different wind resources
NASA Astrophysics Data System (ADS)
Kazeminezhad, Mohammad Hossein; Siadatmousavi, Seyed Mostafa
2017-07-01
The third-generation wave model, WAVEWATCH III, was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and the energy transfer from wind to waves were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The simulation results were compared to altimeter-derived significant wave heights and to wave parameters measured at two stations in the northern Persian Gulf, using statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed that wave height was underestimated with all wind sources. However, the model performed best when GFS-Analysis wind data were used. In general, when the wind veered from southeast to northwest and the wind speed was high during the rotation, the underestimation of wave height was severe. Except for the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performance of the other formulations was practically similar. In terms of statistics, however, the Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) source terms with the TEST405 parameterization were the most successful formulation for the Persian Gulf when compared to in situ and altimeter-derived observations.
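The Taylor-diagram comparison rests on three linked statistics: correlation, standard deviations, and centered RMS difference, which satisfy E'^2 = sigma_m^2 + sigma_o^2 - 2 sigma_m sigma_o R. A minimal sketch with synthetic series:

```python
import numpy as np

def taylor_stats(model, obs):
    """Statistics summarized on a Taylor diagram: correlation coefficient,
    model and observation standard deviations, and centered RMS difference."""
    m = model - model.mean()
    o = obs - obs.mean()
    r = (m * o).mean() / (m.std() * o.std())
    crmsd = np.sqrt(((m - o) ** 2).mean())
    return r, model.std(), obs.std(), crmsd

# synthetic "model" and "observation" series for illustration
model = np.sin(np.linspace(0.0, 6.0, 50))
obs = model + 0.1 * np.cos(np.linspace(0.0, 6.0, 50))
r, sm, so, crmsd = taylor_stats(model, obs)
```

The law-of-cosines identity is what lets a single point on the diagram encode all three quantities at once.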
NASA Astrophysics Data System (ADS)
Saheer, Sahana; Pathak, Amey; Mathew, Roxy; Ghosh, Subimal
2016-04-01
Simulation of the Indian Summer Monsoon (ISM), with its seasonal and subseasonal characteristics, is crucial for predictions and projections that support sustainable agricultural planning and water resources management. The Climate Forecast System version 2 (CFSv2), a state-of-the-art coupled climate model developed by the National Centers for Environmental Prediction (NCEP), is evaluated here for its simulation of the ISM. Even though CFSv2 is a fully coupled ocean-atmosphere-land model with advanced physics, increased resolution and refined initialization, its ISM simulations, predictions and projections are not satisfactory in terms of the seasonal mean and its variability. Numerous studies have verified CFSv2 forecasts in terms of the seasonal mean and its variability, active and break spells, and El Niño Southern Oscillation (ENSO)-monsoon interactions. Underestimation of JJAS precipitation over the Indian land mass is one of the major drawbacks of CFSv2. The ISM draws the moisture required to maintain its precipitation from different oceanic and land sources. In this work, we quantify the fraction of moisture supplied by different sources in the CFSv2 simulations and compare the findings with observed fractions. We also investigate possible variations in the moisture contributions from these different sources. We suspect that deviations in the relative moisture contributions from different sources to various sinks over the monsoon region have resulted in the observed dry bias. We also find that over the Arabian Sea, the key moisture source of the ISM, there is a premature build-up of specific humidity during May and a decline during the later months of JJAS. This is another reason for the underestimation of JJAS mean precipitation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawloski, G A; Tompson, A F B; Carle, S F
The objectives of this report are to develop, summarize, and interpret a series of detailed unclassified simulations that forecast the nature and extent of radionuclide release and near-field migration in groundwater away from the CHESHIRE underground nuclear test at Pahute Mesa at the NTS over 1000 yrs. Collectively, these results are called the CHESHIRE Hydrologic Source Term (HST). The CHESHIRE underground nuclear test was one of 76 underground nuclear tests that were fired below or within 100 m of the water table between 1965 and 1992 in Areas 19 and 20 of the NTS. These areas now comprise the Pahute Mesa Corrective Action Unit (CAU) for which a separate subregional scale flow and transport model is being developed by the UGTA Project to forecast the larger-scale migration of radionuclides from underground tests on Pahute Mesa. The current simulations are being developed, on one hand, to more fully understand the complex coupled processes involved in radionuclide migration, with a specific focus on the CHESHIRE test. While remaining unclassified, they are as site specific as possible and involve a level of modeling detail that is commensurate with the most fundamental processes, conservative assumptions, and representative data sets available. However, the simulation results are also being developed so that they may be simplified and interpreted for use as a source term boundary condition at the CHESHIRE location in the Pahute Mesa CAU model. In addition, the processes of simplification and interpretation will provide generalized insight as to how the source term behavior at other tests may be considered or otherwise represented in the Pahute Mesa CAU model.
Filtered Mass Density Function for Design Simulation of High Speed Airbreathing Propulsion Systems
NASA Technical Reports Server (NTRS)
Drozda, T. G.; Sheikhi, R. M.; Givi, Peyman
2001-01-01
The objective of this research is to develop and implement new methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed two years of Phase I of this research. This annual report provides a brief, up-to-date summary of our activities during the period September 1, 2000 through August 31, 2001. In the past year, a methodology termed "velocity-scalar filtered density function" (VSFDF) was developed and implemented for LES of turbulent flows. In this methodology the effects of the unresolved subgrid scales (SGS) are taken into account by considering the joint probability density function (PDF) of all components of the velocity and scalar vectors. An exact transport equation is derived for the VSFDF in which the effects of the unresolved SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms appear in closed form. The remaining unclosed terms in this equation are modeled. A system of stochastic differential equations (SDEs) that yields statistically equivalent results to the modeled VSFDF transport equation is constructed. These SDEs are solved numerically by a Lagrangian Monte Carlo procedure. The consistency of the proposed SDEs and the convergence of the Monte Carlo solution are assessed by comparison with results obtained by an Eulerian LES procedure in which the corresponding transport equations for the first two SGS moments are solved. The unclosed SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms in the Eulerian LES are replaced by the corresponding terms from the VSFDF equation. The consistency of the results is then analyzed for a two-dimensional mixing layer.
Discriminating Simulated Vocal Tremor Source Using Amplitude Modulation Spectra
Carbonell, Kathy M.; Lester, Rosemary A.; Story, Brad H.; Lotto, Andrew J.
2014-01-01
Objectives/Hypothesis: Sources of vocal tremor are difficult to categorize perceptually and acoustically. This paper describes a preliminary attempt to discriminate vocal tremor sources through the use of spectral measures of the amplitude envelope. The hypothesis is that different vocal tremor sources are associated with distinct patterns of acoustic amplitude modulations.
Study Design: Statistical categorization methods (discriminant function analysis) were used to discriminate signals from simulated vocal tremor with different sources using only acoustic measures derived from the amplitude envelopes.
Methods: Simulations of vocal tremor were created by modulating parameters of a vocal fold model corresponding to oscillations of respiratory driving pressure (respiratory tremor), degree of vocal fold adduction (adductory tremor) and fundamental frequency of vocal fold vibration (F0 tremor). The acoustic measures were based on spectral analyses of the amplitude envelope computed across the entire signal and within select frequency bands.
Results: The signals could be categorized (with accuracy well above chance) in terms of the simulated tremor source using only measures of the amplitude envelope spectrum, even when multiple sources of tremor were included.
Conclusions: These results supply initial support for an amplitude-envelope-based approach to identifying the source of vocal tremor and provide further evidence for the rich information about talker characteristics present in the temporal structure of the amplitude envelope. PMID: 25532813
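The amplitude-envelope measures can be approximated with a Hilbert-transform envelope followed by a Fourier transform. This is a generic sketch, not the study's exact feature set; the 5 Hz modulation rate below is an arbitrary stand-in for a tremor rate:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Amplitude-modulation spectrum: magnitude spectrum of the
    Hilbert amplitude envelope with its mean removed."""
    env = np.abs(hilbert(x))
    env = env - env.mean()
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return freqs, spec

# a 200 Hz carrier amplitude-modulated at 5 Hz (toy tremor-like signal)
fs = 2000
t = np.arange(0, 2.0, 1.0 / fs)
x = (1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 200 * t)
freqs, spec = envelope_spectrum(x, fs)
peak = freqs[np.argmax(spec)]        # dominant modulation rate
```

In a classifier, features such as the peak modulation frequency and band-limited envelope energies would feed the discriminant function analysis.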
Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2003-01-01
A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs), is presented. Existing structured grid generation algorithms use source term hybridization to control grid lines, imposing orthogonality implicitly at the boundary and explicitly in the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that using such a bridging function with source term hybridization can consume excessive computational resources and diminish robustness. A new approach, Anisotropic Lagrange-Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs compared with source term hybridization. Grid generation on a blunt cone and on a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster and more robust, and to produce higher quality grids, than source term hybridization.
NASA Astrophysics Data System (ADS)
Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor
2013-04-01
A sensitivity study of the numerical model and an inverse modelling approach, both applied to atmospheric dispersion after the Chernobyl disaster, are presented in this paper. On the one hand, the robustness of source term reconstruction through advanced data assimilation techniques was tested. On the other hand, classical approaches to sensitivity analysis were enhanced by the use of an optimised forcing field, which is otherwise known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground were considered for iodine-131, caesium-137 and caesium-134. The impact of the implemented parameterizations of the physical processes (dry and wet deposition, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) was investigated in the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation was performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations in the sensitivity study, the statistics comparing the model results to the field measurements of concentrations in air are clearly improved when a reconstructed source term is used. For ground-deposited concentrations, an improvement is seen only for satisfactorily modelled episodes. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also reinforce the use of the reconstructed source term instead of the usual estimated one. A more detailed parameterization of the deposition process also appears able to improve the simulation results. For deposited activities the results are more complex, probably because of a strong sensitivity to some of the meteorological fields, which remain quite uncertain.
Anthropogenic emissions from a variety of sectors including mobile sources have decreased substantially over the past decades despite continued growth in population and economic activity. In this study, we analyze 1990-2010 trends in emission inventories, ambient observations and...
3D Hydrodynamics Simulation of Amazonian Seasonally Flooded Wetlands
NASA Astrophysics Data System (ADS)
Pinel, S. S.; Bonnet, M. P.; Da Silva, J. S.; Cavalcanti, R., Sr.; Calmant, S.
2016-12-01
In the lower Amazon basin, interactions between floodplains and river channels are important in terms of exchanges of water, sediments, and nutrients. These wetlands are considered hotspots of biodiversity and are among the most productive in the world. However, they are threatened by climate change and anthropogenic activities. Hence, considering the implications for predicting the inundation status of floodplain habitats and the strong interactions between water circulation, energy fluxes, and biogeochemical and ecological processes, detailed analyses of flooding dynamics are useful and needed. Numerical inundation models offer a means to study the interactions among different water sources. Modeling flood events in this area is challenging because flows respond to dynamic hydraulic controls from several water sources, complex geomorphology, and vegetation. In addition, because of the difficulty of access, hydrological data are scarce, so monitoring by remote sensing is a good option. In this study, we simulated the filling and drainage processes of an Amazon floodplain (Janauacá Lake, AM, Brazil) over a six-year period (2006-2012). Common approaches to flow modeling in the Amazon region couple a 1D simulation of the main-channel flood wave to a 2D simulation of floodplain inundation. Our approach differs in that the floodplain is fully simulated. The model used is the 3D model IPH-ECO, which consists of a three-dimensional hydrodynamic module coupled with an ecosystem module. The IPH-ECO hydrodynamic module solves the Reynolds-averaged Navier-Stokes equations using a semi-implicit discretization.
After calibrating the simulation against roughness coefficients, we validated the model in terms of vertical accuracy against water levels (daily in situ and altimetric data), in terms of flood extent against inundation maps derived from available remote-sensed imagery (ALOS-1/PALSAR), and in terms of velocity. We analyzed the inter-annual variability in hydrological fluxes and inundation dynamics of the floodplain unit. Dominant sources of inflow varied seasonally: direct rain and local runoff (November to April), the Amazon River (May to August), and seepage (September to October).
An integrated system for dynamic control of auditory perspective in a multichannel sound field
NASA Astrophysics Data System (ADS)
Corey, Jason Andrew
An integrated system providing dynamic control of sound source azimuth, distance and proximity to a room boundary within a simulated acoustic space is proposed for use in multichannel music and film sound production. The system has been investigated, implemented, and psychoacoustically tested within the ITU-R BS.775 recommended five-channel (3/2) loudspeaker layout. The work brings together physical and perceptual models of room simulation to allow dynamic placement of virtual sound sources at any location of a simulated space within the horizontal plane. The control system incorporates a number of modules including simulated room modes, "fuzzy" sources, and tracking early reflections, whose parameters are dynamically changed according to sound source location within the simulated space. The control functions of the basic elements, derived from theories of perception of a source in a real room, have been carefully tuned to provide efficient, effective, and intuitive control of a sound source's perceived location. Seven formal listening tests were conducted to evaluate the effectiveness of the algorithm design choices. The tests evaluated: (1) loudness calibration of multichannel sound images; (2) the effectiveness of distance control; (3) the resolution of distance control provided by the system; (4) the effectiveness of the proposed system when compared to a commercially available multichannel room simulation system in terms of control of source distance and proximity to a room boundary; (5) the role of tracking early reflection patterns on the perception of sound source distance; (6) the role of tracking early reflection patterns on the perception of lateral phantom images. The listening tests confirm the effectiveness of the system for control of perceived sound source distance, proximity to room boundaries, and azimuth, through fine, dynamic adjustment of parameters according to source location. 
All of the parameters are grouped and controlled together to create a perceptually strong impression of source location and movement within a simulated space.
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process between observed and predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in predictions of both airflow and dispersion. It is therefore important to explore the possibility of improving the estimation of source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving adjoint equations constructed from the time-averaged flow field simulated by the LES approach, based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively.
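A minimal sketch of the Bayesian estimation step, using a hypothetical source-receptor matrix in place of the adjoint-derived one and noise-free synthetic observations (real data would add measurement noise):

```python
import numpy as np

# hypothetical source-receptor matrix: A[i, j] = concentration at sensor i
# per unit release rate at candidate location j (from adjoint solutions)
n_sens, n_loc = 8, 20
A = 1.0 / (1.0 + np.abs(np.subtract.outer(np.arange(n_sens), np.arange(n_loc))))

true_loc, true_q = 7, 3.0
obs = A[:, true_loc] * true_q            # noise-free synthetic observations

sigma = 0.01                             # assumed observation error scale
q_grid = np.linspace(0.1, 6.0, 60)       # candidate release rates
log_post = np.empty((n_loc, q_grid.size))
for j in range(n_loc):
    resid = obs[:, None] - A[:, j, None] * q_grid[None, :]
    # Gaussian likelihood with a flat prior over (location, rate)
    log_post[j] = -0.5 * (resid**2).sum(axis=0) / sigma**2

j_hat, k_hat = np.unravel_index(np.argmax(log_post), log_post.shape)
# j_hat recovers the source location, q_grid[k_hat] the release rate
```

The grid search over a small parameter space stands in for the sampling methods typically used when the posterior is higher-dimensional.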
Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms
Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary
2013-04-01
The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral Hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.
Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S
2015-03-15
The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can integrate the limited data provided by monitoring campaigns and evaluate the performance of different strategies based on model simulation results. This study presents an example in which six different control strategies, including both source control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model in combination with stormwater quality measurements. MP sources were identified from GIS land-usage data, runoff quality was simulated with a conceptual accumulation/washoff model, and a stormwater retention pond was simulated with a dynamic treatment model based on inherent MP properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies obtained better results in terms of reduction of MP emissions, but all the simulated strategies failed to fulfil the criteria based on emission limit values. The results presented in this study show how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (integrated stormwater quality model, uncertainty calibration).
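The conceptual accumulation/washoff component can be sketched as follows; the functional form is a common textbook choice and the coefficients are illustrative, not the study's calibrated values:

```python
import numpy as np

def buildup_washoff(rain, dt, accu=0.5, disp=0.1, washoff=0.2, m0=0.0):
    """Conceptual pollutant accumulation/washoff model.

    dM/dt = accu - disp * M        (dry-weather buildup toward accu/disp)
    flux  = washoff * rain * M     (wet-weather washoff)

    rain : rainfall intensity per time step; all coefficients hypothetical.
    """
    m, fluxes = m0, []
    for r in rain:
        m += dt * (accu - disp * m)        # buildup on the surface
        f = washoff * r * m                # washoff flux to runoff
        m = max(m - dt * f, 0.0)           # remove washed-off mass
        fluxes.append(f)
    return np.array(fluxes), m

# long dry spell: mass approaches the equilibrium accu/disp = 5.0
fluxes_dry, m_dry = buildup_washoff(np.zeros(1000), dt=0.1)
# steady rain: washoff keeps the surface mass below the dry equilibrium
fluxes_wet, m_wet = buildup_washoff(np.ones(100), dt=0.1)
```

Multiplying the washoff flux by an event-mean MP concentration per land use is one simple way such a model feeds MP-flux estimates.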
Laser induced heat source distribution in bio-tissues
NASA Astrophysics Data System (ADS)
Li, Xiaoxia; Fan, Shifu; Zhao, Youquan
2006-09-01
In numerical simulation of laser-tissue thermal interaction, the light fluence rate distribution must be formulated and incorporated as the source term in the heat transfer equation. Usually the solution of the light radiative transport equation is given for limiting conditions such as full absorption (Lambert-Beer law), full scattering (Kubelka-Munk theory), or dominant scattering (diffusion approximation). Under other conditions, these solutions introduce errors of varying size. The commonly used Monte Carlo simulation (MCS) is more universal and exact but has difficulty handling dynamic parameters and fast simulation, and its area-partition pattern has limits when the finite element method (FEM) is applied to solve the bio-heat transfer partial differential equation. Laser heat source plots from the above methods differ considerably from MCS. To address this problem, after analyzing the effects of different optical processes (reflection, scattering and absorption) on laser-induced heat generation in bio-tissue, a new approach was developed that combines a modified beam-broadening model with the diffusion approximation model. First, the scattering coefficient in the beam-broadening model was replaced by the reduced scattering coefficient, which is more reasonable when scattering is treated as anisotropic. Second, the attenuation coefficient was replaced by the effective attenuation coefficient in scattering-dominated turbid bio-tissue. The computed results of the modified method were compared with Monte Carlo simulation and showed that the model provides more reasonable predictions of the heat source term distribution than previous methods. Such research is useful for explaining the physical characteristics of the heat source in the heat transfer equation, establishing an effective photo-thermal model, and providing a theoretical reference for related laser medicine experiments.
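The effective-attenuation idea can be written compactly: the volumetric heat source is the absorption coefficient times the local fluence, with the fluence decaying at the effective attenuation rate from diffusion theory. This is a 1-D illustrative form, not the paper's full beam-broadening model:

```python
import numpy as np

def heat_source(z, phi0, mu_a, mu_s_reduced):
    """Volumetric heat source Q(z) = mu_a * fluence(z), with the fluence
    attenuated at mu_eff = sqrt(3 * mu_a * (mu_a + mu_s')) from diffusion
    theory (1-D sketch; phi0 is the surface fluence rate)."""
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_reduced))
    return mu_a * phi0 * np.exp(-mu_eff * z)

# illustrative tissue optical properties (per mm)
z = np.linspace(0.0, 5.0, 50)
qz = heat_source(z, phi0=1.0, mu_a=0.1, mu_s_reduced=1.0)
q0 = heat_source(0.0, 1.0, 0.1, 1.0)     # surface value = mu_a * phi0
```

A field Q(z) of this form would be inserted as the source term of the bio-heat (Pennes) equation in an FEM solver.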
Influence of Iterative Reconstruction Algorithms on PET Image Resolution
NASA Astrophysics Data System (ADS)
Karpetas, G. E.; Michail, C. M.; Fountos, G. P.; Valais, I. G.; Nikolopoulos, D.; Kandarakis, I. S.; Panayiotakis, G. S.
2015-09-01
The aim of the present study was to assess the image quality of PET scanners using a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model, developed with the GATE MC package; reconstructed images were obtained with the STIR software for tomographic image reconstruction. The simulated PET scanner was the GE DiscoveryST. A plane source consisting of a TLC plate was simulated as a layer of silica gel on an aluminum (Al) foil substrate, immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the modulation transfer function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed with the maximum likelihood estimation (MLE)-OSMAPOSL, ordered subsets separable paraboloidal surrogate (OSSPS), median root prior (MRP), and OSMAPOSL-with-quadratic-prior algorithms. OSMAPOSL reconstruction was assessed using fixed subsets and various iterations, as well as various beta (hyper)parameter values. MTF values were found to increase with increasing iterations; the MTF also improves with lower beta values. The simulated PET evaluation method, based on the TLC plane source, can be useful in the resolution assessment of PET scanners.
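Estimating the MTF from a plane (line) source reduces to Fourier-transforming the measured spread function and normalizing at zero frequency. A minimal sketch with a synthetic Gaussian line spread function; the sampling interval and width are assumed values:

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """Modulation transfer function as the normalized magnitude of the
    Fourier transform of a line spread function (LSF) sampled at dx."""
    otf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, dx)
    return freqs, otf / otf[0]           # normalize so MTF(0) = 1

# synthetic Gaussian LSF; a narrower LSF would give a higher MTF
dx = 0.5                                  # assumed 0.5 mm sampling
x = np.arange(-32, 32) * dx
lsf = np.exp(-x**2 / (2 * 2.0**2))        # sigma = 2.0 mm (illustrative)
freqs, mtf = mtf_from_lsf(lsf, dx)
```

In practice the LSF would be a profile extracted from the transverse reconstructed image of the plane source, averaged over rows to suppress noise.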
The Advanced Statistical Trajectory Regional Air Pollution (ASTRAP) model simulates long-term transport and deposition of oxides of sulfur and nitrogen. It is a potential screening tool for assessing long-term effects on regional visibility from sulfur emission sources. However, a rigorou...
On the inclusion of mass source terms in a single-relaxation-time lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Aursjø, Olav; Jettestuen, Espen; Vinningland, Jan Ludvig; Hiorth, Aksel
2018-05-01
We present a lattice Boltzmann algorithm for incorporating a mass source in a fluid flow system. The proposed mass source/sink term, included in the lattice Boltzmann equation, maintains the Galilean invariance and the accuracy of the overall method while introducing a mass source/sink term in the fluid dynamical equations. The method can, for instance, be used to inject or withdraw fluid from any preferred lattice node in a system. This means that injection and withdrawal of fluid do not have to be introduced through cumbersome, and sometimes less accurate, boundary conditions. It also means that, through a chosen equation of state relating mass density to pressure, the proposed mass source term makes it possible to set a preferred pressure at any lattice node in a system. We demonstrate how this model handles injection and withdrawal of a fluid, and we show how it can be used to incorporate pressure boundaries. The accuracy of the algorithm is established through a Chapman-Enskog expansion of the model and supported by numerical simulations.
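The idea of adding a mass source directly to the lattice Boltzmann equation can be illustrated with a minimal D2Q9 BGK sketch. The weight-distributed source below is a simplified stand-in, not the paper's Galilean-invariance-preserving construction; `bgk_step_with_mass_source` is a hypothetical helper for a single node.

```python
import numpy as np

# Standard D2Q9 lattice: weights and discrete velocities.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    """Standard D2Q9 equilibrium distributions for density rho, velocity u."""
    cu = c @ u                      # (9,) dot products c_i . u
    usq = u @ u
    return rho * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def bgk_step_with_mass_source(f, S, tau=1.0):
    """One BGK collision with a simple mass source S at one node.

    Distributes the source over directions with the lattice weights:
        f_i <- f_i - (f_i - f_i^eq)/tau + w_i * S,
    which adds S to the node's density each step (sum of w_i is 1).
    The published scheme uses a more elaborate source term to keep
    Galilean invariance; this is only a minimal illustration of the idea.
    """
    rho = f.sum()
    u = (f @ c) / rho
    feq = equilibrium(rho, u)
    return f - (f - feq) / tau + w * S

f0 = equilibrium(1.0, np.array([0.05, 0.0]))
f1 = bgk_step_with_mass_source(f0, S=0.01)
```

Since BGK collision conserves mass, the node density changes exactly by S per step, which is the injection/withdrawal behavior the abstract describes.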
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-11-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it performs better than the original design, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach: the standard k - ɛ model, the RNG k - ɛ model, the realizable k - ɛ model, and the Reynolds stress model, and the predictions were compared with those calculated with the multiple rotating reference frame (MRF) and sliding mesh (SM) approaches. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use.
On the structure of pressure fluctuations in simulated turbulent channel flow
NASA Technical Reports Server (NTRS)
Kim, John
1989-01-01
Pressure fluctuations in a turbulent channel flow are investigated by analyzing a database obtained from a direct numerical simulation. Detailed statistics associated with the pressure fluctuations are presented. Characteristics associated with the rapid (linear) and slow (nonlinear) pressure are discussed. It is found that the slow pressure fluctuations are larger than the rapid pressure fluctuations throughout the channel except very near the wall, where they are about the same magnitude. This is contrary to the common belief that the nonlinear source terms are negligible compared to the linear source terms. Probability density distributions, power spectra, and two-point correlations are examined to reveal the characteristics of the pressure fluctuations. The global dependence of the pressure fluctuations and pressure-strain correlations are also examined by evaluating the integral associated with Green's function representations of them. In the wall region where the pressure-strain terms are large, most contributions to the pressure-strain terms are from the wall region (i.e., local), whereas away from the wall where the pressure-strain terms are small, contributions are global. Structures of instantaneous pressure and pressure gradients at the wall and the corresponding vorticity field are examined.
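The rapid/slow split referenced above comes from decomposing the source of the pressure Poisson equation; for incompressible channel flow with mean velocity U(y) and fluctuations (u', v', w'), the standard decomposition (stated here generically, not quoted from the paper) reads:

```latex
\nabla^{2} p^{(r)} = -2\rho\,\frac{dU}{dy}\,\frac{\partial v'}{\partial x},
\qquad
\nabla^{2} p^{(s)} = -\rho\left(
\frac{\partial u_i'}{\partial x_j}\frac{\partial u_j'}{\partial x_i}
- \overline{\frac{\partial u_i'}{\partial x_j}\frac{\partial u_j'}{\partial x_i}}
\,\right),
\qquad
p' = p^{(r)} + p^{(s)}.
```

The rapid part responds linearly ("rapidly") to changes in the mean shear, while the slow part is quadratic in the fluctuations, which is why the abstract's finding that the slow pressure dominates is a statement about the nonlinear source terms.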
42: An Open-Source Simulation Tool for Study and Design of Spacecraft Attitude Control Systems
NASA Technical Reports Server (NTRS)
Stoneking, Eric
2018-01-01
Simulation is an important tool in the analysis and design of spacecraft attitude control systems. The speaker will discuss the simulation tool, called simply 42, that he has developed over the years to support his own work as an engineer in the Attitude Control Systems Engineering Branch at NASA Goddard Space Flight Center. 42 was intended from the outset to be high-fidelity and powerful, but also fast and easy to use. 42 has been publicly available as open source since 2014. The speaker will describe some of 42's models and features, and discuss its applicability to studies ranging from early concept studies through the design cycle, integration, and operations. He will outline 42's architecture and share some thoughts on simulation development as a long-term project.
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. Gamma dose rate measurements do not provide direct information on the source term composition. However, the physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model in which the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference in the model is intractable, we follow the variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method using unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach.
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
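The deterministic backbone of such an inverse estimate, without the variational Bayes machinery or the ratio-informed prior, can be sketched as nonnegative Tikhonov-regularized least squares. The SRS matrix and release vector below are synthetic, and `map_source_term` is an illustrative helper, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def map_source_term(M, y, lam=1e-2):
    """Nonnegative Tikhonov-regularized estimate of the source term x.

    Solves  min_x ||M x - y||^2 + lam ||x||^2  subject to x >= 0
    by augmenting the system and calling NNLS. This is only the
    deterministic core of the problem; the abstract's method instead
    infers the prior covariance and all hyperparameters by variational
    Bayes with a truncated Gaussian posterior.
    """
    n = M.shape[1]
    A = np.vstack([M, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([y, np.zeros(n)])
    x, _ = nnls(A, b)
    return x

# Toy release: 3 time segments observed through 6 gamma-dose measurements.
rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.0, size=(6, 3))   # hypothetical SRS matrix
x_true = np.array([5.0, 0.0, 2.0])
y = M @ x_true + rng.normal(0.0, 1e-3, 6)
x_hat = map_source_term(M, y, lam=1e-6)
```

The positivity constraint plays the same role here as the truncated Gaussian does in the Bayesian formulation: release rates cannot be negative.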
NASA Technical Reports Server (NTRS)
Shyy, W.; Thakur, S.; Udaykumar, H. S.
1993-01-01
A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate the longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment has been made. By comparing with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.
The Effect of Data Quality on Short-term Growth Model Projections
David Gartner
2005-01-01
This study was designed to determine the effect of FIA's data quality on short-term growth model projections. The data from Georgia's 1996 statewide survey were used for the Southern variant of the Forest Vegetation Simulator to predict Georgia's first annual panel. The effect of several data error sources on growth modeling prediction errors...
2018-01-01
Understanding Earth surface responses in terms of sediment dynamics to climatic variability and tectonic forcing is hindered by the limited ability of current models to simulate the long-term evolution of sediment transfer and associated morphological changes. This paper presents pyBadlands, an open-source python-based framework which computes over geological time (1) sediment transport from landmasses to coasts, (2) reworking of marine sediments by longshore currents, and (3) development of coral reef systems. pyBadlands is cross-platform, distributed under the GPLv3 license and available on GitHub (http://github.com/badlands-model). Here, we describe the underlying physical assumptions behind the simulated processes and the main options already available in the numerical framework. Along with the source code, a list of hands-on examples is provided that illustrates the model capabilities. In addition, pre- and post-processing classes have been built and are accessible as a companion toolbox which comprises a series of workflows to efficiently build, quantify and explore simulation input and output files. While the framework has been primarily designed for research, its simplicity of use and portability make it a great tool for teaching purposes. PMID:29649301
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2018-03-01
The aim of this work is to develop a gamma-ray/neutron dual-particle imager, based on rotational modulation collimators (RMCs) and pulse shape discrimination (PSD)-capable scintillators, for possible applications for radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for the dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation-maximization method based on the analytical modeling of source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio, showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators.
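The maximum-likelihood expectation-maximization reconstruction mentioned above follows the classic multiplicative MLEM update. The sketch below is generic, with a toy system matrix in place of the authors' analytical detector/collimator model.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM reconstruction for counts y ~ Poisson(A x).

    Multiplicative update:  x <- x / (A^T 1) * A^T (y / (A x)).
    A is the system matrix; in the abstract's setting it would be built
    from the analytical model of the source-detector configurations.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)               # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                   # forward projection
        proj[proj == 0] = 1e-12        # guard against division by zero
        x *= (A.T @ (y / proj)) / sens # multiplicative correction
    return x

# Toy example: noiseless data from a known 2-pixel "image".
A = np.array([[1.0, 0.2],
              [0.3, 1.0],
              [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
y = A @ x_true
x_hat = mlem(A, y, n_iter=500)
```

The update preserves nonnegativity automatically, which is one reason MLEM-family algorithms are standard for emission tomography and modulation-collimator imaging alike.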
NASA Astrophysics Data System (ADS)
Ba, Yan; Liu, Haihu; Li, Qing; Kang, Qinjun; Sun, Jinju
2016-08-01
In this paper we propose a color-gradient lattice Boltzmann (LB) model for simulating two-phase flows with high density ratio and high Reynolds number. The model applies a multirelaxation-time (MRT) collision operator to enhance the stability of the simulation. A source term, which is derived by the Chapman-Enskog analysis, is added into the MRT LB equation so that the Navier-Stokes equations can be exactly recovered. Also, a form of the equilibrium density distribution function is used to simplify the source term. To validate the proposed model, steady flows of a static droplet and the layered channel flow are first simulated with density ratios up to 1000. Small values of spurious velocities and interfacial tension errors are found in the static droplet test, and improved profiles of velocity are obtained by the present model in simulating channel flows. Then, two cases of unsteady flows, Rayleigh-Taylor instability and droplet splashing on a thin film, are simulated. In the former case, the density ratio of 3 and Reynolds numbers of 256 and 2048 are considered. The interface shapes and spike and bubble positions are in good agreement with the results of previous studies. In the latter case, the droplet spreading radius is found to obey the power law proposed in previous studies for the density ratio of 100 and Reynolds number up to 500.
NASA Astrophysics Data System (ADS)
Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.
2014-06-01
Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Dai-ichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and resultant radiological doses to the public. In this paper, we estimate a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with atmospheric model simulations from WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and simulations from the oceanic dispersion model SEA-GEARN-FDM, both developed by the authors. A sophisticated deposition scheme, which deals with dry and fogwater depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The fallout to the ocean surface calculated by WSPEEDI-II was used as input data for the SEA-GEARN-FDM calculations. Reverse and inverse source-term estimation methods based on coupling the simulations from both models were adopted using air dose rates and concentrations, and sea surface concentrations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March due to the wet venting and hydrogen explosion at Unit 1, the morning of 13 March after the venting event at Unit 3, midnight of 14 March when the SRV (Safety Relief Valve) at Unit 2 was opened three times, the morning and night of 15 March, and the morning of 16 March.
According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates associated with reactor pressure changes in Units 2 and 3. The modified WSPEEDI-II simulation using the new source term reproduced local and regional patterns of cumulative surface deposition of total 131I and 137Cs and the air dose rates obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (MLDP0, HYSPLIT, and NAME) for regional and global calculations, showing good agreement between calculated and observed air concentrations and surface deposition of 137Cs in East Japan. Moreover, the HYSPLIT model using the new source term reproduced the plume arrivals in several countries abroad, with a good correlation with measured air concentration data. A large part of the deposition pattern of total 131I and 137Cs in East Japan was explained by in-cloud particulate scavenging. However, for the regional-scale contaminated areas, there were large uncertainties due to the overestimation of rainfall amounts and the underestimation of fogwater and drizzle depositions. The computations showed that approximately 27% of the 137Cs discharged from FNPS1 was deposited on land in East Japan, mostly in forest areas.
ERIC Educational Resources Information Center
HENSHAW, NANCY WANDALIE
THIS SOURCE BOOK TRANSLATES THE ELEGANT AND SOMEWHAT ALIEN WORLD OF RESTORATION COMEDY INTO TERMS THAT CAN ENABLE AMERICAN DIRECTORS AND ACTORS--BY EMPLOYING THE ACTING "METHOD" OF CONTEMPORARY PSYCHOLOGICAL REALISM--TO SIMULATE THE EXPERIENCE, PERCEPTION, AND EXPRESSION OF THE 17TH-CENTURY ENGLISH ARISTOCRAT. TO ENCOURAGE DIRECTORS TO IMMERSE…
Multi-decadal Dynamics of Mercury in a Complex Ecosystem
NASA Astrophysics Data System (ADS)
Levin, L.
2016-12-01
A suite of air quality and watershed models was applied to track the ecosystem contributions of mercury (Hg), arsenic (As), and selenium (Se) from local and global sources to the San Juan River basin in the Four Corners region of the American Southwest. Long-term changes in surface water and fish tissue mercury concentrations were also simulated, out to the year 2074. Atmospheric mercury was modeled using a nested, spatial-scale modeling system comprising the GEOS-Chem (global scale) and CMAQ-APT (national and regional) models. Four emission scenarios were modeled, including two growth scenarios for Asian mercury emissions. Results showed that the average mercury deposition over the San Juan basin was 21 µg/m2-y. Source contributions to mercury deposition ranged from 2% to 9% of total deposition prior to post-2016 U.S. controls for air toxics regulatory compliance. Most of the contributions to mercury deposition in the basin were from non-U.S. sources. Watershed simulations showed that power plant contributions to fish tissue mercury never exceeded 0.035% during the 85-year model simulation period, even with the long-term growth in fish tissue mercury over that period. Local coal-fired power plants contributed relatively small fractions to mercury deposition (less than 4%) in the basin; background and non-U.S. anthropogenic sources dominated. Fish-tissue mercury levels are projected to increase through 2074 due to growth projections for non-U.S. emission sources. The most important contributor to methylmercury in the lower reaches of the watershed was advection of MeHg produced in situ at upstream headwater locations.
Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars
NASA Astrophysics Data System (ADS)
Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan
2013-04-01
The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle (MTP) model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which has its own (albeit overlapping) source region. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.
Possible Dual Earthquake-Landslide Source of the 13 November 2016 Kaikoura, New Zealand Tsunami
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Satake, Kenji
2017-10-01
An earthquake with a complicated rupture mechanism (Mw 7.8) occurred off the NE coast of South Island, New Zealand, on 13 November 2016 (UTC) in a complex tectonic setting comprising a transition strike-slip zone between two subduction zones. The earthquake generated a moderate tsunami with a zero-to-crest amplitude of 257 cm at the near-field tide gauge station of Kaikoura. Spectral analysis of the tsunami observations showed dual peaks at 3.6-5.7 and 5.7-56 min, which we attribute to the potential landslide and earthquake sources of the tsunami, respectively. Tsunami simulations showed that a source model with slip on an offshore plate-interface fault reproduces the near-field tsunami observation in terms of amplitude, but fails in terms of tsunami period. On the other hand, a source model without offshore slip fails to reproduce the first peak, but the later phases are reproduced well in terms of both amplitude and period. It can be inferred that an offshore source must be involved, but it needs to be smaller in size than the plate-interface slip, which most likely points to a confined submarine landslide source, consistent with the dual-peak tsunami spectrum. We estimated the dimension of the potential submarine landslide at 8-10 km.
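The kind of spectral analysis used to separate short-period (landslide-like) and long-period (earthquake-like) tsunami components can be sketched on a synthetic record; the periods and amplitudes below are illustrative, not the Kaikoura data.

```python
import numpy as np
from scipy.signal import welch

# Synthetic "tide-gauge" record sampled once per minute over 6 hours:
# a long-period (earthquake-like, 20 min) component plus a short-period
# (landslide-like, 4 min) component.
fs = 1 / 60.0                           # sampling rate in Hz (1/min)
t = np.arange(0, 6 * 3600, 60.0)        # 6-hour window
eta = (0.8 * np.sin(2 * np.pi * t / (20 * 60))
       + 0.5 * np.sin(2 * np.pi * t / (4 * 60)))

# Welch power spectrum; nperseg chosen so both test periods fall on bins.
f, pxx = welch(eta, fs=fs, nperseg=120)
periods_min = 1.0 / f[1:] / 60.0        # skip DC, convert to period (min)
peaks = periods_min[np.argsort(pxx[1:])[-2:]]   # two strongest periods
```

The two strongest spectral peaks recover the 4-minute and 20-minute components, mirroring how the dual peaks in the observed spectrum were attributed to two distinct source processes.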
Part 1 of a Computational Study of a Drop-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Okong'o, Nora A.; Bellan, Josette
2004-01-01
This first of three reports on a computational study of a drop-laden temporal mixing layer presents the results of direct numerical simulations (DNS) of well-resolved flow fields and the derivation of the large-eddy simulation (LES) equations that would govern the larger scales of a turbulent flow field. The mixing layer consisted of two counterflowing gas streams, one of which was initially laden with evaporating liquid drops. The gas phase was composed of two perfect gas species, the carrier gas and the vapor emanating from the drops, and was computed in an Eulerian reference frame, whereas each drop was tracked individually in a Lagrangian manner. The flow perturbations that were initially imposed on the layer caused mixing and eventual transition to turbulence. The DNS database obtained included transitional states for layers with various liquid mass loadings. For the DNS, the gas-phase equations were the compressible Navier-Stokes equations for conservation of momentum and additional conservation equations for total energy and species mass. These equations included source terms representing the effect of the drops on the mass, momentum, and energy of the gas phase. From the DNS equations, the expression for the irreversible entropy production (dissipation) was derived and used to determine the dissipation due to the source terms. The LES equations were derived by spatially filtering the DNS set and the magnitudes of the terms were computed at transitional states, leading to a hierarchy of terms to guide simplification of the LES equations. It was concluded that effort should be devoted to the accurate modeling of both the subgridscale fluxes and the filtered source terms, which were the dominant unclosed terms appearing in the LES equations.
Flow of GE90 Turbofan Engine Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1999-01-01
The objective of this task was to create and validate a three-dimensional model of the GE90 turbofan engine (General Electric) using the APNASA (average passage) flow code. This was a joint effort between GE Aircraft Engines and the NASA Lewis Research Center. The goal was to perform an aerodynamic analysis of the engine primary flow path, in under 24 hours of CPU time, on a parallel distributed workstation system. Enhancements were made to the APNASA Navier-Stokes code to make it faster and more robust and to allow for the analysis of more arbitrary geometry. The resulting simulation exploited the use of parallel computations by using two levels of parallelism, with extremely high efficiency. The primary flow path of the GE90 turbofan consists of a nacelle and inlet, 49 blade rows of turbomachinery, and an exhaust nozzle. Secondary flows entering and exiting the primary flow path (such as bleed, purge, and cooling flows) were modeled macroscopically as source terms to accurately simulate the engine. The information on these source terms came from detailed descriptions of the cooling flow and from thermodynamic cycle system simulations. These provided boundary condition data to the three-dimensional analysis. A simplified combustor was used to feed boundary conditions to the turbomachinery. Flow simulations of the fan, high-pressure compressor, and high- and low-pressure turbines were completed with the APNASA code.
NASA Astrophysics Data System (ADS)
Poupardin, A.; Heinrich, P.; Hébert, H.; Schindelé, F.; Jamelot, A.; Reymond, D.; Sugioka, H.
2018-05-01
This paper evaluates the importance of frequency dispersion in the propagation of recent trans-Pacific tsunamis. Frequency dispersion induces a time delay for the most energetic waves, which increases for long propagation distances and short source dimensions. To calculate this time delay, propagation of tsunamis is simulated and analyzed from spectrograms of time-series at specific gauges in the Pacific Ocean. One- and two-dimensional simulations are performed by solving either shallow water or Boussinesq equations and by considering realistic seismic sources. One-dimensional sensitivity tests are first performed in a constant-depth channel to study the influence of the source width. Two-dimensional tests are then performed in a simulated Pacific Ocean with a 4000-m constant depth and by considering tectonic sources of 2010 and 2015 Chilean earthquakes. For these sources, both the azimuth and the distance play a major role in the frequency dispersion of tsunamis. Finally, simulations are performed considering the real bathymetry of the Pacific Ocean. Multiple reflections, refractions as well as shoaling of waves result in much more complex time series for which the effects of the frequency dispersion are hardly discernible. The main point of this study is to evaluate frequency dispersion in terms of traveltime delays by calculating spectrograms for a time window of 6 hours after the arrival of the first wave. Results of the spectral analysis show that the wave packets recorded by pressure and tide sensors in the Pacific Ocean seem to be better reproduced by the Boussinesq model than the shallow water model and approximately follow the theoretical dispersion relationship linking wave arrival times and frequencies. Additionally, a traveltime delay is determined above which effects of frequency dispersion are considered to be significant in terms of maximum surface elevations.
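The traveltime delay caused by frequency dispersion can be estimated directly from the linear dispersion relation ω² = gk tanh(kh): shorter-period waves travel at a group velocity below the long-wave speed √(gh), so they arrive late. A sketch for a 4000-m constant-depth ocean, as in the paper's idealized tests (the 10-minute period and 10,000-km distance are illustrative choices):

```python
import numpy as np
from scipy.optimize import brentq

g, h = 9.81, 4000.0                    # gravity (m/s^2), depth (m)

def wavenumber(omega):
    """Solve the linear dispersion relation omega^2 = g k tanh(k h)."""
    return brentq(lambda k: g * k * np.tanh(k * h) - omega**2, 1e-8, 1.0)

def group_velocity(T):
    """Group velocity c_g = d(omega)/dk for wave period T in seconds."""
    omega = 2 * np.pi / T
    k = wavenumber(omega)
    kh = k * h
    c = omega / k                      # phase speed
    return 0.5 * c * (1 + 2 * kh / np.sinh(2 * kh))

L = 10e6                               # 10,000 km propagation distance
c0 = np.sqrt(g * h)                    # non-dispersive long-wave speed
# Arrival delay of 10-minute waves relative to shallow-water theory:
delay_min = (L / group_velocity(600.0) - L / c0) / 60.0
```

For these parameters the delay is on the order of tens of minutes over a trans-Pacific distance, which is why Boussinesq-type models, unlike shallow-water models, reproduce the observed frequency-dependent arrival times.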
NASA Astrophysics Data System (ADS)
Malviya, Devesh; Borage, Mangesh Balkrishna; Tiwari, Sunil
2017-12-01
This paper investigates the application of Resonant Immittance Converters (RICs) as a current source for the current-fed symmetrical Capacitor-Diode Voltage Multiplier (CDVM), with the LCL-T Resonant Converter (RC) as an example. First, a detailed characterization of the current-fed symmetrical CDVM is carried out using repeated simulations, followed by normalization of the simulation results to derive closed-form curve-fit equations that predict the operating modes, output voltage, and ripple in terms of the operating parameters. RICs, owing to their ability to convert a voltage source into a current source, are a natural candidate for realizing the current source for the current-fed symmetrical CDVM. Detailed analysis, optimization, and design of the LCL-T RC with CDVM are performed in this paper. A step-by-step design procedure for the CDVM and the converter is proposed. A 5-stage prototype symmetrical CDVM driven by an LCL-T RC to produce a 2.5 kV, 50 mA dc output was designed, built, and tested to validate the findings of the analysis and simulation.
Ghannam, K; El-Fadel, M
2013-02-01
This paper examines the relative source contributions to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 µm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (CALPUFF) Dispersion Modeling system to simulate individual source contributions at several spatial and temporal scales. Because the contribution of a particular source to ground-level concentrations can be evaluated by simulating either that source's emissions alone or total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries), which extend over large areas, dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially since the incremental improvement in air quality associated with this common abatement strategy may not deliver the desired benefit in terms of lower exposure, despite costly emissions capping.
The application of atmospheric dispersion models for source apportionment helps identify the major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extended over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, the abatement strategy that invariably targets industries at significant investment in control equipment or process change, may yield a minimal return on investment in terms of improved air quality at sensitive receptors.
On simulation of local fluxes in molecular junctions
NASA Astrophysics Data System (ADS)
Cabra, Gabriel; Jensen, Anders; Galperin, Michael
2018-05-01
We present a pedagogical review of current density simulation in molecular junction models, indicating its advantages and deficiencies in the analysis of local junction transport characteristics. In particular, we argue that the current density is a universal tool which provides more information than traditionally simulated bond currents, especially when discussing inelastic processes. However, current density simulations are sensitive to the choice of basis and electronic structure method. We note that when discussing local current conservation in junctions, one has to account for the source term caused by the open character of the system and by intra-molecular interactions. Our considerations are illustrated with numerical simulations of a benzenedithiol molecular junction.
Numerical simulations of LNG vapor dispersion in Brayton Fire Training Field tests with ANSYS CFX.
Qi, Ruifeng; Ng, Dedy; Cormier, Benjamin R; Mannan, M Sam
2010-11-15
Federal safety regulations require the use of validated consequence models to determine the vapor cloud dispersion exclusion zones for accidental liquefied natural gas (LNG) releases. One tool being developed in industry for exclusion zone determination and LNG vapor dispersion modeling is computational fluid dynamics (CFD). This paper uses the ANSYS CFX CFD code to model LNG vapor dispersion in the atmosphere. Important parameters that are essential inputs to the ANSYS CFX simulations are discussed, including the atmospheric conditions, LNG evaporation rate and pool area, turbulence in the source term, ground surface temperature and roughness height, and effects of obstacles. A sensitivity analysis was conducted to illustrate uncertainties in the simulation results arising from the mesh size and source term turbulence intensity. In addition, a set of medium-scale LNG spill tests was performed at the Brayton Fire Training Field to collect data for validating the ANSYS CFX predictions. A comparison of the test data with the simulation results demonstrated that CFX was able to describe the dense gas behavior of the LNG vapor cloud, and its predictions of downwind gas concentrations close to ground level were in approximate agreement with the test data. Copyright © 2010 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kwon, Hyeokjun; Kang, Yoojin; Jang, Junwoo
2017-09-01
Color fidelity has been used as one of the indices for evaluating the performance of light sources. Since the Color Rendering Index (CRI) was proposed by the CIE, many color fidelity metrics have been proposed to increase the accuracy of the metric. This paper focuses on comparing color fidelity metrics in terms of their agreement with human visual assessments. To visually evaluate the color fidelity of light sources, we built a simulator that reproduces color samples under different lighting conditions. In this paper, eighteen color samples of the Macbeth color checker under the test light sources, and under a reference illuminant for each of them, are simulated and displayed on a well-characterized monitor. With only the spectra of the test light source and reference illuminant, color samples under any lighting condition can be reproduced. The spectra of two LED and two OLED light sources that have similar CRI values are used for the visual assessment. In addition, the results of the visual assessment are compared with two color fidelity metrics: CRI and IES TM-30-15 (Rf), proposed by the Illuminating Engineering Society (IES) in 2015. Experimental results indicate that Rf outperforms CRI in terms of correlation with visual assessment.
Hybrid Energy System Design of Micro Hydro-PV-biogas Based Micro-grid
NASA Astrophysics Data System (ADS)
Nishrina; Abdullah, A. G.; Risdiyanto, A.; Nandiyanto, ABD
2017-03-01
A hybrid renewable energy system is an arrangement of one or more renewable energy sources, possibly combined with conventional energy sources. This paper describes simulation results for a hybrid renewable power system based on the potential available at an educational institution in Indonesia. The HOMER software was used to simulate and analyse the system in terms of both optimization and economics. This software is built around three main functions: simulation, optimization, and sensitivity analysis. Overall, the results show that the software can identify a feasible hybrid power system suitable for implementation. The entire demand in the case study area can be supplied by the proposed system configuration and is met by three-quarters of the electricity production, leaving one-quarter of the generated energy as excess electricity.
Analysis and Modeling of Parallel Photovoltaic Systems under Partial Shading Conditions
NASA Astrophysics Data System (ADS)
Buddala, Santhoshi Snigdha
Since the industrial revolution, fossil fuels such as petroleum, coal, oil, and natural gas, along with other non-renewable sources, have been used as the primary energy source. The consumption of fossil fuels releases various harmful gases into the atmosphere as byproducts, which are hazardous in nature; they tend to deplete the protective layers of the atmosphere and affect the overall environmental balance. Moreover, fossil fuels are finite resources, and their rapid depletion has prompted the need to investigate alternative, renewable sources of energy. One such promising source is solar/photovoltaic energy. This work focuses on investigating a new solar array architecture with solar cells connected in a parallel configuration. While retaining the structural simplicity of the parallel architecture, a theoretical small-signal model of the solar cell is proposed and used to analyze the variations in module parameters under partial shading conditions. Simulations were run in SPICE to validate the model implemented in Matlab. The voltage limitations of the proposed architecture are addressed by adopting a simple dc-dc boost converter, and the performance of the architecture is evaluated in terms of efficiency by comparing it with traditional architectures. SPICE simulations are used to compare the architectures and identify the best one in terms of power conversion efficiency under partial shading conditions.
Hamid, Laith; Al Farawn, Ali; Merlet, Isabelle; Japaridze, Natia; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Wendling, Fabrice; Siniatchkin, Michael
2017-07-01
The clinical routine of non-invasive electroencephalography (EEG) is usually performed with 8-40 electrodes, especially in long-term monitoring, in infants, or in emergency care. There is a need in clinical and scientific brain imaging for inverse solution methods that can reconstruct brain sources from such low-density EEG recordings. In this proof-of-principle paper we investigate the performance of the spatiotemporal Kalman filter (STKF) in EEG source reconstruction with 9, 19, and 32 electrodes. We used simulated EEG data of epileptic spikes generated from lateral frontal and lateral temporal brain sources using state-of-the-art neuronal population models. For validation of the source reconstruction, we compared STKF results to the location of the simulated source and to the results of the standard low-resolution brain electromagnetic tomography (LORETA) inverse solution. STKF consistently showed less localization bias than LORETA, especially as the number of electrodes decreased. The results encourage further research into the application of the STKF to source reconstruction of brain activity from low-density EEG recordings.
NASA Technical Reports Server (NTRS)
Heffley, R. K.; Jewell, W. F.; Whitbeck, R. F.; Schulman, T. M.
1980-01-01
The effects of spurious delays in real time digital computing systems are examined. Various sources of spurious delays are defined and analyzed using an extant simulator system as an example. A specific analysis procedure is set forth and four cases are viewed in terms of their time and frequency domain characteristics. Numerical solutions are obtained for three single rate one- and two-computer examples, and the analysis problem is formulated for a two-rate, two-computer example.
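As a toy illustration of the frequency-domain characterization mentioned above, a pure transport delay passes all magnitudes unchanged but adds a phase lag proportional to frequency. The numbers below (a 25 ms delay, a few low frequencies) are hypothetical, not taken from the report:

```python
import numpy as np

def delay_frequency_response(T, freqs_hz):
    """Frequency response H(jw) = exp(-j*w*T) of a pure time delay of T
    seconds: unit gain at every frequency, phase lag of -360*f*T degrees."""
    w = 2 * np.pi * np.asarray(freqs_hz, dtype=float)
    return np.exp(-1j * w * T)

T = 0.025                                   # hypothetical 25 ms spurious delay
f = np.array([0.5, 1.0, 2.0, 4.0])          # Hz
H = delay_frequency_response(T, f)

gain_db   = 20 * np.log10(np.abs(H))        # all zeros: a delay never attenuates
phase_deg = np.degrees(np.angle(H))         # -4.5, -9, -18, -36 degrees
```

The linearly growing phase lag is why spurious delays in a real-time simulator degrade fidelity most at the higher frequencies of the band of interest, even though the signal amplitude is untouched.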
NASA Astrophysics Data System (ADS)
Demirkanli, I.; Molz, F. J.; Kaplan, D. I.; Fjeld, R. A.; Serkiz, S. M.
2006-05-01
An improved understanding of flow and radionuclide transport in vadose zone sediments is fundamental to all types of future planning involving radioactive materials. One way to obtain such understanding is to perform long-term experimental studies of Pu transport in complex natural systems. With this in mind, a series of field experiments was initiated at the Savannah River National Laboratory (SRNL) in the early 1980s. Lysimeters containing sources of different Pu oxidation states were placed in the shallow subsurface and left open to the natural environment for 2 to 11 years. At the end of the experiments, Pu activities were measured along vertical cores obtained from the lysimeters. The Pu distributions were anomalous, with transport from oxidized Pu sources being less than expected and a small fraction of the Pu from reduced sources moving farther than expected. Laboratory studies with lysimeter sediments suggested that surface-mediated oxidation/reduction (redox) reactions could be responsible for the anomalous behavior, and this hypothesis is tested by performing both steady-state and transient Pu transport simulations that include retardation along with first-order redox reactions on mineral surfaces. Based on the simulations, we conclude that the surface-mediated redox hypothesis is consistent with the observed downward Pu activity profiles in the experiments, and such profiles are captured well by a steady-state, net-downward flow model. (Discussion is presented as to why a steady model appears to work in a highly transient flow environment.) The redox model explains how Pu(V/VI) sources release activity that moves downward more slowly than expected based on adsorptive retardation alone, and how Pu(III/IV) sources yield a small fraction of activity that moves downward more rapidly than expected. The calibrated parameter values were robust and relatively well-defined throughout all four sets of simulations.
Pu(V/VI) (i.e., oxidized Pu) retardation factors were about 15, and reduced Pu(III/IV) retardation factors were about 10,000. For these values, ko (the first-order oxidation rate) averaged 2.4x10^-7/hr with a standard deviation of 1.6x10^-7, and kr (the reduction rate) was 7.1x10^-4/hr with a standard deviation of 1.6x10^-4. Preliminary transient flow simulations showed a very slight increase in the fitted reaction rate constants, but otherwise reproduced the steady-state results. To date, neither approach is able to simulate the observed Pu movement above the source.
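A heavily simplified sketch of this kind of model: one-dimensional downward advection of two Pu pools using the reported retardation factors and first-order redox rates. The pore velocity, grid, and boundary treatment are assumptions for illustration, and the way the reaction terms are divided by the retardation factors is schematic, not the authors' actual formulation:

```python
import numpy as np

R_ox, R_red = 15.0, 1.0e4   # reported retardation factors, Pu(V/VI) / Pu(III/IV)
k_r, k_o = 7.1e-4, 2.4e-7   # reported reduction / oxidation rates, 1/hr
v = 1.0e-3                  # assumed net downward pore velocity, m/hr

nx, dx, dt, nsteps = 200, 0.005, 1.0, 10_000    # 1 m column, 1 hr time steps
c_ox, c_red = np.zeros(nx), np.zeros(nx)
c_ox[0] = 1.0               # oxidized-Pu source held at the column top

for _ in range(nsteps):
    # upwind advection, slowed by each pool's retardation factor
    adv_ox  = -(v / R_ox)  * np.diff(c_ox,  prepend=c_ox[0])  / dx
    adv_red = -(v / R_red) * np.diff(c_red, prepend=c_red[0]) / dx
    xfer = k_r * c_ox - k_o * c_red         # net reduction of the oxidized pool
    c_ox  += dt * (adv_ox  - xfer / R_ox)
    c_red += dt * (adv_red + xfer / R_red)
    c_ox[0] = 1.0
```

Even in this crude sketch the oxidized pool advances down the column at roughly v/R_ox while the reduced pool, with its ~10,000-fold retardation, stays pinned near the source, qualitatively consistent with the slower-than-expected downward movement the redox model explains.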
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential for obtaining the structural characteristics of such objects. To overcome the problem that rigid structures appear to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is derived. Using a morphological filter, the differing characteristics of auto-terms and cross-terms observed in the Wigner-Ville Distribution (WVD) can be exploited to remove cross-term interference. By selecting the time-frequency points of the auto-terms of the signal, the accuracy of BSS can be improved. A simulation experiment was performed, varying the pulse width of the transmitted signal, the relative amplitude, and the time delay parameter, in order to analyze the feasibility of the new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic and rigid scattering are present at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
MEqTrees Telescope and Radio-sky Simulations and CPU Benchmarking
NASA Astrophysics Data System (ADS)
Shanmugha Sundaram, G. A.
2009-09-01
MEqTrees is a Python-based implementation of the classical Measurement Equation, wherein the various 2×2 Jones matrices are parametrized representations in the spatial and sky domains for any generic radio telescope. Customized simulations of radio-source sky models and corrupt Jones terms are demonstrated based on a policy framework, with performance estimates derived for array configurations, ``dirty''-map residuals and processing power requirements for such computations on conventional platforms.
Analysis and Synthesis of Tonal Aircraft Noise Sources
NASA Technical Reports Server (NTRS)
Allen, Matthew P.; Rizzi, Stephen A.; Burdisso, Ricardo; Okcu, Selen
2012-01-01
Fixed and rotary wing aircraft operations can have a significant impact on communities in proximity to airports. Simulation of predicted aircraft flyover noise, paired with listening tests, is useful to noise reduction efforts since it allows direct annoyance evaluation of aircraft or operations currently in the design phase. This paper describes efforts to improve the realism of synthesized source noise by including short term fluctuations, specifically for inlet-radiated tones resulting from the fan stage of turbomachinery. It details analysis performed on an existing set of recorded turbofan data to isolate inlet-radiated tonal fan noise, then extract and model short term tonal fluctuations using the analytic signal. Methodologies for synthesizing time-variant tonal and broadband turbofan noise sources using measured fluctuations are also described. Finally, subjective listening test results are discussed which indicate that time-variant synthesized source noise is perceived to be very similar to recordings.
NASA Astrophysics Data System (ADS)
Wang, H.; Zhang, R.; Yang, Y.; Smith, S.; Rasch, P. J.
2017-12-01
The Arctic has warmed dramatically in recent decades. As important short-lived climate forcers, aerosols affect the Arctic radiative budget directly by interacting with radiation and indirectly by modifying clouds, and light-absorbing particles (e.g., black carbon) deposited in snow/ice can reduce the surface albedo. The direct radiative impact of aerosols on the Arctic climate can be either warming or cooling, depending on their composition and location, which can further alter the poleward heat transport. Anthropogenic emissions, especially of BC and SO2, have changed drastically in low- and mid-latitude source regions over the past few decades, and Arctic surface observations at some locations show a decreasing trend in BC and sulfate aerosols. In order to understand the impact of long-term emission changes on aerosols and their radiative effects, we use the Community Earth System Model (CESM), equipped with an explicit BC and sulfur source-tagging technique, to quantify the source-receptor relationships and decadal trends of Arctic sulfate and BC and to identify variations in their atmospheric transport pathways from lower latitudes. The simulation was conducted for 36 years (1979-2014) with prescribed sea surface temperatures and sea ice concentrations. To minimize potential biases in modeled large-scale circulations, wind fields in the simulation are nudged toward an atmospheric reanalysis dataset, while atmospheric constituents including water vapor, clouds, and aerosols are allowed to evolve according to the model physics. Both anthropogenic and open-fire emissions came from the newly released CMIP6 datasets, which show strong regional trends in BC and SO2 emissions during the simulation period. Results show that emissions from East Asia and South Asia together have the largest contributions to Arctic sulfate and BC concentrations in the upper troposphere, which exhibit an increasing trend.
The strong decrease in emissions from Europe, Russia, and North America contributed significantly to the overall decreasing trend in Arctic BC and sulfate, especially in the lower troposphere. The long-term changes in the spatial distributions of aerosols, their radiative impacts and source attributions, along with implications for the Arctic warming trend, will be discussed.
On recontamination and directional-bias problems in Monte Carlo simulation of PDF turbulence models
NASA Technical Reports Server (NTRS)
Hsu, Andrew T.
1991-01-01
Turbulent combustion cannot be simulated adequately by conventional moment-closure turbulence models. The difficulty lies in the fact that the reaction rate is in general an exponential function of the temperature, and the higher-order correlations in conventional moment-closure models of the chemical source term cannot be neglected, making the application of such models impractical. The probability density function (pdf) method offers an attractive alternative: in a pdf model, the chemical source terms are closed and do not require additional models. A grid-dependent Monte Carlo scheme was studied, since it is a logical alternative wherein the number of computer operations increases only linearly with the number of independent variables, as compared to the exponential increase in a conventional finite-difference scheme. A new algorithm was devised that satisfies a conservation restriction in the case of pure diffusion or uniform flow problems. Although for nonuniform flows absolute conservation appears impossible, the present scheme reduces the error considerably.
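The particle-ensemble principle behind pdf methods can be caricatured with the simplest possible Monte Carlo model: notional particles undergoing a random walk whose statistics solve a pure-diffusion equation. This is only a sketch of the general idea, with arbitrary constants, not the grid-dependent scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Particle ensemble representing the pdf of a scalar position under pure
# diffusion: dX = sqrt(2 D dt) dW, so X(t) ~ N(0, 2 D t) given X(0) = 0.
D, dt, nsteps, nparts = 0.5, 2e-3, 500, 50_000
x = np.zeros(nparts)
for _ in range(nsteps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(nparts)

t = nsteps * dt                      # = 1.0
# Ensemble statistics converge to the exact pdf moments as nparts grows
mean_err = abs(x.mean())             # exact mean is 0
var_err  = abs(x.var() - 2 * D * t)  # exact variance is 2*D*t = 1.0
```

Adding an independent variable here means adding one more coordinate per particle, so the work grows linearly, in contrast to a finite-difference discretization of the joint pdf, whose cost grows exponentially with dimension. That is the scaling argument made in the abstract.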
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzidakis, Stylianos; Greulich, Christopher
A cosmic ray Muon Flexible Framework for Spectral GENeration for Monte Carlo Applications (MUFFSgenMC) has been developed to support state-of-the-art cosmic ray muon tomographic applications. The flexible framework allows for easy and fast creation of source terms for popular Monte Carlo applications like GEANT4 and MCNP. This code framework simplifies the process of simulations used for cosmic ray muon tomography.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ba, Yan; Liu, Haihu; Li, Qing
2016-08-15
In this paper, we propose a color-gradient lattice Boltzmann (LB) model for simulating two-phase flows with high density ratio and high Reynolds number. The model applies a multi-relaxation-time (MRT) collision operator to enhance the stability of the simulation. A source term, which is derived by the Chapman-Enskog analysis, is added into the MRT LB equation so that the Navier-Stokes equations can be exactly recovered. Also, a new form of the equilibrium density distribution function is used to simplify the source term. To validate the proposed model, steady flows of a static droplet and the layered channel flow are first simulated with density ratios up to 1000. Small values of spurious velocities and interfacial tension errors are found in the static droplet test, and improved velocity profiles are obtained by the present model in simulating channel flows. Then, two cases of unsteady flows, Rayleigh-Taylor instability and droplet splashing on a thin film, are simulated. In the former case, a density ratio of 3 and Reynolds numbers of 256 and 2048 are considered. The interface shapes and spike/bubble positions are in good agreement with the results of previous studies. In the latter case, the droplet spreading radius is found to obey the power law proposed in previous studies for a density ratio of 100 and Reynolds numbers up to 500.
Toward real-time regional earthquake simulation of Taiwan earthquakes
NASA Astrophysics Data System (ADS)
Lee, S.; Liu, Q.; Tromp, J.; Komatitsch, D.; Liang, W.; Huang, B.
2013-12-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters including the event origin time, hypocentral location, moment magnitude and focal mechanism within 2 minutes after the occurrence of an earthquake. Then, all of the source parameters are automatically forwarded to the ROS to perform an earthquake simulation, which is based on a spectral-element method (SEM). We have improved SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulation by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and ShakeMap are produced during the simulation. The time needed for one event is roughly 3 minutes for a 70 sec ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real-time.
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
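The lumping step described above can be sketched as conservative block averaging: integrate the detailed hole-exit fluxes over each coarse-cell footprint, then convert to a per-volume source. Everything below (flux field, cell sizes, block factor, coarse volume) is hypothetical, not the paper's actual grids:

```python
import numpy as np

def coarse_source_terms(fine_flux, fine_dx, fine_dy, block, coarse_volume):
    """Lump a detailed hole-exit flux field (per unit area) into volumetric
    source terms on a coarse grid by conservative block averaging."""
    ny, nx = fine_flux.shape
    by, bx = block                                   # fine cells per coarse cell
    integ = (fine_flux.reshape(ny // by, by, nx // bx, bx)
             .sum(axis=(1, 3)) * fine_dx * fine_dy)  # integrated flux per block
    return integ / coarse_volume                     # per-volume source term

# Hypothetical detailed mass-flux footprint of one film-cooling hole exit
fine = np.zeros((12, 12))
fine[4:8, 4:8] = 2.0                                 # kg/(m^2 s) over the hole
dx = dy = 1e-4                                       # fine cell size, m
vol = 3.6e-10                                        # coarse cell volume, m^3
src = coarse_source_terms(fine, dx, dy, (6, 6), vol)

# Conservation: total injected mass flow is preserved by construction
total_fine   = fine.sum() * dx * dy
total_coarse = src.sum() * vol
```

The same averaging applies to momentum, energy, and turbulence quantities; the paper's additional finding is that distributing the source over a wall-normal distance of order the hole diameter, rather than only in near-wall cells, gives better adiabatic effectiveness predictions.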
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
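The core of the LDS idea, forming a deterministic stress as the time average of periodic fluctuation products at each cell, can be sketched with synthetic velocity histories (all numbers below are hypothetical):

```python
import numpy as np

# Hypothetical unsteady velocity histories at one cell over one blade-passing
# period, sampled uniformly
t = np.linspace(0.0, 1.0, 400, endpoint=False)
u = 100.0 + 5.0 * np.sin(2 * np.pi * t)         # streamwise velocity, m/s
v = 10.0 + 3.0 * np.sin(2 * np.pi * t + 0.5)    # transverse velocity, m/s

u_bar, v_bar = u.mean(), v.mean()
# Lumped deterministic stress: time average of the fluctuation product,
# which is then added as a source term to the steady equations
lds_uv = np.mean((u - u_bar) * (v - v_bar))

# Analytically, <a sin(wt) * b sin(wt + phi)> = (a*b/2) cos(phi)
expected = 0.5 * 5.0 * 3.0 * np.cos(0.5)
```

Because the fluctuations are periodic and deterministic (blade passing), this average converges over a single period; driving a steady solver with such terms reproduces the time mean of the unsteady flow at a small fraction of the unsteady computation's cost, which is the trade the abstract quantifies.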
Virtual Plant Tissue: Building Blocks for Next-Generation Plant Growth Simulation
De Vos, Dirk; Dzhurakhalov, Abdiravuf; Stijven, Sean; Klosiewicz, Przemyslaw; Beemster, Gerrit T. S.; Broeckhove, Jan
2017-01-01
Motivation: Computational modeling of plant developmental processes is becoming increasingly important. Cellular-resolution plant tissue simulators have been developed, yet they typically describe physiological processes in an isolated way, strongly delimited in space and time. Results: With plant systems biology moving toward an integrative perspective on development, we have built the Virtual Plant Tissue (VPTissue) package to couple functional modules or models within the same framework and across different frameworks. Multiple levels of model integration and coordination enable combining existing and new models from different sources, with diverse options in terms of input/output. Besides the core simulator, the toolset also comprises a tissue editor for manipulating tissue geometry and cell, wall, and node attributes in an interactive manner. A parameter exploration tool is available to study the parameter dependence of simulation results by distributing calculations over multiple systems. Availability: Virtual Plant Tissue is available as open source (EUPL license) on Bitbucket (https://bitbucket.org/vptissue/vptissue). The project has a website at https://vptissue.bitbucket.io. PMID:28523006
Effects of Drift-Shell Splitting by Chorus Waves on Radiation Belt Electrons
NASA Astrophysics Data System (ADS)
Chan, A. A.; Zheng, L.; O'Brien, T. P., III; Tu, W.; Cunningham, G.; Elkington, S. R.; Albert, J.
2015-12-01
Drift shell splitting in the radiation belts breaks all three adiabatic invariants of charged particle motion via pitch angle scattering, and produces new diffusion terms that fully populate the diffusion tensor in the Fokker-Planck equation. Based on the stochastic differential equation method, the Radbelt Electron Model (REM) simulation code allows us to solve such a fully three-dimensional Fokker-Planck equation, and to elucidate the sources and transport mechanisms behind the phase space density variations. REM has been used to perform simulations with an empirical initial phase space density followed by a seed electron injection, with a Tsyganenko 1989 magnetic field model, and with chorus wave and ULF wave diffusion models. Our simulation results show that adding drift shell splitting changes the phase space location of the source to smaller L shells, which typically reduces local electron energization (compared to neglecting drift-shell splitting effects). Simulation results with and without drift-shell splitting effects are compared with Van Allen Probe measurements.
The 2016 Al-Mishraq sulphur plant fire: Source and health risk area estimation
NASA Astrophysics Data System (ADS)
Björnham, Oscar; Grahn, Håkan; von Schoenberg, Pontus; Liljedahl, Birgitta; Waleij, Annica; Brännström, Niklas
2017-11-01
On October 20, 2016, Daesh (Islamic State) set fire to the Al-Mishraq sulphur production site as the battle of Mosul in northern Iraq intensified. An extensive plume of toxic sulphur dioxide and hydrogen sulphide caused numerous casualties. The intensity of the SO2 release reached levels comparable to a minor volcanic eruption, and the plume was observed by several satellites. By analyzing measurement data from instruments on the MetOp-A, MetOp-B, Aura, and Suomi satellites, we estimated the time-dependent source term at 161 kilotonnes of sulphur dioxide released into the atmosphere over seven days. A long-range dispersion model was used to simulate the atmospheric transport over the Middle East. The ground-level concentrations predicted by the simulation were compared with observations from the Turkish National Air Quality Monitoring Network. Finally, a probit analysis of the simulated data provided an estimate of the health risk area, which was compared to reported urgent medical treatments.
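Probit-based risk estimation of the kind referenced above maps a toxic load C^n·t to an affected population fraction through the standard normal distribution. A generic sketch follows; the constants a, b, n and the exposure values are placeholders for illustration, not the vetted SO2 values used in the study:

```python
import math

def affected_fraction(C_mgm3, t_min, a=-19.2, b=1.0, n=2.4):
    """Toxic-load probit model: Pr = a + b*ln(C^n * t); the affected fraction
    is the standard normal CDF evaluated at (Pr - 5).
    The constants are illustrative placeholders, not vetted SO2 values."""
    pr = a + b * math.log((C_mgm3 ** n) * t_min)
    return 0.5 * (1.0 + math.erf((pr - 5.0) / math.sqrt(2.0)))

# Longer exposure at the same concentration raises the affected fraction
f30 = affected_fraction(6000.0, 30.0)   # hypothetical 30 min exposure
f60 = affected_fraction(6000.0, 60.0)   # hypothetical 60 min exposure
```

Contours of constant affected fraction over the simulated concentration-time fields are what delineate a health risk area of the kind estimated in the paper.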
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, L.; Cluggish, B.; Kim, J. S.
2010-02-15
A Monte Carlo charge breeding code (MCBC) is being developed by FAR-TECH, Inc. to model the capture and charge breeding of a 1+ ion beam in an electron cyclotron resonance ion source (ECRIS) device. The ECRIS plasma is simulated using the generalized ECRIS model, which has two choices of boundary settings: a free boundary condition and the Bohm condition. The charge state distribution of the extracted beam ions is calculated by solving the steady-state ion continuity equations, where the profiles of the captured ions are used as source terms. MCBC simulations of the charge breeding of Rb+ showed good agreement with recent charge breeding experiments at Argonne National Laboratory (ANL). MCBC correctly predicted the peak of the highly charged ion state outputs under the free boundary condition, and a similar charge state distribution width but a lower peak charge state under the Bohm condition. Comparisons between the simulation results and the ANL experimental measurements are presented and discussed.
High order finite volume WENO schemes for the Euler equations under gravitational fields
NASA Astrophysics Data System (ADS)
Li, Gang; Xing, Yulong
2016-07-01
Euler equations with gravitational source terms are used to model many astrophysical and atmospheric phenomena. This system admits hydrostatic balance, in which the flux produced by the pressure is exactly canceled by the gravitational source term; two commonly seen equilibria are the isothermal and polytropic hydrostatic solutions. Exact preservation of these equilibria is desirable, as many practical problems are small perturbations of such a balance. High order finite difference weighted essentially non-oscillatory (WENO) schemes have been proposed in [22], but only for the isothermal equilibrium state. In this paper, we design high order well-balanced finite volume WENO schemes which can preserve not only the isothermal equilibrium but also the polytropic hydrostatic balance state exactly, while maintaining genuine high order accuracy for general solutions. The well-balanced property is obtained by a novel source term reformulation and discretization, combined with well-balanced numerical fluxes. Extensive one- and two-dimensional simulations are performed to verify the well-balanced property and high order accuracy, as well as good resolution for smooth and discontinuous solutions.
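The need for well-balancing can be seen in a few lines: take the isothermal hydrostatic state and check the discrete balance between the pressure gradient and the gravity source with a naive central difference. It holds only to truncation error, not exactly (the parameter values below are illustrative):

```python
import numpy as np

# Isothermal hydrostatic equilibrium of the 1-D Euler equations with gravity:
#   rho(x) = rho0 * exp(-g x / (R T)),  p = rho R T,  u = 0,
# for which p_x + rho g = 0 holds exactly in the continuum.
g, R, T, rho0 = 9.8, 287.0, 300.0, 1.2
x = np.linspace(0.0, 1000.0, 2001)
rho = rho0 * np.exp(-g * x / (R * T))
p = rho * R * T

# Naive central-difference discretization of the balance at interior points
dpdx = (p[2:] - p[:-2]) / (x[2:] - x[:-2])
imbalance = dpdx + rho[1:-1] * g   # truncation-level residual, not zero
```

A non-well-balanced scheme feeds this residual back into the momentum equation as a spurious acceleration that can swamp small perturbations of the equilibrium; the source-term reformulation in the paper is constructed so that the discrete flux and source cancel identically for both isothermal and polytropic states.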
Ancient Glass: A Literature Search and its Role in Waste Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strachan, Denis M.; Pierce, Eric M.
2010-07-01
When developing a performance assessment (PA) model for the long-term disposal of immobilized low-activity waste (ILAW) glass, it is desirable to determine the durability of glass forms over very long periods of time. However, testing is limited to short time spans, so experiments are performed under conditions that accelerate the key geochemical processes that control weathering. Verification that the models currently in use can reliably calculate the long-term behavior of ILAW glass is a key component of the overall PA strategy. Therefore, Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to evaluate alternative strategies that can be used for PA source term model validation. One viable alternative strategy is the use of independent experimental data from archaeological studies of ancient or natural glass reported in the literature. These results represent potential independent experiments that date back approximately 3600 years, or to 1600 before the current era (BCE), in the case of ancient glass, and 10⁶ years or older in the case of natural glass. The results of this literature review suggest that additional experimental data may be needed before the results from archaeological studies can be used as a tool for validating models of glass weathering and, more specifically, disposal facility performance. This is largely because none of the existing data sets contains all of the information required to conduct PA source term calculations. For example, in many cases the sediments surrounding the glass were not collected and analyzed; the data required to compare computer simulations of concentration flux are therefore not available. This type of information is important to understanding the element release profile from the glass to the surrounding environment and provides a metric that can be used to calibrate source term models.
Although useful, the available literature sources do not contain the information needed to simulate the long-term performance of nuclear waste glasses in near-surface or deep geologic repositories. The information that will be required includes 1) experimental measurements to quantify the model parameters, 2) detailed analyses of altered glass samples, and 3) detailed analyses of the sediment surrounding the ancient glass samples.
NASA Astrophysics Data System (ADS)
Kempka, T.; Norden, B.; Tillner, E.; Nakaten, B.; Kühn, M.
2012-04-01
Geological modelling and dynamic flow simulations were conducted at the Ketzin pilot site, showing good agreement of the history-matched geological models with CO2 arrival times in both observation wells and with the temporal development of reservoir pressure determined in the injection well. Recently, a re-evaluation of the 3D seismic data enabled a refinement of the structural site model and the implementation of the fault system present at the top of the Ketzin anticline. The updated geological model (model size: 5 km x 5 km) has a horizontal discretization of 5 m x 5 m and consists of three vertical zones, with the finest discretization (0.5 m) at the top. In line with the revised seismic analysis, the facies modelling used to simulate the channel and floodplain facies distribution at Ketzin was updated. The structural model was parameterized using a sequential Gaussian simulator for the distribution of total and effective porosities and an empirical porosity-permeability relationship based on available site and literature data. Based on this revised reservoir model of the Stuttgart formation, numerical simulations using the TOUGH2-MP/ECO2N and Schlumberger Information Services (SIS) ECLIPSE 100 black-oil simulators were undertaken to evaluate the long-term (up to 10,000 years) migration of the injected CO2 (about 57,000 t at the end of 2011) and the development of reservoir pressure over time. The simulation results enabled us to quantitatively compare both reservoir simulators based on current operational data, considering the long-term effects of CO2 storage, including CO2 dissolution in the formation fluid. While the integration of the static geological model developed in the SIS Petrel modelling package into the ECLIPSE simulator is relatively straightforward, a workflow for exporting Petrel models into the TOUGH2-MP input file format had to be implemented within the scope of this study.
The main challenge in this task was the presence of a complex fault system in the revised reservoir model, demanding an integrated concept for handling connections between the elements aligned to faults in the TOUGH2-MP simulator. Furthermore, we developed a methodology to visualize and compare the TOUGH2-MP simulation results with those of the ECLIPSE simulator using the Petrel software package. The long-term simulation results of both simulators are generally in good agreement. The spatial and temporal migration of the CO2 plume as well as the residual gas saturation are almost identical for both simulators, even though a time-dependent approach to CO2 dissolution in the formation fluid was chosen in the ECLIPSE simulator. Our results confirm that a scientific open-source simulator such as the TOUGH2-MP software package is capable of providing the same accuracy as the industry-standard simulator ECLIPSE 100. However, the computational time and the additional effort to implement a suitable workflow for the TOUGH2-MP simulator are significantly higher, while the open-source concept of TOUGH2 provides more flexibility regarding process adaptation.
Economic dispatch optimization for system integrating renewable energy sources
NASA Astrophysics Data System (ADS)
Jihane, Kartite; Mohamed, Cherkaoui
2018-05-01
Nowadays, the use of energy is growing, especially in the transportation and electricity industries. However, this energy is largely based on conventional sources, which pollute the environment. A multi-source system is seen as the best route to sustainable development. This paper proposes the Economic Dispatch (ED) of a hybrid renewable power system. The hybrid system is composed of ten thermal generators, a photovoltaic (PV) generator, and a wind turbine generator. To show the importance of renewable energy sources (RES) in the energy mix, we ran the simulation for the system integrating PV only and PV plus wind. The results show that the system with RES outperforms the system without RES in terms of fuel cost.
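A minimal sketch of the dispatch principle behind such a study: thermal units with quadratic fuel costs are dispatched at equal incremental cost (lambda iteration), and RES output enters as a zero-marginal-cost reduction of the load. The three-unit coefficients below are invented for illustration; the paper's system has ten thermal units.

```python
import numpy as np

# Quadratic fuel costs C_i(P) = a_i + b_i*P + c_i*P^2 (illustrative).
a = np.array([100.0, 120.0, 90.0])
b = np.array([2.0, 2.2, 2.5])
c = np.array([0.01, 0.012, 0.015])
pmin, pmax = 10.0, 300.0

def dispatch(load):
    """Equal-incremental-cost dispatch via bisection on lambda."""
    lo, hi = 0.0, 50.0
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        P = np.clip((lam - b) / (2.0 * c), pmin, pmax)
        if P.sum() > load:
            hi = lam
        else:
            lo = lam
    return P, float(np.sum(a + b * P + c * P**2))

_, cost_no_res = dispatch(600.0)        # thermal units alone
_, cost_res = dispatch(600.0 - 150.0)   # 150 MW of PV+wind offsets the load
print(cost_no_res, cost_res)            # fuel cost falls once RES is added
```

Treating RES as negative load is the simplest formulation; curtailment limits and reserve requirements would complicate a real ED problem.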
Sensitivity of WRF-chem predictions to dust source function specification in West Asia
NASA Astrophysics Data System (ADS)
Nabavi, Seyed Omid; Haimberger, Leopold; Samimi, Cyrus
2017-02-01
Dust storms tend to form in sparsely populated areas covered by only a few observations. Dust source maps, known as source functions, are used in dust models to allocate a certain potential of dust release to each place. Recent research showed that the well-known Ginoux source function (GSF), currently used in the Weather Research and Forecasting model coupled with Chemistry (WRF-chem), exhibits large errors over some regions in West Asia, particularly near the Iraq/Syria border. This study aims to improve the specification of this critical part of dust forecasts. A new source function based on a multi-year analysis of satellite observations, called the West Asia source function (WASF), is therefore proposed to raise the quality of WRF-chem predictions in the region. WASF has been implemented in three dust schemes of WRF-chem. Remotely sensed and ground-based observations have been used to verify the horizontal and vertical extent and location of the simulated dust clouds. Results indicate that WRF-chem performance is significantly improved in many areas after the implementation of WASF. The modified runs (long-term simulations over the summers of 2008-2012, using nudging) yielded an average increase in the Spearman correlation between observed and forecast aerosol optical thickness of 12-16 percentage points compared to control runs with standard source functions. They even outperform MACC and DREAM dust simulations over many dust source regions. However, the quality of the forecasts decreased with distance from the sources, probably due to deficiencies in the transport and deposition characteristics of the forecast model in these areas.
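The verification metric quoted above, the Spearman rank correlation between observed and forecast aerosol optical thickness (AOT), can be sketched as follows. The data here are synthetic stand-ins, not the study's observations, and ties are not handled.

```python
import numpy as np

# Spearman correlation = Pearson correlation of the ranks.
def spearman(x, y):
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks (no ties assumed)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx**2) * np.sum(ry**2)))

obs = np.array([0.2, 0.5, 0.9, 0.4, 1.3, 0.7])        # "observed" AOT
fc_ctrl = np.array([0.3, 0.4, 0.6, 0.8, 1.0, 0.5])    # control-run forecast
fc_mod = np.array([0.25, 0.5, 0.8, 0.45, 1.2, 0.65])  # modified-run forecast
print(spearman(obs, fc_ctrl), spearman(obs, fc_mod))  # modified run ranks higher
```

Because it compares ranks, the metric rewards getting dusty days ordered correctly rather than matching AOT magnitudes exactly.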
NASA Astrophysics Data System (ADS)
Lachinova, Svetlana L.; Vorontsov, Mikhail A.; Filimonov, Grigory A.; LeMaster, Daniel A.; Trippel, Matthew E.
2017-07-01
Computational efficiency and accuracy of wave-optics-based Monte Carlo and brightness function numerical simulation techniques for incoherent imaging of extended objects through atmospheric turbulence are evaluated. Simulation results are compared with theoretical estimates based on known analytical solutions for the modulation transfer function of an imaging system and the long-exposure image of a Gaussian-shaped incoherent light source. It is shown that the accuracy of both techniques is comparable over a wide range of path lengths and atmospheric turbulence conditions, whereas the brightness function technique is advantageous in terms of computational speed.
NASA Astrophysics Data System (ADS)
Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.
2016-12-01
The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty, and water scarcity requires a strategic combination of supply sources for reliability, reduced costs, and improved operational flexibility. The design and operation of such a portfolio of water supply sources involves integrating long- and short-term planning to determine what and when to expand, and how much to use of each supply source, accounting for interest rates, economies of scale, and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long term) and operations (short term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal of the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries, where water supply system losses are high and often neglected in favor of more system expansion.
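The multiplier-as-signal idea can be reduced to a toy: short-term dispatch meets demand from a capacity-limited cheap source plus an expensive backstop, and the capacity constraint's shadow price tells the long-term model whether another unit of capacity is worth its annualized cost. All prices below are invented for illustration; this is not the paper's DP/QP formulation.

```python
# Unit operating costs and annualized cost of one unit of new capacity
# (all illustrative assumptions).
cheap, expensive, annualized_expansion = 1.0, 4.0, 2.0

def op_cost(K, d):
    """Short-term cost: cheap source up to capacity K, backstop covers the rest."""
    return cheap * min(d, K) + expensive * max(d - K, 0.0)

def shadow(K, d, eps=1e-6):
    """Marginal operating saving per unit of extra capacity (finite difference)."""
    return (op_cost(K, d) - op_cost(K + eps, d)) / eps

K, d = 50.0, 80.0
# Long-term rule: expand while the short-term shadow price exceeds the
# annualized expansion cost.
while shadow(K, d) > annualized_expansion:
    K += 1.0
print(K)   # expansion stops once the cheap source covers demand
```

In the toy the shadow price is (expensive - cheap) while the constraint binds and zero afterward, so expansion proceeds exactly until K reaches demand.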
Seismic Waves, 4th order accurate
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-08-16
SW4 is a program for simulating seismic wave propagation on parallel computers. SW4 solves the seismic wave equations in Cartesian coordinates. It is therefore appropriate for regional simulations, where the curvature of the earth can be neglected. SW4 implements a free surface boundary condition on realistic topography, absorbing super-grid conditions on the far-field boundaries, and a kinematic source model consisting of point force and/or point moment tensor source terms. SW4 supports a fully 3-D heterogeneous material model that can be specified in several formats. SW4 can output synthetic seismograms in an ASCII text format or in the SAC binary format. It can also present simulation information as GMT scripts, which can be used to create annotated maps. Furthermore, SW4 can output the solution, as well as the material model, along 2-D grid planes.
NASA Astrophysics Data System (ADS)
Lee, Shiann-Jong; Liu, Qinya; Tromp, Jeroen; Komatitsch, Dimitri; Liang, Wen-Tzong; Huang, Bor-Shouh
2014-06-01
We developed a Real-time Online earthquake Simulation system (ROS) to simulate regional earthquakes in Taiwan. The ROS uses a centroid moment tensor solution of seismic events from a Real-time Moment Tensor monitoring system (RMT), which provides all the point source parameters, including the event origin time, hypocentral location, moment magnitude, and focal mechanism, within 2 min of the occurrence of an earthquake. All of the source parameters are then automatically forwarded to the ROS to perform an earthquake simulation based on a spectral-element method (SEM). A new island-wide, high-resolution SEM mesh model is developed for the whole of Taiwan in this study. We have improved the SEM mesh quality by introducing a thin high-resolution mesh layer near the surface to accommodate steep and rapidly varying topography. The mesh for the shallow sedimentary basin is adjusted to reflect its complex geometry and sharp lateral velocity contrasts. The grid resolution at the surface is about 545 m, which is sufficient to resolve topography and tomography data for simulations accurate up to 1.0 Hz. The ROS is also an infrastructural service, making online earthquake simulation feasible. Users can conduct their own earthquake simulations by providing a set of source parameters through the ROS webpage. For visualization, a ShakeMovie and a ShakeMap are produced during the simulation. The time needed for one event is roughly 3 min for a 70 s ground motion simulation. The ROS is operated online at the Institute of Earth Sciences, Academia Sinica (http://ros.earth.sinica.edu.tw/). Our long-term goal for the ROS system is to contribute to public earth science outreach and to realize seismic ground motion prediction in real time.
2016-04-01
phosphate use by these recombinant strains was evaluated because carbon use by these strains is still undergoing optimization by LBNL. The E. coli ...plasmids, had successful growth when transformed into a different E. coli background, which correlated with IMPA degradation. Ultimately, the...transformed E. coli strains, optimized at ECBC, were able to grow using IMPA as the phosphate source. 15. SUBJECT TERMS Acetylcholinesterase (AChE
Validation of Operational Multiscale Environment Model With Grid Adaptivity (OMEGA).
1995-12-01
Center for the period of the Chernobyl Nuclear Accident. The physics of the model is tested using National Weather Service Medium Range Forecast data by...Climatology Center for the first three days following the release at the Chernobyl Nuclear Plant. A user-defined source term was developed to simulate
Current switching ratio optimization using dual pocket doping engineering
NASA Astrophysics Data System (ADS)
Dash, Sidhartha; Sahoo, Girija Shankar; Mishra, Guru Prasad
2018-01-01
This paper presents an approach to maximize the current switching ratio of a cylindrical gate tunnel FET (CGT) by growing pocket layers in both the source and channel regions. The pocket layers positioned in the source and channel of the device provide significant improvements in the ON-state and OFF-state currents, respectively. The dual pocket doped cylindrical gate TFET (DP-CGT) exhibits much superior performance in terms of drain current, transconductance, and current ratio compared to the conventional CGT, the channel pocket doped CGT (CP-CGT), and the source pocket doped CGT (SP-CGT). Further, the current ratio has been optimized with respect to the width and position of both pocket layers. The much improved current ratio and low power consumption make the proposed device suitable for low-power and high-speed applications. The simulation work on the DP-CGT is done using the 3D Sentaurus TCAD device simulator from Synopsys.
2016-07-21
constants. The model (2.42) is popular for simulation of the UAV motion [60], [61], [62] due to the fact that it models the aircraft response to...inputs to the dynamic model (2.42). The concentration sensors onboard the UAV record concentration (simulated) data according to its spatial location...vehicle dynamics and guidance, and the onboard sensor modeling. 15. SUBJECT TERMS State estimation; UAVs, mobile sensors; grid adaptation; plume
Analysis of drift correction in different simulated weighing schemes
NASA Astrophysics Data System (ADS)
Beatrici, A.; Rebelo, A.; Quintão, D.; Cacais, F. L.; Loayza, V. M.
2015-10-01
In the calibration of high-accuracy mass standards, weighing schemes are used to reduce or eliminate zero-drift effects in mass comparators. There are different sources of drift and different methods for its treatment. Using numerical methods, drift functions were simulated and a random term was included in each function. A comparison between the results obtained from ABABAB and ABBA weighing series was carried out. The results show better efficacy of the ABABAB method for drifts with smooth variation and small randomness.
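The drift-cancellation property being compared can be seen in a toy calculation: for a purely linear zero drift, the classic ABBA estimate recovers the mass difference exactly. This is a textbook illustration with invented numbers, not the paper's simulated schemes.

```python
import numpy as np

# Comparator readings in slots 0..3 for the sequence A, B, B, A, with a
# true difference delta = m_A - m_B and a linear zero drift d per slot.
delta, d = 0.37, 0.05                        # mg (illustrative)
true = np.array([delta, 0.0, 0.0, delta])    # A, B, B, A signal
readings = true + d * np.arange(4)           # drift grows linearly in time
est = (readings[0] - readings[1] - readings[2] + readings[3]) / 2
print(est)                                   # equals delta: linear drift cancels
```

For drift that is nonlinear or noisy the cancellation is only approximate, which is why the paper compares schemes statistically.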
NASA Astrophysics Data System (ADS)
Lawrie, S. R.; Faircloth, D. C.; Smith, J. D.; Sarmento, T. M.; Whitehead, M. O.; Wood, T.; Perkins, M.; Macgregor, J.; Abel, R.
2018-05-01
A vessel for extraction and source plasma analyses is being used for Penning H- ion source development at the Rutherford Appleton Laboratory. A new set of optical elements including an einzel lens has been installed, which transports over 80 mA of H- beam successfully. Simultaneously, a 2X scaled Penning source has been developed to reduce cathode power density. The 2X source is now delivering a 65 mA H- ion beam at 10% duty factor, meeting its design criteria. The long-term viability of the einzel lens and 2X source is now being evaluated, so new diagnostic devices have been installed. A pair of electrostatic deflector plates is used to correct beam misalignment and perform fast chopping, with a voltage rise time of 24 ns. A suite of four quartz crystal microbalances has shown that the cesium flux in the vacuum vessel is only increased by a factor of two, despite the absence of a dedicated cold trap. Finally, an infrared camera has demonstrated good agreement with thermal simulations but has indicated unexpected heating due to beam loss on the downstream electrode. These types of diagnostics are suitable for monitoring all operational ion sources. In addition to experimental campaigns and new diagnostic tools, the high-performance VSim and COMSOL software packages are being used for plasma simulations of two novel ion thrusters for space propulsion applications. In parallel, a VSim framework has been established to include arbitrary temperature and cesium fields to allow the modeling of surface physics in H- ion sources.
NASA Astrophysics Data System (ADS)
Lin, Wei-Chih; Lin, Yu-Pin; Anthony, Johnathen
2015-04-01
Heavy metal pollution has adverse effects not only on the focal invertebrate species of this study, such as reduced pupa weight and increased larval mortality, but also on the higher trophic level organisms which feed on them, either directly or indirectly, through the process of biomagnification. Despite this, few studies regarding remediation prioritization take species distribution or biological conservation priorities into consideration. This study develops a novel approach for delineating sites which are both contaminated by any of 5 readily bioaccumulated heavy metal soil contaminants and of high ecological importance for the highly mobile, low trophic level focal species. The conservation priority of each site was based on the projected distributions of 6 moth species simulated via the presence-only maximum entropy species distribution model, followed by the application of a systematic conservation tool. In order to increase the number of available samples, we also integrated crowd-sourced data with professionally collected data via a novel optimization procedure based on a simulated annealing algorithm. This integration procedure is important because, while crowd-sourced data can drastically increase the number of data samples available to ecologists, the quality or reliability of crowd-sourced data can be called into question, adding yet another source of uncertainty in projecting species distributions. The optimization method screens crowd-sourced data in terms of the environmental variables which correspond to professionally collected data. The sample distribution data were derived from two different sources: the EnjoyMoths project in Taiwan (crowd-sourced data) and the Global Biodiversity Information Facility (GBIF) field data (professional data). The distributions of heavy metal concentrations were generated via 1000 iterations of a geostatistical co-simulation approach.
The uncertainties in the distributions of the heavy metals were then quantified based on the overall consistency between realizations. Finally, Information-Gap Decision Theory (IGDT) was applied to rank the remediation priorities of contaminated sites in terms of both the spatial consensus of multiple heavy metal realizations and the priority of specific conservation areas. Our results show that the crowd-sourced optimization algorithm developed in this study is effective at selecting suitable data from crowd-sourced data. Using this technique, the available sample data increased to totals of 96, 162, 72, 62, 69 and 62, that is, 2.6, 1.6, 2.5, 1.6, 1.2 and 1.8 times the numbers originally available through the GBIF professionally assembled database. Additionally, for all species considered, the performance of models based on the combination of both data sources, in terms of test-AUC values, exceeded that of models based on a single data source. Furthermore, the additional optimization-selected data lowered the overall variability, and therefore uncertainty, of the model outputs. Based on the projected species distributions, our results revealed that around 30% of high species hotspot areas were also identified as contaminated. The decision-making tool, IGDT, successfully yielded remediation plans in terms of specific ecological value requirements, false positive tolerance rates for contaminated areas, and expected decision robustness. The proposed approach can be applied both to identify high conservation priority sites contaminated by heavy metals, based on the combination of screened crowd-sourced and professionally collected data, and to make robust remediation decisions.
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
Development of axisymmetric lattice Boltzmann flux solver for complex multiphase flows
NASA Astrophysics Data System (ADS)
Wang, Yan; Shu, Chang; Yang, Li-Ming; Yuan, Hai-Zhuan
2018-05-01
This paper presents an axisymmetric lattice Boltzmann flux solver (LBFS) for simulating axisymmetric multiphase flows. In the solver, the two-dimensional (2D) multiphase LBFS is applied to reconstruct macroscopic fluxes excluding axisymmetric effects. Source terms accounting for axisymmetric effects are introduced directly into the governing equations. As compared to conventional axisymmetric multiphase lattice Boltzmann (LB) method, the present solver has the kinetic feature for flux evaluation and avoids complex derivations of external forcing terms. In addition, the present solver also saves considerable computational efforts in comparison with three-dimensional (3D) computations. The capability of the proposed solver in simulating complex multiphase flows is demonstrated by studying single bubble rising in a circular tube. The obtained results compare well with the published data.
Numerical models analysis of energy conversion process in air-breathing laser propulsion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Yanji; Song Junling; Cui Cunyan
In this paper, the energy source term is treated as the key element in describing the energy conversion process in air-breathing laser propulsion. Some secondary factors were ignored when three independent modules, a ray transmission module, an energy source term module, and a fluid dynamic module, were established by coupling the laser radiation transport equation with the fluid mechanics equations. The incident laser beam was simulated using a ray tracing method. The calculated results were in good agreement with those of theoretical analysis and experiments.
Noise-enhanced CVQKD with untrusted source
NASA Astrophysics Data System (ADS)
Wang, Xiaoqun; Huang, Chunhui
2017-06-01
The performance of one-way and two-way continuous variable quantum key distribution (CVQKD) protocols can be increased by adding some noise on the reconciliation side. In this paper, we propose to add noise at the reconciliation end to improve the performance of CVQKD with untrusted source. We derive the key rate of this case and analyze the impact of the additive noise. The simulation results show that the optimal additive noise can improve the performance of the system in terms of maximum transmission distance and tolerable excess noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Yuna; Park, Yeong-Shin; Jo, Jong-Gab
2012-02-15
Microwave plasma ion source with rectangular cavity resonator has been examined to improve ion beam current by changing wave launcher type from single-port to double-port. The cavity resonators with double-port and single-port wave launchers are designed to get resonance effect at TE-103 mode and TE-102 mode, respectively. In order to confirm that the cavities are acting as resonator, the microwave power for breakdown is measured and compared with the E-field strength estimated from the HFSS (High Frequency Structure Simulator) simulation. Langmuir probe measurements show that double-port cavity enhances central density of plasma ion source by modifying non-uniform plasma density profile of the single-port cavity. Correspondingly, beam current from the plasma ion source utilizing the double-port resonator is measured to be higher than that utilizing single-port resonator. Moreover, the enhancement in plasma density and ion beam current utilizing the double-port resonator is more pronounced as higher microwave power applied to the plasma ion source. Therefore, the rectangular cavity resonator utilizing the double-port is expected to enhance the performance of plasma ion source in terms of ion beam extraction.
Numerical simulations of the Cosmic Battery in accretion flows around astrophysical black holes
NASA Astrophysics Data System (ADS)
Contopoulos, I.; Nathanail, A.; Sądowski, A.; Kazanas, D.; Narayan, R.
2018-01-01
We implement the KORAL code to perform two sets of very long general relativistic radiation magnetohydrodynamic simulations of an axisymmetric, optically thin, magnetized flow around a non-rotating black hole: one with a new term in the electromagnetic field tensor due to the radiation pressure felt by the plasma electrons in the comoving frame of the electron-proton plasma, and one without. The source of the radiation is the accretion flow itself. Without the new term, the system evolves to a standard accretion flow due to the development of the magneto-rotational instability. With the new term, however, the system eventually evolves to a magnetically arrested disc state in which a large-scale jet-like magnetic field threads the black hole horizon. Our results confirm the secular action of the Cosmic Battery in accretion flows around astrophysical black holes.
SU-E-T-507: Internal Dosimetry in Nuclear Medicine Using GATE and XCAT Phantom: A Simulation Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fallahpoor, M; Abbasi, M; Sen, A
Purpose: Monte Carlo simulations are routinely used for internal dosimetry studies. These studies are conducted with humanoid phantoms such as the XCAT phantom. In this abstract we present the absorbed doses for various pairs of source and target organs using three common radiotracers in nuclear medicine. Methods: The GATE software package is used for the Monte Carlo simulations. A typical female XCAT phantom is used as the input. Three radiotracers, 153Sm, 131I and 99mTc, are studied. The Specific Absorbed Fraction (SAF) for gamma rays (99mTc, 153Sm and 131I) and the Specific Fraction (SF) for beta particles (153Sm and 131I) are calculated for all 100 pairs of source and target organs, including brain, liver, lung, pancreas, kidney, adrenal, spleen, rib bone, bladder and ovaries. Results: The source organs themselves receive the highest absorbed dose compared to other organs. The dose is found to be inversely proportional to the distance from the source organ. In the SAF results for 153Sm with the lung as the source organ, the rib bone receives 0.0730 kg-1, more than the lung itself. Conclusion: The absorbed dose for various organs was studied in terms of SAF and SF. Such studies are important for future therapeutic procedures and the optimization of the radiotracer used.
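The SAF quantity itself, the fraction of emitted energy absorbed in a target divided by the target mass, can be illustrated with a deliberately crude Monte Carlo. The single-interaction transport, geometry, and attenuation length below are assumptions for illustration only and bear no relation to the GATE/XCAT setup.

```python
import numpy as np

# Photons leave a point source isotropically and are absorbed at an
# exponentially distributed radius (one-interaction toy transport).
# The SAF of a spherical-shell "target organ" is the absorbed fraction
# in the shell divided by the shell mass.
rng = np.random.default_rng(0)
n, mu = 200_000, 1.0                     # photons, attenuation coeff (1/cm)
r = rng.exponential(1.0 / mu, n)         # absorption radius (cm)

r1, r2, rho_t = 2.0, 3.0, 1.0            # shell radii (cm), density (g/cm3)
mass = rho_t * 4.0 / 3.0 * np.pi * (r2**3 - r1**3)   # grams
frac = np.mean((r >= r1) & (r < r2))     # absorbed fraction in the shell
saf = frac / mass                        # specific absorbed fraction (1/g)
print(frac, saf)
```

The analytic absorbed fraction here is exp(-mu*r1) - exp(-mu*r2), so the Monte Carlo estimate can be checked directly; real dosimetry codes track scattering and full anatomy instead.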
Numerical model of a tracer test on the Santa Clara River, Ventura County, California
Nishikawa, Tracy; Paybins, Katherine S.; Izbicki, John A.; Reichard, Eric G.
1999-01-01
To better understand the flow processes, solute-transport processes, and ground-water/surface-water interactions on the Santa Clara River in Ventura County, California, a 24-hour fluorescent-dye tracer study was performed under steady-state flow conditions on a 45-km reach of the river. The study reach includes perennial (uppermost and lowermost) subreaches and ephemeral subreaches of the lower Piru Creek and the middle Santa Clara River. The tracer-test data were used to calibrate a one-dimensional flow model (DAFLOW) and a solute-transport model (BLTM). The dye-arrival times at each sample location were simulated by calibrating the velocity parameters in DAFLOW. The simulations of dye transport indicated that (1) ground-water recharge explains the loss of mass in the ephemeral middle subreaches, and (2) ground-water recharge does not explain the loss of mass in the perennial uppermost and lowermost subreaches. The observed tracer curves in the perennial subreaches were indicative of sorptive dye losses, transient storage, and (or) photodecay; these phenomena were simulated using a linear decay term. However, analysis of the linear decay terms indicated that photodecay was not a dominant source of dye loss.
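A short sketch of the first-order (linear) decay term of the kind used to represent combined sorptive loss, transient storage, and photodecay in such calibrations: a rate k is implied by the mass lost over one subreach, and mass then declines as exp(-k t). All numbers are illustrative, not from the study.

```python
import math

def decay_rate(mass_in, mass_out, travel_time_h):
    """First-order rate k (1/h) implied by mass loss over one subreach."""
    return math.log(mass_in / mass_out) / travel_time_h

def mass_remaining(mass_in, k, t_h):
    """Mass left after travel time t_h under first-order decay."""
    return mass_in * math.exp(-k * t_h)

k = decay_rate(mass_in=100.0, mass_out=80.0, travel_time_h=6.0)
print(f"k = {k:.4f} per hour")
print(f"after 12 h: {mass_remaining(100.0, k, 12.0):.1f} (mass units)")  # -> 64.0
```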
DBCC Software as Database for Collisional Cross-Sections
NASA Astrophysics Data System (ADS)
Moroz, Daniel; Moroz, Paul
2014-10-01
Interactions of species, such as atoms, radicals, molecules, electrons, and photons, in plasmas used for materials processing can be very complex, and many of them can be described in terms of collisional cross-sections. Researchers involved in plasma simulations must select reasonable cross-sections for collisional processes to implement in their simulation codes in order to correctly simulate plasmas. However, collisional cross-section data are difficult to obtain, and, for some collisional processes, the cross-sections are still not known. Data on collisional cross-sections can be obtained from numerous sources, including numerical calculations, experiments, journal articles, conference proceedings, scientific reports, various universities' websites, national labs, and centers specifically devoted to collecting cross-section data. The cross-section data received from different sources can be partial, corresponding to limited energy ranges, or may even disagree. The DBCC software package was designed to help researchers collect, compare, and select cross-sections, some of which can be constructed from others or chosen as defaults. This is important as different researchers may place trust in different cross-sections or in different sources. We will discuss the details of DBCC and demonstrate how it works and why it is beneficial to researchers working on plasma simulations.
Modeling the contribution of point sources and non-point sources to Thachin River water pollution.
Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth
2009-08-15
Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
A simulated approach to estimating PM10 and PM2.5 concentrations downwind from cotton gins
USDA-ARS?s Scientific Manuscript database
Cotton gins are required to obtain operating permits from state air pollution regulatory agencies (SAPRA), which regulate the amount of particulate matter that can be emitted. Industrial Source Complex Short Term version 3 (ISCST3) is the Gaussian dispersion model currently used by some SAPRAs to pr...
Antineutrino analysis for continuous monitoring of nuclear reactors: Sensitivity study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, Christopher; Erickson, Anna
This paper explores the various contributors to uncertainty in predictions of the antineutrino source term, which is used for reactor antineutrino experiments and is proposed as a safeguard mechanism for future reactor installations. The errors introduced during simulation of the reactor burnup cycle by variation in nuclear reaction cross sections, operating power, and other factors are combined with those from experimental and predicted antineutrino yields from fissions, then evaluated and compared. The most significant contributor to uncertainty in the reactor antineutrino source term, when the reactor was modeled in 3D fidelity with assembly-level heterogeneity, was found to be the uncertainty in the antineutrino yields. Using the reactor simulation uncertainty data, the dedicated observation of a rigorously modeled small, fast reactor by a few-ton near-field detector was estimated to offer reduction of uncertainty in antineutrino yields in the 3.0-6.5 MeV range to a few percent for the primary power-producing fuel isotopes, even with zero prior knowledge of the yields.
NASA Astrophysics Data System (ADS)
Hoffmann, T. L.; Lieb, S.; Pauldrach, A. W. A.; Lesch, H.; Hultzsch, P. J. N.; Birk, G. T.
2012-08-01
Aims: The aim of this work is to verify whether turbulent magnetic reconnection can provide the additional energy input required to explain the up to now only poorly understood ionization mechanism of the diffuse ionized gas (DIG) in galaxies and its observed emission line spectra. Methods: We use a detailed non-LTE radiative transfer code that does not make use of the usual restrictive gaseous nebula approximations to compute synthetic spectra for gas at low densities. Excitation of the gas is via an additional heating term in the energy balance as well as by photoionization. Numerical values for this heating term are derived from three-dimensional resistive magnetohydrodynamic two-fluid plasma-neutral-gas simulations to compute energy dissipation rates for the DIG under typical conditions. Results: Our simulations show that magnetic reconnection can liberate enough energy to fully or partially ionize the gas by itself. However, synthetic spectra from purely thermally excited gas are incompatible with the observed spectra; a photoionization source must additionally be present to establish the correct (observed) ionization balance in the gas.
Power-output regularization in global sound equalization.
Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn
2008-01-01
The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
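A hedged sketch of the multiple point method with a quadratic penalty term: the source strengths solve a Tikhonov-type problem, where penalty matrix W = I corresponds to the traditional source-effort penalty and a (Hermitian) power-related matrix stands in for the power-output penalty. All matrices below are random placeholders for measured room transfer functions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
M, S = 8, 4                                                          # control points, sources
Z = rng.standard_normal((M, S)) + 1j * rng.standard_normal((M, S))   # plant (transfer matrix)
d = rng.standard_normal(M) + 1j * rng.standard_normal(M)             # desired field at control points
beta = 0.1                                                           # penalty weight

def regularized_sources(Z, d, W, beta):
    """q = (Z^H Z + beta W)^{-1} Z^H d  (penalized least-squares solution)."""
    A = Z.conj().T @ Z + beta * W
    return np.linalg.solve(A, Z.conj().T @ d)

q_effort = regularized_sources(Z, d, np.eye(S), beta)   # traditional source-effort penalty
R = 0.5 * np.eye(S)                                     # stand-in power-output matrix
q_power = regularized_sources(Z, d, R, beta)            # power-output penalty
print(np.abs(q_effort).round(3), np.abs(q_power).round(3))
```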
Source term evaluation for combustion modeling
NASA Technical Reports Server (NTRS)
Sussman, Myles A.
1993-01-01
A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.
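One common remedy for source terms that drive exponential species growth, and an illustration of why a modification is needed at all, is to advance the density with a locally exact exponential rather than an explicit finite-difference step. This is an illustrative scalar sketch, not necessarily the exact modification developed in the paper.

```python
import math

def euler_step(y, lam, dt):
    """Explicit finite-difference update for dy/dt = lam*y."""
    return y + dt * lam * y

def exponential_step(y, lam, dt):
    """Locally exact update, exact for a linear (chain-branching-like) source."""
    return y * math.exp(lam * dt)

lam, dt, steps = 50.0, 0.01, 10      # stiff growth rate, coarse time step
y_euler = y_exp = 1.0
for _ in range(steps):
    y_euler = euler_step(y_euler, lam, dt)
    y_exp = exponential_step(y_exp, lam, dt)

exact = math.exp(lam * dt * steps)
# The explicit step badly underpredicts the exponential growth on a coarse grid.
print(f"exact {exact:.1f}  euler {y_euler:.1f}  exponential {y_exp:.1f}")
```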
Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2016-04-01
A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Beck, Jeffrey; Bos, Jeremy P.
2017-05-01
We compare several modifications to the open-source wave optics package, WavePy, intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully-featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer some possibility for extensive improvement in terms of efficiency compared to a fully-featured workstation.
Performance Impact of Deflagration to Detonation Transition Enhancing Obstacles
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Schauer, Frederick; Hopper, David
2012-01-01
A sub-model is developed to account for the drag and heat transfer enhancement resulting from deflagration-to-detonation transition (DDT) inducing obstacles commonly used in pulse detonation engines (PDE). The sub-model is incorporated as a source term in a time-accurate, quasi-one-dimensional, CFD-based PDE simulation. The simulation and sub-model are then validated through comparison with a particular experiment in which limited DDT obstacle parameters were varied. The simulation is then used to examine the relative contributions from drag and heat transfer to the reduced thrust that is observed. It is found that heat transfer is far more significant than aerodynamic drag in this particular experiment.
A numerical method for shock driven multiphase flow with evaporating particles
NASA Astrophysics Data System (ADS)
Dahal, Jeevan; McFarland, Jacob A.
2017-09-01
A numerical method for predicting the interaction of active, phase-changing particles in a shock driven flow is presented in this paper. The Particle-in-Cell (PIC) technique was used to couple particles in a Lagrangian coordinate system with a fluid in an Eulerian coordinate system. The Piecewise Parabolic Method (PPM) hydrodynamics solver was used for solving the conservation equations and was modified with mass, momentum, and energy source terms from the particle phase. The method was implemented in the open source hydrodynamics software FLASH, developed at the University of Chicago. A simple validation of the methods is accomplished by comparing velocity and temperature histories from a single particle simulation with the analytical solution. Furthermore, simple single particle parcel simulations were run at two different sizes to study the effect of particle size on vorticity deposition in a shock-driven multiphase instability. Large particles were found to have lower enstrophy production at early times and higher enstrophy dissipation at late times due to the advection of the particle vorticity source term through the carrier gas. A 2D shock-driven instability of a circular perturbation is studied in simulations and compared to previous experimental data as further validation of the numerical methods. The effect of the particle size distribution and particle evaporation is examined further for this case. The results show that larger particles reduce the vorticity deposition, while particle evaporation increases it. It is also shown that for a distribution of particle sizes the vorticity deposition is decreased compared to the single-particle-size case at the mean diameter.
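A minimal sketch of the PIC-style two-way momentum coupling described above: each Lagrangian particle feels a drag force toward the local gas velocity, and an equal-and-opposite momentum source is deposited into the Eulerian cell containing it, so total momentum is conserved. The drag law (linear relaxation) and all numbers are illustrative placeholders.

```python
import numpy as np

nx, dx, dt = 8, 1.0, 0.01
u_gas = np.ones(nx)                  # gas velocity on the Eulerian grid
mom_src = np.zeros(nx)               # momentum source term handed to the gas solver

x_p = np.array([1.2, 3.7, 3.9])      # Lagrangian particle positions
u_p = np.array([0.0, 0.5, 2.0])      # particle velocities
m_p = np.full(3, 0.1)                # particle masses
tau_p = 0.05                         # drag response time

for i in range(len(x_p)):
    c = int(x_p[i] / dx)             # lowest-order (nearest-cell) deposition
    f_drag = m_p[i] * (u_gas[c] - u_p[i]) / tau_p
    u_p[i] += dt * f_drag / m_p[i]   # accelerate particle toward the gas velocity
    mom_src[c] -= dt * f_drag        # equal-and-opposite source to the gas phase

print(mom_src.round(4))
```

Note that the two particles sharing cell 3 deposit partially cancelling sources, illustrating how particle size and velocity lag set the net momentum (and hence vorticity) deposition.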
Three-Dimensional Model Synthesis of the Global Methane Cycle
NASA Technical Reports Server (NTRS)
Fung, I.; Prather, M.; John, J.; Lerner, J.; Matthews, E.
1991-01-01
A synthesis of the global methane cycle is presented to attempt to generate an accurate global methane budget. Methane-flux measurements, energy data, and agricultural statistics are merged with databases of land-surface characteristics and anthropogenic activities. The sources and sinks of methane are estimated based on atmospheric methane composition and variations, and a global 3D transport model simulates the corresponding atmospheric responses. The geographic and seasonal variations of candidate budgets are compared with observational data, and the available observations are used to constrain the plausible methane budgets. The preferred budget includes annual destruction rates and annual emissions for various sources. The lack of direct flux measurements in the regions of many of these fluxes makes the unique determination of each term impossible. OH oxidation is found to be the largest single term, although more measurements of this and other terms are recommended.
Physical/chemical closed-loop water-recycling
NASA Technical Reports Server (NTRS)
Herrmann, Cal C.; Wydeven, Theodore
1991-01-01
Water needs, water sources, and means for recycling water are examined in terms appropriate to the water quality requirements of a small crew and spacecraft intended for long duration exploration missions. Inorganic, organic, and biological hazards are estimated for waste water sources. Sensitivities to these hazards for human uses are estimated. The water recycling processes considered are humidity condensation, carbon dioxide reduction, waste oxidation, distillation, reverse osmosis, pervaporation, electrodialysis, ion exchange, carbon sorption, and electrochemical oxidation. Limitations and applications of these processes are evaluated in terms of water quality objectives. Computerized simulation of some of these chemical processes is examined. Recommendations are made for development of new water recycling technology and improvement of existing technology for near term application to life support systems for humans in space. The technological developments are equally applicable to water needs on Earth, in regions where extensive water recycling is needed or where advanced water treatment is essential to meet EPA health standards.
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Saumyadip; Abraham, John
2012-07-01
The unsteady flamelet progress variable (UFPV) model has been proposed by Pitsch and Ihme ["An unsteady/flamelet progress variable method for LES of nonpremixed turbulent combustion," AIAA Paper No. 2005-557, 2005] for modeling the averaged/filtered chemistry source terms in Reynolds averaged simulations and large eddy simulations of non-premixed combustion. In the UFPV model, a look-up table of source terms is generated as a function of mixture fraction Z, scalar dissipation rate χ, and progress variable C by solving the unsteady flamelet equations. The assumption is that the unsteady flamelet represents the evolution of the reacting mixing layer in the non-premixed flame. We assess the accuracy of the model in predicting autoignition and flame development in compositionally stratified n-heptane/air mixtures using direct numerical simulations (DNS). The focus in this work is primarily on the assessment of accuracy of the probability density functions (PDFs) employed for obtaining averaged source terms. The performance of commonly employed presumed functions, such as the Dirac delta distribution function, the β distribution function, and the statistically most likely distribution (SMLD) approach, in approximating the shapes of the PDFs of the reactive and the conserved scalars is evaluated. For unimodal distributions, it is observed that functions that need two-moment information, e.g., the β distribution function and the SMLD approach with two-moment closure, are able to reasonably approximate the actual PDF. As the distribution becomes multimodal, higher moment information is required. Differences are observed between the ignition trends obtained from DNS and those predicted by the look-up table, especially for smaller gradients where the flamelet assumption becomes less applicable. The formulation assumes that the shape of the χ(Z) profile can be modeled by an error function which remains unchanged in the presence of heat release.
We show that this assumption is not accurate.
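A sketch of the presumed-PDF averaging step discussed above: the mean source term is the integral of a tabulated source S(Z) against a β PDF in mixture fraction whose shape parameters follow from the first two moments (mean and variance). The S(Z) profile here is an arbitrary stand-in, not a flamelet table.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_params(z_mean, z_var):
    """Map (mean, variance) to beta-distribution shape parameters (a, b)."""
    g = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * g, (1.0 - z_mean) * g

def mean_source(S, z_mean, z_var, n=2001):
    """Averaged source term: integral of S(Z) weighted by the presumed beta PDF."""
    a, b = beta_params(z_mean, z_var)
    z = np.linspace(1e-6, 1.0 - 1e-6, n)
    p = beta_dist.pdf(z, a, b)
    dz = z[1] - z[0]
    return float(np.sum(S(z) * p) * dz)

S = lambda z: np.exp(-((z - 0.3) / 0.1) ** 2)   # stand-in source profile in Z
print(mean_source(S, z_mean=0.3, z_var=0.01))
```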
Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing
NASA Astrophysics Data System (ADS)
Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline
2017-11-01
Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei
2017-04-01
Because the standard lattice Boltzmann (LB) method is formulated for the Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. Particularly, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and then the accuracy of different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicates that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.
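To make the forcing-scheme distinction concrete, here is a D2Q9 sketch contrasting a direct (first-order) force discretization with a second-order Guo-type scheme carrying the (1 - 1/(2τ)) factor that removes the discrete lattice effect. This is a generic LB forcing illustration under the standard D2Q9 weights, not the exact axisymmetric source terms of the paper.

```python
import numpy as np

# D2Q9 lattice weights, velocities, and sound speed squared
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
e = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]], dtype=float)
cs2 = 1.0 / 3.0

def direct_forcing(F):
    """First-order ('direct') force term: w_i (e_i . F) / cs^2."""
    return w * (e @ F) / cs2

def guo_forcing(F, u, tau):
    """Second-order scheme with the (1 - 1/(2 tau)) discrete-lattice correction."""
    return (1.0 - 0.5 / tau) * w * ((e @ F - u @ F) / cs2
                                    + (e @ u) * (e @ F) / cs2**2)

F = np.array([1e-4, 0.0])            # body-force / axisymmetric-type source
u = np.array([0.05, 0.0])            # local velocity
tau = 0.8                            # relaxation time
print(direct_forcing(F).round(8))
print(guo_forcing(F, u, tau).round(8))
```

By construction the Guo-type populations carry zero net mass and first moment (1 - 1/(2τ)) F, which is what recovers the correct macroscopic momentum equation.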
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Fotia, Matthew L.; Hoke, John; Schauer, Fred
2015-01-01
A quasi-two-dimensional, computational fluid dynamic (CFD) simulation of a rotating detonation engine (RDE) is described. The simulation operates in the detonation frame of reference and utilizes a relatively coarse grid such that only the essential primary flow field structure is captured. This construction and other simplifications yield rapidly converging, steady solutions. Viscous effects, and heat transfer effects are modeled using source terms. The effects of potential inlet flow reversals are modeled using boundary conditions. Results from the simulation are compared to measured data from an experimental RDE rig with a converging-diverging nozzle added. The comparison is favorable for the two operating points examined. The utility of the code as a performance optimization tool and a diagnostic tool are discussed.
NASA Astrophysics Data System (ADS)
Morino, Yu; Ohara, Toshimasa; Yumimoto, Keiya
2014-05-01
Chemical transport models (CTM) played key roles in understanding the atmospheric behaviors and deposition patterns of radioactive materials emitted from the Fukushima Daiichi nuclear power plant (FDNPP) after the nuclear accident that accompanied the great Tohoku earthquake and tsunami on 11 March 2011. In this study, we assessed uncertainties of atmospheric simulation by comparing observed and simulated deposition of radiocesium (137Cs) and radioiodine (131I). Airborne monitoring survey data were used to assess the model performance for 137Cs deposition patterns. We found that simulation using emissions estimated with a regional-scale (~500 km) CTM better reproduced the observed 137Cs deposition pattern in eastern Japan than simulation using emissions estimated with a local-scale (~50 km) or global-scale CTM. In addition, we estimated the emission amount of 137Cs from FDNPP by combining a CTM, a priori source term, and observed deposition data. This is the first use of airborne survey data of 137Cs deposition (more than 16,000 data points) as the observational constraints in inverse modeling. The model simulation driven by the a posteriori source term achieved better agreement with 137Cs depositions measured by aircraft survey and at in-situ stations over eastern Japan. The wet deposition module was also evaluated. Simulation using a process-based wet deposition module reproduced the observations well, whereas simulation using scavenging coefficients showed large uncertainties associated with empirical parameters. The best-available simulation reproduced the observed 137Cs deposition rates in high-deposition areas (≥10 kBq m-2) within one order of magnitude. Recently, a 131I deposition map was released and helped to evaluate model performance for 131I deposition patterns.
The observed 131I/137Cs deposition ratio is higher in areas southwest of FDNPP than northwest of FDNPP, and this behavior was roughly reproduced by a CTM if released 131I is assumed to be mostly in the gas phase rather than in particles. Analysis of 131I deposition gives us a better constraint for the atmospheric simulation of 131I, which is important in assessing public radiation exposure.
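A schematic of the inverse-modeling step described above: observed depositions y relate to time-resolved emissions x through a CTM-derived source-receptor matrix H, and a Tikhonov-type cost with an a priori term x_a keeps the estimate well-posed. H, y, and x_a below are random placeholders standing in for CTM output and survey data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_src = 50, 6                  # deposition observations, emission time segments
H = rng.random((n_obs, n_src))        # stand-in source-receptor (CTM sensitivity) matrix
x_true = rng.random(n_src) * 10.0     # "true" emissions used to synthesize observations
y = H @ x_true + rng.normal(0, 0.1, n_obs)
x_a = np.full(n_src, 5.0)             # a priori source term
lam = 0.01                            # regularization weight

# minimize ||H x - y||^2 + lam * ||x - x_a||^2  -> normal equations
A = H.T @ H + lam * np.eye(n_src)
x_post = np.linalg.solve(A, H.T @ y + lam * x_a)
print(x_post.round(2))
```

The a posteriori estimate x_post fits the synthetic observations far better than the a priori guess, mirroring the improved agreement reported for the 137Cs depositions.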
The Prodiguer Messaging Platform
NASA Astrophysics Data System (ADS)
Greenslade, Mark; Denvil, Sebastien; Raciazek, Jerome; Carenton, Nicolas; Levavasseur, Guillame
2014-05-01
CONVERGENCE is a French multi-partner national project designed to gather HPC and informatics expertise to innovate in the context of running French climate models with differing grids and at differing resolutions. Efficient and reliable execution of these models and the management and dissemination of model output (data and meta-data) are just some of the complexities that CONVERGENCE aims to resolve. The Institut Pierre Simon Laplace (IPSL) is responsible for running climate simulations upon a set of heterogeneous HPC environments within France. With heterogeneity comes added complexity in terms of simulation instrumentation and control. Obtaining a global perspective upon the state of all simulations running upon all HPC environments has hitherto been problematic. In this presentation we detail how, within the context of CONVERGENCE, the implementation of the Prodiguer messaging platform resolves complexity and permits the development of real-time applications such as: 1. a simulation monitoring dashboard; 2. a simulation metrics visualizer; 3. an automated simulation runtime notifier; 4. an automated output data & meta-data publishing pipeline. The Prodiguer messaging platform leverages a widely used open source message broker software called RabbitMQ. RabbitMQ itself implements the Advanced Message Queuing Protocol (AMQP). Hence it will be demonstrated that the Prodiguer messaging platform is built upon both open source and open standards.
Andersen, Gary L.; Frisch, A.S.; Kellogg, Christina A.; Levetin, E.; Lighthart, Bruce; Paterno, D.
2009-01-01
The most prevalent microorganisms, viruses, bacteria, and fungi, are introduced into the atmosphere from many anthropogenic sources such as agricultural, industrial and urban activities, termed microbial air pollution (MAP), and natural sources. These include soil, vegetation, and ocean surfaces that have been disturbed by atmospheric turbulence. The airborne concentrations range from nil to great numbers and change as functions of time of day, season, location, and upwind sources. While airborne, they may settle out immediately or be transported great distances. Further, most viable airborne cells can be rendered nonviable due to temperature effects, dehydration or rehydration, UV radiation, and/or air pollution effects. Mathematical microbial survival models that simulate these effects have been developed.
Cohen, Michael X; Gulbinaite, Rasa
2017-02-15
Steady-state evoked potentials (SSEPs) are rhythmic brain responses to rhythmic sensory stimulation, and are often used to study perceptual and attentional processes. We present a data analysis method for maximizing the signal-to-noise ratio of the narrow-band steady-state response in the frequency and time-frequency domains. The method, termed rhythmic entrainment source separation (RESS), is based on denoising source separation approaches that take advantage of the simultaneous but differential projection of neural activity to multiple electrodes or sensors. Our approach is a combination and extension of existing multivariate source separation methods. We demonstrate that RESS performs well on both simulated and empirical data, and outperforms conventional SSEP analysis methods based on selecting electrodes with the strongest SSEP response, as well as several other linear spatial filters. We also discuss the potential confound of overfitting, whereby the filter captures noise in absence of a signal. Matlab scripts are available to replicate and extend our simulations and methods. We conclude with some practical advice for optimizing SSEP data analyses and interpreting the results. Copyright © 2016 Elsevier Inc. All rights reserved.
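A minimal sketch of the generalized-eigendecomposition idea behind RESS-style spatial filtering: find channel weights w maximizing (wᵀSw)/(wᵀRw), where S is the covariance of the signal-related part of the data and R a reference ("noise") covariance. In practice both come from narrow-band filtering around the stimulation and neighboring frequencies; here they are built from simulated multichannel data, not EEG.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n_ch, n_t = 16, 5000
fs, f_stim = 500.0, 12.0
t = np.arange(n_t) / fs
mix = rng.standard_normal(n_ch)                       # topography of the rhythmic source
signal = np.outer(mix, np.sin(2 * np.pi * f_stim * t))
data = signal + rng.standard_normal((n_ch, n_t))      # source mixed with sensor noise

def cov(x):
    x = x - x.mean(axis=1, keepdims=True)
    return x @ x.T / x.shape[1]

S = cov(signal)                 # stand-in "signal" covariance (narrow-band in practice)
R = cov(data)                   # stand-in reference/broadband covariance
evals, evecs = eigh(S, R)       # generalized eigendecomposition, ascending eigenvalues
w = evecs[:, -1]                # spatial filter = eigenvector with largest eigenvalue
component = w @ data            # single maximally rhythmic time course
print(component.shape)
```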
Reporting inquiry in simulation.
Kardong-Edgren, Suzie; Gaba, David; Dieckmann, Peter; Cook, David A
2011-08-01
The term "inquiry" covers the large spectrum of what people are currently doing in the nascent field of simulation. This monograph proposes appropriate means of dissemination for the many different levels of inquiry that may arise from the Summit or other sources of inspiration. We discuss various methods of inquiry and where they might fit in the hierarchy of reporting and dissemination. We provide guidance for deciding whether an inquiry has reached the level of development required for publication in a peer-reviewed journal and conclude with a discussion of what most journals view as inquiry acceptable for publication.
NASA Astrophysics Data System (ADS)
Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.
2014-09-01
Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with t^(-1/2) (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend to scale the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and gradually shift to applying a t^(-1/2) time-domain taper and scaling the waveforms with r√2 for larger receiver offsets r. We call this the hybrid transformation, which is adapted for direct body and Rayleigh waves, and demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves.
In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when no explicit correction is applied to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance, in terms of model reproduction and the ability to reproduce the original data in a 3-D simulation, is best when the inverted waveforms are obtained by the hybrid transformation.
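The hybrid transformation described above can be sketched in a few lines of code. This is a minimal illustration only: the linear blending between the near-offset and far-offset regimes, the function names, and the `r_near` crossover offset are assumptions for the sketch, not the authors' exact recipe.

```python
import numpy as np

def hybrid_line_source_transform(trace, t, r, v_ph, r_near=5.0):
    """Approximate 3-D point-source trace -> 2-D line-source trace.

    Near offsets: scale amplitudes by sqrt(2 * r * v_ph).
    Far offsets: apply a 1/sqrt(t) time-domain taper and scale by r * sqrt(2).
    The linear blend between the two regimes (controlled by r_near) is an
    illustrative assumption.
    """
    eps = 1e-12
    near = np.sqrt(2.0 * r * v_ph) * trace
    far = r * np.sqrt(2.0) * trace / np.sqrt(np.maximum(t, eps))
    w = np.clip(r / r_near, 0.0, 1.0)  # 0 -> pure near-offset, 1 -> pure far-offset
    return (1.0 - w) * near + w * far
```

For offsets beyond `r_near` the result reduces to the pure far-offset taper; at zero offset it reduces to the near-offset scaling.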
NASA Astrophysics Data System (ADS)
Kawamura, H.; Furuno, A.; Kobayashi, T.; In, T.; Nakayama, T.; Ishikawa, Y.; Miyazawa, Y.; Usui, N.
2017-12-01
To understand the concentration and amount of Fukushima-derived Cs-137 in the ocean, this study simulates the oceanic dispersion of Cs-137 using the oceanic dispersion model SEA-GEARN-FDM, developed at the Japan Atomic Energy Agency (JAEA), together with multiple oceanic general circulation models. The Cs-137 deposition amounts at the sea surface, estimated by atmospheric dispersion simulations with the worldwide version of the System for Prediction of Environmental Emergency Dose Information II (WSPEEDI-II) developed at JAEA, were used as one source term in the oceanic dispersion simulations. The direct release from the Fukushima Daiichi Nuclear Power Plant into the ocean, based on in situ Cs-137 measurements, was used as the other source term. The simulated air Cs-137 concentrations qualitatively replicated those measured around the North Pacific. The accumulated Cs-137 ground deposition in the eastern Japanese Islands was consistent with that estimated by aircraft measurements. The oceanic dispersion simulations reproduced the measured Cs-137 concentrations relatively well in the coastal and offshore oceans during the first few months after the Fukushima disaster, and in the open ocean during the first year post-disaster. The results suggest that Cs-137 dispersed along the coast in the north-south direction during the first few months post-disaster and was subsequently dispersed offshore by the Kuroshio Current and Kuroshio Extension. Mesoscale eddies accompanying the Kuroshio Current and Kuroshio Extension played an important role in the dilution of Cs-137. The Cs-137 amounts in the coastal, offshore, and open oceans were quantified for the first year post-disaster. It was demonstrated that Cs-137 actively dispersed from the coastal and offshore oceans to the open ocean, and from the surface layer to deeper layers in the North Pacific.
A multi-scalar PDF approach for LES of turbulent spray combustion
NASA Astrophysics Data System (ADS)
Raman, Venkat; Heye, Colin
2011-11-01
A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion, and tests are conducted to analyze its validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed, but it requires models for the small-scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulations of a spray flame at three different fuel droplet Stokes numbers and of an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.
Jensen, Lars Liengaard; Merrison, Jonathan; Hansen, Aviaja Anna; Mikkelsen, Karina Aarup; Kristoffersen, Tommy; Nørnberg, Per; Lomstein, Bente Aagaard; Finster, Kai
2008-06-01
We describe the design, construction, and pilot operation of a Mars simulation facility comprised of a cryogenic environmental chamber, an atmospheric gas analyzer, and a xenon/mercury discharge source for UV generation. The Mars Environmental Simulation Chamber (MESCH) consists of a double-walled cylindrical chamber. The double wall provides a cooling mantle through which liquid N2 can be circulated. A load-lock system that consists of a small pressure-exchange chamber, which can be evacuated, allows for the exchange of samples without changing the chamber environment. Fitted within the MESCH is a carousel, which holds up to 10 steel sample tubes. Rotation of the carousel is controlled by an external motor. Each sample in the carousel can be placed at any desired position. Environmental data, such as temperature, pressure, and UV exposure time, are computer logged and used in automated feedback mechanisms, enabling a wide variety of experiments that include time series. Tests of the simulation facility have successfully demonstrated its ability to produce temperature cycles and maintain low temperature (down to -140 °C), low atmospheric pressure (5-10 mbar), and a gas composition like that of Mars during long-term experiments.
NASA Astrophysics Data System (ADS)
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method of electromagnetic theory as well as established discrete-source-based modeling, we report on an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), that inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light of the standard DA is approximated as multiple isotropic point sources (the VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and is further applied to image reconstruction in the Laminar Optical Tomography system.
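A minimal sketch of the virtual-source idea follows, assuming a steady-state infinite-medium diffusion Green's function. The VS depths and weights are left as free parameters to be fitted here, whereas the paper derives closed-form values; all function names are illustrative.

```python
import numpy as np

def diffusion_green(r, mu_a, mu_s_prime):
    """Steady-state diffusion Green's function in an infinite medium."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))  # diffusion coefficient
    mu_eff = np.sqrt(mu_a / D)             # effective attenuation coefficient
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

def vs_fluence(rho, depths, weights, mu_a, mu_s_prime):
    """Fluence at lateral distance rho from the beam axis, modeled as a
    weighted sum of isotropic virtual sources placed at `depths` along
    the incident direction (depths/weights are free fit parameters)."""
    total = 0.0
    for z, w in zip(depths, weights):
        total += w * diffusion_green(np.hypot(rho, z), mu_a, mu_s_prime)
    return total
```

A two-source version of this sum, with fitted intensities and locations, corresponds to the explicit 2VS-DA form mentioned above.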
NASA Astrophysics Data System (ADS)
Johnson, Ryan Federick; Chelliah, Harsha Kumar
2017-01-01
For a range of flow and chemical timescales, numerical simulations of two-dimensional laminar flow over a reacting carbon surface were performed to further understand the complex coupling between heterogeneous and homogeneous reactions. An open-source computational package (OpenFOAM®) was used with previously developed lumped heterogeneous reaction models for carbon surfaces and a detailed homogeneous reaction model for CO oxidation. The influence of finite-rate chemical kinetics was explored by varying the surface temperature from 1800 to 2600 K, while flow residence time effects were explored by varying the free-stream velocity up to 50 m/s. The dependence of the reacting boundary layer structure on residence time was analysed by extracting the ratio of the chemical source and species diffusion terms. The important contributions of radical species reactions to the overall carbon removal rate, which are often neglected in multi-dimensional simulations, are highlighted. The results provide a framework for future development and validation of lumped heterogeneous reaction models based on multi-dimensional reacting flow configurations.
Simplified contaminant source depletion models as analogs of multiphase simulators
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-04-01
Four simplified dense non-aqueous phase liquid (DNAPL) source depletion models recently introduced in the literature are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. The spill and subsequent dissolution of DNAPLs was simulated in domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1 and 3) using the multiphase flow and transport simulator UTCHEM. The dissolution profiles were fitted using four analytical models: the equilibrium streamtube model (ESM), the advection dispersion model (ADM), the power law model (PLM) and the Damkohler number model (DaM). All four models, though very different in their conceptualization, include two basic parameters that describe the mean DNAPL mass and the joint variability in the velocity and DNAPL distributions. The variability parameter was observed to be strongly correlated with the variance of the log conductivity field in the ESM and ADM but weakly correlated in the PLM and DaM. The DaM also includes a third parameter that describes the effect of rate-limited dissolution, but here this parameter was held constant as the numerical simulations were found to be insensitive to local-scale mass transfer. All four models were able to emulate the characteristics of the dissolution profiles generated from the complex numerical simulator, but the one-parameter PLM fits were the poorest, especially for the low heterogeneity case.
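Of the four analytical models above, the power law model (PLM) has the simplest form. A common statement of it relates the source discharge concentration to the remaining DNAPL mass fraction, C = C0 (M/M0)^Γ; the sketch below assumes that form and uses a simple log-space least-squares fit, which is illustrative and not the calibration procedure used in the paper.

```python
import numpy as np

def plm_concentration(mass_frac, c0, gamma):
    """Power law model: discharge concentration as a function of the
    remaining DNAPL mass fraction, C = c0 * (M/M0)**gamma."""
    return c0 * np.power(mass_frac, gamma)

def fit_gamma(mass_frac, conc, c0):
    """Least-squares estimate of gamma in log space (illustrative fit):
    log(C/c0) = gamma * log(M/M0) is a line through the origin."""
    x = np.log(mass_frac)
    y = np.log(conc / c0)
    return float(np.sum(x * y) / np.sum(x * x))
```

Fitting a dissolution profile produced by a multiphase simulator then reduces to estimating the single exponent Γ, which is why the one-parameter PLM can struggle in the low-heterogeneity case noted above.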
Systematic study of target localization for bioluminescence tomography guided radiation therapy
Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.
2016-01-01
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulation, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3-12 mm. The same configuration was also applied for the double-source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, the simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. 
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy can be attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources can be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy can also be achieved. Conclusions: This study demonstrated that the multispectral BLT/CBCT system could potentially be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models. PMID:27147371
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakao, N.; /SLAC; Taniguchi, S.
Neutron energy spectra were measured behind the lateral shield of the CERF (CERN-EU High Energy Reference Field) facility at CERN with a 120 GeV/c positive hadron beam (a mixture of mainly protons and pions) on a cylindrical copper target (7-cm diameter by 50-cm long). An NE213 organic liquid scintillator (12.7-cm diameter by 12.7-cm long) was located at various longitudinal positions behind shields of 80- and 160-cm thick concrete and 40-cm thick iron. The measurement locations cover an angular range with respect to the beam axis between 13° and 133°. Neutron energy spectra in the energy range between 32 MeV and 380 MeV were obtained by unfolding the measured pulse height spectra with the detector response functions, which had been verified in the neutron energy range up to 380 MeV in separate experiments. Since the source term and experimental geometry in this experiment are well characterized and simple, and the results are given in the form of energy spectra, these experimental results are very useful as benchmark data to check the accuracy of simulation codes and nuclear data. Monte Carlo simulations of the experimental setup were performed with the FLUKA, MARS and PHITS codes. Simulated spectra for the 80-cm thick concrete often agree within the experimental uncertainties. On the other hand, for the 160-cm thick concrete and the iron shield, differences are generally larger than the experimental uncertainties, yet within a factor of 2. Based on source term simulations, the observed discrepancies among simulated spectra outside the shield can be partially explained by differences in the high-energy hadron production in the copper target.
NASA Astrophysics Data System (ADS)
Achim, Pascal; Generoso, Sylvia; Morin, Mireille; Gross, Philippe; Le Petit, Gilbert; Moulin, Christophe
2016-05-01
Monitoring atmospheric concentrations of radioxenons is relevant to provide evidence of atmospheric or underground nuclear weapon tests. However, when the design of the International Monitoring System (IMS) of the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was set up, the impact of industrial releases was not perceived. It is now well known that the industrial radioxenon signature can interfere with that of nuclear tests. Therefore, there is a crucial need to characterize atmospheric distributions of radioxenons from industrial sources (the so-called atmospheric background) in the frame of the CTBT. Two years of Xe-133 atmospheric background have been simulated using 2013 and 2014 meteorological data together with the most comprehensive emission inventory of radiopharmaceutical facilities and nuclear power plants to date. Annual average simulated activity concentrations vary from 0.01 mBq/m3 up to above 5 mBq/m3 near major sources. Average measured and simulated concentrations agree at most of the IMS stations, which indicates that the main sources during the time frame are properly captured. The Xe-133 atmospheric background simulated at IMS stations turns out to be a complex combination of sources. The stations most impacted are in Europe and North America and can potentially detect Xe-133 every day. Predicted occurrences of detections of atmospheric Xe-133 show seasonal variations, more accentuated in the Northern Hemisphere, where the maximum occurs in winter. To our knowledge, this study presents the first global maps of the Xe-133 atmospheric background from industrial sources based on two years of simulation and is a first attempt to analyze its composition in terms of origin at IMS stations.
VizieR Online Data Catalog: FARGO_THORIN 1.0 hydrodynamic code (Chrenko+, 2017)
NASA Astrophysics Data System (ADS)
Chrenko, O.; Broz, M.; Lambrechts, M.
2017-07-01
This archive contains the source files, documentation and example simulation setups of the FARGO_THORIN 1.0 hydrodynamic code. The program was introduced, described and used for simulations in the paper. It is built on top of the FARGO code (Masset, 2000A&AS..141..165M; Baruteau & Masset, 2008ApJ...672.1054B) and it is also interfaced with the REBOUND integrator package (Rein & Liu, 2012A&A...537A.128R). THORIN stands for Two-fluid HydrOdynamics, the Rebound integrator Interface and Non-isothermal gas physics. The program is designed for self-consistent investigations of protoplanetary systems consisting of a gas disk, a disk of small solid particles (pebbles) and embedded protoplanets. Code features: I) Non-isothermal gas disk with implicit numerical solution of the energy equation. The implemented energy source terms are: compressional heating, viscous heating, stellar irradiation, vertical escape of radiation, radiative diffusion in the midplane and radiative feedback to accretion heating of protoplanets. II) Planets evolved in 3D, with close encounters allowed. The orbits are integrated using the IAS15 integrator (Rein & Spiegel, 2015MNRAS.446.1424R). The code detects collisions among planets and resolves them as mergers. III) Refined treatment of the planet-disk gravitational interaction. The code uses a vertical averaging of the gravitational potential, as outlined in Muller & Kley (2012A&A...539A..18M). IV) Pebble disk represented by an Eulerian, pressureless and inviscid fluid. The pebble dynamics is affected by the Epstein gas drag and optionally by diffusive effects. We also implemented the drag back-reaction term into the Navier-Stokes equation for the gas. 
Archive summary:
- /in_relax: setup of the first example simulation
- /in_wplanet: setup of the second example simulation
- /src_main: source files of FARGO_THORIN
- /src_reb: source files of the REBOUND integrator package to be linked with THORIN
- GNUGPL3: GNU General Public License, version 3
- LICENSE: license agreement
- README: simple user's guide
- UserGuide.pdf: extended user's guide
- refman.pdf: programmer's guide
(1 data file)
A new DOD and DOA estimation method for MIMO radar
NASA Astrophysics Data System (ADS)
Gong, Jian; Lou, Shuntian; Guo, Yiduo
2018-04-01
The battlefield electromagnetic environment is becoming more and more complex, and MIMO radar will inevitably be affected by coherent and non-stationary noise. To solve this problem, an angle estimation method based on the oblique projection operator and Toeplitz matrix reconstruction is proposed. Through Toeplitz matrix reconstruction, non-stationary noise is transformed into Gaussian white noise, and the oblique projection operator is then used to separate independent and correlated sources. Finally, simulations are carried out to verify the performance of the proposed algorithm in terms of angle estimation and source overload.
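Toeplitz matrix reconstruction in the sense used above is typically performed by averaging the sample covariance matrix along its diagonals. The sketch below shows only that reconstruction step (the oblique projection separation of independent and correlated sources is omitted), and the function name is illustrative.

```python
import numpy as np

def toeplitzify(R):
    """Reconstruct a Hermitian Toeplitz matrix from a sample covariance
    matrix by averaging along its diagonals, a standard step for
    whitening structured noise in array processing."""
    n = R.shape[0]
    # Mean of each subdiagonal gives the first column of the Toeplitz matrix.
    first_col = np.array([np.mean(np.diag(R, -k)) for k in range(n)])
    T = np.empty_like(R)
    for i in range(n):
        for j in range(n):
            k = i - j
            T[i, j] = first_col[k] if k >= 0 else np.conj(first_col[-k])
    return T
```

Applying this averaging to the array covariance restores the Toeplitz structure that a uniform linear array would have under spatially white noise, which is what makes the subsequent subspace-based DOD/DOA estimation tractable.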
In-vehicle group activity modeling and simulation in sensor-based virtual environment
NASA Astrophysics Data System (ADS)
Shirkhodaie, Amir; Telagamsetti, Durga; Poshtyar, Azin; Chan, Alex; Hu, Shuowen
2016-05-01
Human group activity recognition is a very complex and challenging task, especially for Partially Observable Group Activities (POGA) that occur in confined spaces with limited visual observability and often under severe occlusion. In this paper, we present the IRIS Virtual Environment Simulation Model (VESM) for the modeling and simulation of dynamic POGA. More specifically, we address sensor-based modeling and simulation of a specific category of POGA, called In-Vehicle Group Activities (IVGA). In VESM, human-like animated characters, called humanoids, are employed to simulate complex in-vehicle group activities within the confined space of a modeled vehicle. Each articulated humanoid is kinematically modeled with comparable physical attributes and appearances that are linkable to its human counterpart. Each humanoid exhibits harmonious full-body motion, simulating human-like gestures and postures, facial expressions, and hand motions for coordinated dexterity. VESM facilitates the creation of interactive scenarios consisting of multiple humanoids with different personalities and intentions, which are capable of performing complicated human activities within the confined space inside a typical vehicle. In this paper, we demonstrate the efficiency and effectiveness of VESM in terms of its capabilities to seamlessly generate time-synchronized, multi-source, and correlated imagery datasets of IVGA, which are useful for the training and testing of multi-source full-motion video processing and annotation. Furthermore, we demonstrate full-motion video processing of such simulated scenarios under different operational contextual constraints.
What's in a ray set: moving towards a unified ray set format
NASA Astrophysics Data System (ADS)
Muschaweck, Julius
2011-10-01
For the purpose of optical simulation, a plethora of formats exist to describe the properties of a light source. Except for the EULUMDAT and IES formats, which describe sources in terms of aperture area and far-field intensity, all these formats are vendor specific, and no generally accepted standard exists. Most illumination simulation software vendors use their own format for ray sets, which describe sources in terms of many rays. Some of them keep their format definition proprietary. Thus, software packages typically can read or write only their own specific format, although the actual data content is not so different. Typically, they describe the origin and direction of each ray in 3D vectors, and use one more number for the magnitude, where magnitude may denote radiant flux, luminous flux (equivalently tristimulus Y), or tristimulus X and Z. Sometimes each ray also carries its wavelength, while other formats allow an overall spectrum to be specified for the whole source. In addition, in at least one format, polarization properties are also included for each ray. This situation makes it inefficient and potentially error prone for light source manufacturers to provide ray data sets for their sources in many different formats. Furthermore, near field goniometer vendors again use their proprietary formats to store the source description in terms of luminance data, and offer their proprietary software to generate ray sets from this data base. Again, this plethora of formats makes ray set production inefficient and potentially error prone. In this paper, we propose to describe ray data sets in terms of phase space, as a step towards a standardized ray set format. It is well known that luminance and radiance can be defined as flux density in phase space: luminance is flux divided by etendue. Therefore, single rays can be thought of as center points of phase space cells, where each cell possesses its volume (i.e. etendue), its flux, and therefore its luminance. 
In addition, each phase space cell possesses its spectrum and its polarization properties. We show how this approach leads to a unification of the EULUMDAT/IES, ray set and near field goniometer formats, making possible the generation of arbitrarily many additional rays by luminance interpolation. We also show how the EULUMDAT/IES and individual ray set formats can be derived from the proposed general format, so that software using a possible standard format remains downward compatible.
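A ray record in the proposed phase-space view might look like the following sketch. The field names, units, and spectrum representation are illustrative assumptions, not a published standard; the point is that each ray carries its cell's etendue and flux, so luminance is recoverable per ray.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PhaseSpaceRay:
    """A ray as the center point of a phase-space cell. The cell carries
    its own etendue and flux, so luminance = flux / etendue, matching the
    definition of luminance as flux density in phase space."""
    origin: Tuple[float, float, float]     # position, m
    direction: Tuple[float, float, float]  # unit direction vector
    flux: float                            # lumens (or watts for radiance)
    etendue: float                         # cell volume in phase space, m^2*sr
    spectrum: List[Tuple[float, float]] = field(default_factory=list)  # (nm, weight)

    @property
    def luminance(self) -> float:
        """Luminance of the phase-space cell this ray represents."""
        return self.flux / self.etendue
```

Because every ray exposes a luminance, additional rays can in principle be generated by interpolating luminance between neighboring cells, which is the unification step the paper argues for.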
Hong, Hongwei; Rahal, Mohamad; Demosthenous, Andreas; Bayford, Richard H
2009-10-01
Multi-frequency electrical impedance tomography (MF-EIT) systems require current sources that are accurate over a wide frequency range (up to 1 MHz) and under large load impedance variations. The most commonly employed current source design in EIT systems is the modified Howland circuit (MHC). The MHC requires tight matching of resistors to achieve high output impedance and may suffer from instability over a wide frequency range in an integrated solution. In this paper, we introduce a new integrated current source design in CMOS technology and compare its performance with the MHC. The new integrated design has advantages over the MHC in terms of power consumption and area. The output current and the output impedance of both circuits were determined through simulations and measurements over the frequency range of 10 kHz to 1 MHz. For frequencies up to 1 MHz, the measured maximum variation of the output current for the integrated current source is 0.8%, whereas for the MHC the corresponding value is 1.5%. Although the integrated current source has an output impedance greater than 1 MΩ up to 1 MHz in simulations, in practice the impedance is greater than 160 kΩ up to 1 MHz due to the presence of stray capacitance.
NASA Astrophysics Data System (ADS)
Feng, Chi; UCNb Collaboration
2011-10-01
It is theorized that contributions to the Fierz interference term from scalar interactions beyond the Standard Model could be detectable in the spectrum of neutron beta-decay. The UCNb experiment, run at the Los Alamos Neutron Science Center, aims to measure the neutron beta-decay energy spectrum accurately enough to detect a nonzero interference term. The instrument consists of a cubic "integrating sphere" calorimeter with up to four photomultiplier tubes attached. The inside of the calorimeter is coated with white paint and a thin UV-scintillating layer made of deuterated polystyrene to contain the ultracold neutrons. A Monte Carlo simulation using the Geant4 toolkit was developed to provide an accurate method of energy reconstruction. Offline calibration with the Kellogg Radiation Laboratory 140 keV electron gun and conversion-electron sources will be used to validate the Monte Carlo simulation, giving confidence in the energy reconstruction methods and a better understanding of systematics in the experimental data.
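The Fierz term enters the allowed beta spectrum as a multiplicative factor (1 + b m_e/E), where E is the total electron energy. The sketch below assumes the bare phase-space shape only (no Fermi function, radiative or recoil corrections), so it illustrates the distortion being searched for rather than the experiment's full spectrum model.

```python
import numpy as np

M_E = 0.511          # electron rest mass, MeV
E0 = 0.782 + M_E     # approximate endpoint total energy for neutron decay, MeV

def beta_spectrum(E_kin, b=0.0):
    """Allowed beta spectrum shape with a Fierz interference term b:
    dGamma/dE ~ p * E * (E0 - E)^2 * (1 + b * m_e / E).
    Phase-space factor only; Coulomb and recoil corrections omitted.
    E_kin is the electron kinetic energy in MeV."""
    E = E_kin + M_E                                 # total energy
    p = np.sqrt(np.maximum(E**2 - M_E**2, 0.0))     # electron momentum
    shape = p * E * np.maximum(E0 - E, 0.0)**2
    return shape * (1.0 + b * M_E / E)
```

Because the b-dependent factor falls off as 1/E, a nonzero Fierz term enhances or suppresses the low-energy end of the spectrum, which is why accurate calorimetry at low electron energies matters for this measurement.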
New thermal neutron calibration channel at LNMRI/IRD
NASA Astrophysics Data System (ADS)
Astuto, A.; Patrão, K. C. S.; Fonseca, E. S.; Pereira, W. W.; Lopes, R. T.
2016-07-01
A new standard thermal neutron flux unit was designed at the National Ionizing Radiation Metrology Laboratory (LNMRI) for the calibration of neutron detectors. Fluence is achieved by moderation of four 241Am-Be sources of 0.6 TBq each, in a facility built with graphite and paraffin blocks. The study was divided into two stages. First, simulations were performed using the MCNPX code for different geometric arrangements, seeking the best performance in terms of fluence and its uncertainties. Then, the system was assembled based on the results obtained from the simulations. The simulation results indicate a quasi-homogeneous fluence in the central chamber and H*(10) at 50 cm from the front face with the polyethylene filter.
Modeling long-term trends of chlorinated ethene contamination at a public supply well
Chapelle, Francis H.; Kauffman, Leon J.; Widdowson, Mark A.
2015-01-01
A mass-balance solute-transport modeling approach was used to investigate the effects of dense nonaqueous phase liquid (DNAPL) volume, composition, and generation of daughter products on simulated and measured long-term trends of chlorinated ethene (CE) concentrations at a public supply well. The model was built by telescoping a calibrated regional three-dimensional MODFLOW model to the capture zone of a public supply well that has a history of CE contamination. The local model was then used to simulate the interactions between naturally occurring organic carbon that acts as an electron donor, and dissolved oxygen (DO), CEs, ferric iron, and sulfate that act as electron acceptors using the Sequential Electron Acceptor Model in three dimensions (SEAM3D) code. The modeling results indicate that asymmetry between rapidly rising and more gradual falling concentration trends over time suggests a DNAPL rather than a dissolved source of CEs. Peak concentrations of CEs are proportional to the volume and composition of the DNAPL source. The persistence of contamination, which can vary from a few years to centuries, is proportional to DNAPL volume, but is unaffected by DNAPL composition. These results show that monitoring CE concentrations in raw water produced by impacted public supply wells over time can provide useful information concerning the nature of contaminant sources and the likely future persistence of contamination.
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
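The decomposition described above, a gradient step on the data-fidelity term followed by a proximal update that never differentiates the regularizer, can be illustrated on a toy sparse least-squares problem. This is a minimal sketch of a regularized dual-averaging iteration, not the paper's USCT reconstruction; the problem size, step parameter, and penalty weight are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse least-squares problem standing in for the USCT data-fidelity term.
A = rng.standard_normal((200, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(200)

lam, gamma = 0.2, 5.0   # l1 weight and dual-averaging step parameter (assumed)
x = np.zeros(50)
gbar = np.zeros(50)
for t in range(1, 3001):
    g = A.T @ (A @ x - b) / len(b)   # gradient of the data-fidelity term only
    gbar += (g - gbar) / t           # running average of past gradients
    # Proximal update: the l1 regularizer enters through soft-thresholding
    # and is never explicitly differentiated.
    x = -(np.sqrt(t) / gamma) * np.sign(gbar) * np.maximum(np.abs(gbar) - lam, 0.0)

print(np.flatnonzero(np.abs(x) > 0.1))  # indices of the recovered sparse support
```

The averaged-gradient structure is what allows the stochastic (source-encoded) data term and the deterministic regularizer to be handled separately, which is the point the abstract makes against plain SGD.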
NASA Astrophysics Data System (ADS)
Lin, Hsin-mu; Wang, Pao K.; Schlesinger, Robert E.
2005-11-01
This article presents a detailed comparison of cloud microphysical evolution among six warm-season thunderstorm simulations using a time-dependent three-dimensional model WISCDYMM. The six thunderstorms chosen for this study consist of three apiece from two contrasting climate zones, the US High Plains (one supercell and two multicells) and the humid subtropics (two in Florida, US and one in Taipei, Taiwan, all multicells). The primary goal of this study is to investigate the differences among thunderstorms in different climate regimes in terms of their microphysical structures and how differently these structures evolve in time. A subtropical case is used as an example to illustrate the general contents of a simulated storm, and two examples of the simulated storms, one humid subtropical and one northern High Plains case, are used to describe in detail the microphysical histories. The simulation results are compared with the available observational data, and the agreement between the two is shown to be at least fairly close overall. The analysis, synthesis and implications of the simulation results are then presented. The microphysical histories of the six simulated storms in terms of the domain-integrated masses of all five hydrometeor classes (cloud water, cloud ice, rain, snow, graupel/hail), along with the individual sources (and sinks) of the three precipitating hydrometeor classes (rain, snow, graupel/hail) are analyzed in detail. These analyses encompass both the absolute magnitudes and their percentage contributions to the totals, for the condensate mass and their precipitation production (and depletion) rates, respectively. 
Comparisons between the hydrometeor mass partitionings for the High Plains versus subtropical thunderstorms show that, in a time-averaged sense, ice hydrometeors (cloud ice, snow, graupel/hail) account for ~70-80% of the total hydrometeor mass for the High Plains storms but only ~50% for the subtropical storms, after the systems have reached quasi-steady mature states. This demonstrates that ice processes are highly important even in thunderstorms occurring in warm climatic regimes. The dominant rain sources are two of the graupel/hail sinks, shedding and melting, in both High Plains and subtropical storms, while the main rain sinks are accretion by hail and evaporation. The dominant graupel/hail sources are accretion of rain, snow and cloud water, while its main sinks are shedding and melting. The dominant snow sources are the Bergeron-Findeisen process and accretion of cloud water, while the main sinks are accretion by graupel/hail and sublimation. However, the rankings of the leading production and depletion mechanisms differ somewhat in different storm cases, especially for graupel/hail. The model results indicate that the same hydrometeor types in the different climates have their favored microphysical sources and sinks. These findings not only prove that thunderstorm structure depends on local dynamic and thermodynamic atmospheric conditions that are generally climate-dependent, but also provide information about the partitioning of hydrometeors in the storms. Such information is potentially useful for convective parameterization in large-scale models.
Comparison of Phase-Based 3D Near-Field Source Localization Techniques for UHF RFID.
Parr, Andreas; Miesen, Robert; Vossiek, Martin
2016-06-25
In this paper, we present multiple techniques for phase-based narrowband backscatter tag localization in three-dimensional space with planar antenna arrays or synthetic apertures. Beamformer and MUSIC localization algorithms, known from near-field source localization and direction-of-arrival estimation, are applied to the 3D backscatter scenario, and their performance in terms of localization accuracy is evaluated. We discuss the impact of different transceiver modes known from the literature, which evaluate different send and receive antenna path combinations for a single localization, as in multiple input multiple output (MIMO) systems. Furthermore, we propose a new single-dimensional MIMO (S-MIMO) transceiver mode, which is especially suited for use with mobile robot systems. Monte Carlo simulations based on a realistic multipath error model ensure spatial correlation of the simulated signals and serve to critically appraise the accuracies of the different localization approaches. A synthetic uniform rectangular array created by a robotic arm is used to evaluate selected localization techniques. We use an Ultra High Frequency (UHF) Radio Frequency Identification (RFID) setup to compare measurements with theory and simulation. The results show how a mean localization accuracy of less than 30 cm can be reached in an indoor environment. Further simulations demonstrate how the distance between aperture and tag affects the localization accuracy, and how the size and grid spacing of the rectangular array need to be adapted to improve the localization accuracy by orders of magnitude, down to the centimeter range, and to maximize array efficiency in terms of localization accuracy per number of elements.
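The phase-based near-field principle can be illustrated with a minimal beamformer: correlate the round-trip backscatter phases measured across a planar aperture with a steering model over a 3D search grid and take the maximum. The sketch below uses noiseless synthetic phases; the frequency, aperture geometry, and grid are assumptions, and MUSIC and the transceiver modes from the paper are not shown:

```python
import numpy as np

c = 3e8
f = 868e6                     # UHF RFID carrier frequency (ETSI band assumed)
lam = c / f

# Planar 5x5 synthetic aperture in the z=0 plane, 10 cm element spacing.
gx, gy = np.meshgrid(np.arange(5) * 0.1, np.arange(5) * 0.1)
ants = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(25)])

tag = np.array([0.24, 0.16, 0.8])                 # true tag position (m)
d = np.linalg.norm(ants - tag, axis=1)
phases = np.exp(-1j * 2 * (2 * np.pi / lam) * d)  # round-trip backscatter phase

# Near-field beamformer: correlate measurements with the model phase over a
# 3D grid of candidate positions and keep the best match.
best, best_p = -1.0, None
for x in np.linspace(0, 0.5, 26):
    for y in np.linspace(0, 0.5, 26):
        for z in np.linspace(0.5, 1.2, 36):
            dm = np.linalg.norm(ants - [x, y, z], axis=1)
            steer = np.exp(-1j * 2 * (2 * np.pi / lam) * dm)
            p = np.abs(np.vdot(steer, phases))
            if p > best:
                best, best_p = p, np.array([x, y, z])
print(best_p)
```

Note the factor of 2 in the phase model: backscatter tags see the propagation path twice, which halves the unambiguous range relative to one-way localization.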
Computed myography: three-dimensional reconstruction of motor functions from surface EMG data
NASA Astrophysics Data System (ADS)
van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.
2008-12-01
We describe a methodology called computed myography to qualitatively and quantitatively determine the activation level of individual muscles by voltage measurements from an array of voltage sensors on the skin surface. A finite element model for electrostatics simulation is constructed from morphometric data. For the inverse problem, we utilize a generalized Tikhonov regularization. This imposes smoothness on the reconstructed sources inside the muscles and suppresses sources outside the muscles using a penalty term. Results from experiments with simulated and human data are presented for activation reconstructions of three muscles in the upper arm (biceps brachii, brachialis and triceps). This approach potentially offers a new clinical tool to sensitively assess muscle function in patients suffering from neurological disorders (e.g., spinal cord injury), and could more accurately guide advances in the evaluation of specific rehabilitation training regimens.
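The generalized Tikhonov step can be illustrated with a toy linear forward model in place of the finite element lead-field. The operator L (a first-difference smoother), the noise level, and the regularization weight below are assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward model standing in for the FEM lead-field matrix:
# surface potentials = A @ (source activations) + noise.
A = rng.standard_normal((40, 60))
s_true = np.zeros(60)
s_true[20:30] = 1.0                              # one active region
v = A @ s_true + 0.05 * rng.standard_normal(40)

# Generalized Tikhonov: minimize ||A s - v||^2 + alpha * ||L s||^2,
# with L a first-difference operator enforcing smoothness of the sources.
L = np.eye(60) - np.eye(60, k=1)
L = L[:-1]                                       # 59 x 60 difference operator
alpha = 1.0
s = np.linalg.solve(A.T @ A + alpha * L.T @ L, A.T @ v)

print(np.round(s[18:32], 2))  # estimate around the active region
```

Because the normal-equations matrix combines the data term with the penalty, the solve is well-posed even though the 40-by-60 forward problem alone is underdetermined.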
Innovative Tools for Water Quality/Quantity Management: New York City's Operations Support Tool
NASA Astrophysics Data System (ADS)
Wang, L.; Schaake, J. C.; Day, G. N.; Porter, J.; Sheer, D. P.; Pyke, G.
2011-12-01
The New York City Department of Environmental Protection (DEP) manages New York City's water supply, which is comprised of over 20 reservoirs and supplies more than 1 billion gallons of water per day to over 9 million customers. Recently, DEP has initiated design of an Operations Support Tool (OST), a state-of-the-art decision support system to provide computational and predictive support for water supply operations and planning. This presentation describes the technical structure of OST, including the underlying water supply and water quality models, data sources and database management, reservoir inflow forecasts, and the functionalities required to meet the needs of a diverse group of end users. OST is a major upgrade of DEP's current water supply - water quality model, developed to evaluate alternatives for controlling turbidity in NYC's Catskill reservoirs. While the current model relies on historical hydrologic and meteorological data, OST can be driven by forecasted future conditions. It will receive a variety of near-real-time data from a number of sources. OST will support two major types of simulations: long-term, for evaluating policy or infrastructure changes over an extended period of time; and short-term "position analysis" (PA) simulations, consisting of multiple short simulations, all starting from the same initial conditions. Typically, the starting conditions for a PA run will represent those for the current day and traces of forecasted hydrology will drive the model for the duration of the simulation period. The result of these simulations will be a distribution of future system states based on system operating rules and the range of input ensemble streamflow predictions. DEP managers will analyze the output distributions and make operation decisions using risk-based metrics such as probability of refill. Currently, in the developmental stages of OST, forecasts are based on antecedent hydrologic conditions and are statistical in nature. 
The statistical algorithm is relatively simple and versatile, but it lacks the short-term skill critical for water quality and spill management. To improve short-term skill, OST will ultimately operate with meteorologically driven hydrologic forecasts provided by the National Weather Service (NWS). OST functionalities will support a wide range of DEP uses, including short-term operational projections, outage planning and emergency management, operating rule development, and water supply planning. A core use of OST will be to inform reservoir management strategies to control and mitigate turbidity events while ensuring water supply reliability. OST will also allow DEP to manage its complex reservoir system to meet multiple objectives, including ecological flows, tailwater fisheries and recreational releases, and peak flow mitigation for downstream communities.
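A position-analysis run as described, many short traces started from the same initial conditions and summarized by a risk metric such as probability of refill, can be sketched in a few lines. Every number below (capacity, draft, inflow distribution) is hypothetical and chosen only to show the structure:

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal position-analysis sketch: start every trace from today's storage
# and drive it with an ensemble of synthetic inflow forecast traces.
capacity, storage0 = 100.0, 60.0       # arbitrary volume units
demand = 1.0                           # constant daily draft
n_traces, horizon = 500, 120           # ensemble size, forecast days

refills = 0
for _ in range(n_traces):
    storage = storage0
    inflow = rng.gamma(shape=2.0, scale=0.6, size=horizon)  # one forecast trace
    for q in inflow:
        # Mass balance with spill at capacity and a floor at empty.
        storage = min(capacity, max(0.0, storage + q - demand))
    if storage >= capacity - 1e-9:
        refills += 1

print(f"probability of refill: {refills / n_traces:.2f}")
```

The output distribution over traces is exactly the kind of risk-based metric the abstract describes managers using for operating decisions.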
NASA Technical Reports Server (NTRS)
Baird, J. K.
1986-01-01
The Ostwald-ripening theory is deduced and discussed starting from fundamental principles such as the Ising model concept, the Mayer cluster expansion, Langer condensation point theory, the Ginzburg-Landau free energy, the Stillinger cutoff-pair potential, and the LSW and MLSW theories. Mathematical intricacies are reduced to a comprehensible level. A comparison of selected works, from 1949 to 1984, on the solution of the diffusion equation with and without source/sink terms is presented. Kahlweit's 1980 work and Marqusee and Ross's 1984 work receive particular emphasis. Odijk and Lekkerkerker's 1985 work on rodlike macromolecules is introduced in order to stimulate interested investigators.
Investigation of mode partition noise in Fabry-Perot laser diode
NASA Astrophysics Data System (ADS)
Guo, Qingyi; Deng, Lanxin; Mu, Jianwei; Li, Xun; Huang, Wei-Ping
2014-09-01
The passive optical network (PON) is considered the most appealing access network architecture in terms of cost-effectiveness, bandwidth management flexibility, scalability and durability. To further reduce the cost per subscriber, a Fabry-Perot (FP) laser diode is preferred as the transmitter at the optical network units (ONUs) because of its lower cost compared to a distributed feedback (DFB) laser diode. However, the mode partition noise (MPN) associated with the multi-longitudinal-mode FP laser diode becomes the limiting factor in the network. This paper studies the MPN characteristics of the FP laser diode using time-domain simulation of the noise-driven multi-mode laser rate equations. The probability density functions are calculated for each longitudinal mode. The paper focuses on the k-factor, a simple yet important measure of the noise power that is usually taken as a fitted or assumed value in penalty calculations. The sources of the k-factor are studied with simulation, including the intrinsic source of laser Langevin noise and the extrinsic source of the bit pattern. Photon waveforms are shown under four simulation conditions: regular or random bit pattern, with or without Langevin noise. The k-factor contributions of these sources are studied over a range of bias and modulation currents. The simulation results, illustrated in figures, show that the contribution of Langevin noise to the k-factor is larger than that of the random bit pattern, and is more dominant at lower bias current or higher modulation current.
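One common way to quantify the k-factor (used here purely as an illustration; the paper extracts it from noise-driven rate-equation simulations) is Ogawa's mode-partition relation sigma_i^2 = k^2 * abar_i * (1 - abar_i) between each mode's power variance and its mean power fraction. The sketch below generates synthetic normalized mode-power traces with a known k and recovers it from the per-mode variances; the mode count and power split are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic normalized mode-power traces for a 5-mode FP laser (illustrative):
# per-mode fluctuations follow sigma_i^2 = k^2 * abar_i * (1 - abar_i).
abar = np.array([0.35, 0.25, 0.20, 0.12, 0.08])   # time-averaged mode fractions
k_true = 0.4
n = 20000                                         # number of time samples
a = abar + k_true * np.sqrt(abar * (1 - abar)) * rng.standard_normal((n, 5))

# Estimate k from each mode's measured variance, then average across modes.
k_est = np.sqrt(a.var(axis=0) / (abar * (1 - abar)))
print(k_est.mean())  # close to the k of 0.4 used to generate the traces
```

In a real measurement the traces would come from the simulated or detected photon waveforms rather than being synthesized from the model being tested.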
Effect of inlet conditions on the turbulent statistics in a buoyant jet
NASA Astrophysics Data System (ADS)
Kumar, Rajesh; Dewan, Anupam
2015-11-01
Buoyant jets have been the subject of research due to their technological and environmental importance in many physical processes, such as the spread of smoke and toxic gases from fires and the release of gases from volcanic eruptions and industrial stacks. The flow near the source is initially laminar and quickly transitions to turbulence. We present a large eddy simulation of a buoyant jet. In the present study, the influence of the inlet conditions at the source on the turbulent statistics far from the source is carefully investigated. It is observed that the influence of the inlet conditions on the second-order buoyancy terms extends further in the axial direction from the source than their influence on the time-averaged flow and the second-order velocity statistics. We have also studied the evolution of vortical structures in the buoyant jet. It is shown that the generation of helical vortex rings around a laminar core in the vicinity of the source could explain the larger influence of the inlet conditions on the second-order buoyancy terms compared with the second-order velocity statistics.
The importance of quadrupole sources in prediction of transonic tip speed propeller noise
NASA Technical Reports Server (NTRS)
Hanson, D. B.; Fink, M. R.
1978-01-01
A theoretical analysis is presented for the harmonic noise of high speed, open rotors. Far field acoustic radiation equations based on the Ffowcs Williams/Hawkings theory are derived for a static rotor with thin blades and zero lift. Near the plane of rotation, the dominant sources are the volume displacement and the ρu² quadrupole, where u is the disturbance velocity component in the direction of blade motion. These sources are compared in both the time domain and the frequency domain using two-dimensional airfoil theories valid in the subsonic, transonic, and supersonic speed ranges. For nonlifting parabolic arc blades, the two sources are equally important at speeds between the section critical Mach number and a Mach number of one. However, for moderately subsonic or fully supersonic flow over thin blade sections, the quadrupole term is negligible. It is concluded for thin blades that significant quadrupole noise radiation is strictly a transonic phenomenon and that it can be suppressed with blade sweep. Noise calculations are presented for two rotors, one simulating a helicopter main rotor and the other a model propeller. For the latter, agreement with test data was substantially improved by including the quadrupole source term.
NASA Technical Reports Server (NTRS)
Cowen, Benjamin
2011-01-01
Simulations are essential for engineering design. These virtual environments provide characteristic data that help scientists and engineers understand the details and complications of the desired mission. The standard simulation development package Trick is used to develop source code modeling a component (a federate, in HLA terms). The runtime executive is integrated into an HLA-based distributed simulation. TrickHLA is used to extend a Trick simulation for a federation execution, to develop source code for communication between federates, and to foster data input and output. The project incorporates international cooperation along with team collaboration. Interactions among federates occur throughout the simulation, relying on simulation interoperability, and participants communicated throughout the semester to work out how to implement this data exchange. The NASA intern team is designing a Lunar Rover federate and a Lunar Shuttle federate. The Lunar Rover federate supports transportation across the lunar surface and is essential for fostering interactions with the other federates on the lunar surface (Lunar Shuttle, Lunar Base Supply Depot and Mobile ISRU Plant) as well as for transporting materials to the desired locations. The Lunar Shuttle federate transports materials to and from lunar orbit; the materials it takes to the supply depot include fuel and cargo necessary to continue moon-base operations. This project analyzes modeling and simulation technologies as well as simulation interoperability. Each team from the participating universities will engineer its own federate(s) to participate in the SISO Spring 2011 SIW Smackdown in Boston, Massachusetts. This paper focuses on the Lunar Rover federate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lamarque, J. F.; Bond, Tami C.; Eyring, Veronika
2010-08-11
We present and discuss a new dataset of gridded emissions covering the historical period (1850-2000) in decadal increments at a horizontal resolution of 0.5° in latitude and longitude. The primary purpose of this inventory is to provide consistent gridded emissions of reactive gases and aerosols for use in chemistry model simulations needed by climate models for the Climate Model Intercomparison Program #5 (CMIP5) in support of the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment report. Our best estimate for the year 2000 inventory represents a combination of existing regional and global inventories to capture the best information available at this point; 40 regions and 12 sectors were used to combine the various sources. The historical reconstruction of each emitted compound, for each region and sector, was then forced to agree with our 2000 estimate, ensuring continuity between past and 2000 emissions. Application of these emissions into two chemistry-climate models is used to test their ability to capture long-term changes in atmospheric ozone, carbon monoxide and aerosols distributions. The simulated long-term change in the Northern mid-latitudes surface and mid-troposphere ozone is not quite as rapid as observed. However, stations outside this latitude band show much better agreement in both present-day and long-term trend. The model simulations consistently underestimate the carbon monoxide trend, while capturing the long-term trend at the Mace Head station. The simulated sulfate and black carbon deposition over Greenland is in very good agreement with the ice-core observations spanning the simulation period. Finally, aerosol optical depth and additional aerosol diagnostics are shown to be in good agreement with previously published estimates.
R. S. Ahl; S. W. Woods
2006-01-01
Changes in the extent, composition, and configuration of forest cover over time due to succession or disturbance processes can result in measurable changes in streamflow and water yield. Removal of forest cover generally increases streamflow due to reduced canopy interception and evapotranspiration. In watersheds where snow is the dominant source of water, yield...
Theoretical simulation of the multipole seismoelectric logging while drilling
NASA Astrophysics Data System (ADS)
Guan, Wei; Hu, Hengshan; Zheng, Xiaobo
2013-11-01
Acoustic logging-while-drilling (LWD) technology has been commercially used in the petroleum industry. However, it remains a rather difficult task to invert formation compressional and shear velocities from acoustic LWD signals because of the unwanted strong collar wave, which covers or interferes with signals from the formation. In this paper, seismoelectric LWD is investigated to address this problem. The seismoelectric field is calculated by solving a modified Poisson's equation, whose source term is the electric disturbance induced electrokinetically by the travelling seismic wave. The seismic wavefield itself is obtained by solving Biot's equations for poroelastic waves. From the simulated waveforms and the semblance plots for monopole, dipole and quadrupole sources, it is found that the electric field accompanies the collar wave as well as the other wave groups of the acoustic pressure, despite the fact that seismoelectric conversion occurs only in porous formations. The collar wave in the electric field, however, is significantly weakened compared with that in the acoustic pressure, in terms of its amplitude relative to the other wave groups in the full waveforms. Thus fewer and shallower grooves are required to damp the collar wave if the seismoelectric LWD signals are recorded for extracting formation compressional and shear velocities.
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
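The Newton-Raphson part of the hybrid scheme can be illustrated on the smallest possible flow network: two pipes in series with one unknown junction pressure. The Q = C·sqrt(ΔP) resistance law and the coefficients below are assumptions for illustration, not the paper's formulation:

```python
import math

# One-junction series network (illustrative): each pipe obeys Q = C*sqrt(dP),
# and mass conservation at the junction gives one nonlinear equation in P1.
P0, P2 = 100.0, 0.0    # boundary pressures
C1, C2 = 1.0, 2.0      # pipe flow coefficients

def f(P1):
    """Net mass flow into the junction; zero at the solution."""
    return C1 * math.sqrt(P0 - P1) - C2 * math.sqrt(P1 - P2)

def fprime(P1):
    return -C1 / (2 * math.sqrt(P0 - P1)) - C2 / (2 * math.sqrt(P1 - P2))

P1 = 50.0                      # initial guess
for _ in range(20):            # Newton-Raphson iteration
    P1 -= f(P1) / fprime(P1)
print(P1)  # -> 20.0, matching the analytic result sqrt(100 - P) = 2*sqrt(P)
```

A real network yields a coupled system of such equations (one per control volume, plus energy and entropy equations), solved with a Jacobian matrix in place of the scalar derivative; the successive-substitution part of the hybrid handles the weakly coupled variables outside the Newton loop.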
Neutron crosstalk between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.
2015-05-01
We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.
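The variance-to-mean (Feynman-Y) statistic underlying the point model can be sketched with synthetic gate counts. A Poisson source gives Y ≈ 0, while correlated counts, used here as a crude stand-in for fission multiplets or a neutron scattering into two detectors, inflate it; the gate rates below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n_gates = 200000  # number of counting gates

# Uncorrelated (Poisson) source: Feynman-Y should be ~0.
poisson = rng.poisson(2.0, n_gates)

# Correlated source: every event deposits counts in pairs, mimicking the
# excess variance produced by fission multiplets or detector crosstalk.
correlated = 2 * rng.poisson(1.0, n_gates)

def feynman_y(counts):
    """Excess variance-to-mean ratio: zero for a Poisson process."""
    return counts.var() / counts.mean() - 1.0

print(feynman_y(poisson))     # ~0
print(feynman_y(correlated))  # ~1: pair correlations double the variance
```

Because crosstalk produces exactly this kind of spurious pair correlation, an uncorrected point-model analysis overestimates multiplication, which is why the paper's crosstalk correction improves the mass reconstruction.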
NASA Technical Reports Server (NTRS)
Greenwood, Eric, II; Schmitz, Fredric H.
2010-01-01
A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.
Low Reynolds number k-epsilon modelling with the aid of direct simulation data
NASA Technical Reports Server (NTRS)
Rodi, W.; Mansour, N. N.
1993-01-01
The constant C_mu and the near-wall damping function f_mu in the eddy-viscosity relation of the k-epsilon model are evaluated from direct numerical simulation (DNS) data for developed channel and boundary-layer flow at two Reynolds numbers each. Various existing f_mu model functions are compared with the DNS data, and a new function is fitted to the high-Reynolds-number channel flow data. The epsilon-budget is computed for fully developed channel flow. The relative magnitude of the terms in the epsilon-equation is analyzed with the aid of scaling arguments, and the parameter governing this magnitude is established. Models for the sum of all source and sink terms in the epsilon-equation are tested against the DNS data, and an improved model is proposed.
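The eddy-viscosity relation being calibrated, nu_t = C_mu * f_mu * k^2 / epsilon, can be sketched with one common low-Reynolds-number choice of damping function, the Launder-Sharma form (the paper fits its own f_mu to the DNS data, so this is illustrative only):

```python
import numpy as np

C_mu = 0.09  # standard k-epsilon model constant

def f_mu_launder_sharma(Re_t):
    """Launder-Sharma near-wall damping function of the turbulence
    Reynolds number Re_t = k^2 / (nu * epsilon)."""
    return np.exp(-3.4 / (1.0 + Re_t / 50.0) ** 2)

def nu_t(k, eps, nu):
    """Damped eddy viscosity nu_t = C_mu * f_mu * k^2 / eps."""
    Re_t = k ** 2 / (nu * eps)
    return C_mu * f_mu_launder_sharma(Re_t) * k ** 2 / eps

# f_mu rises monotonically toward 1 away from the wall (large Re_t),
# strongly damping nu_t in the viscous near-wall region.
for Re_t in [1.0, 10.0, 100.0, 1000.0]:
    print(Re_t, f_mu_launder_sharma(Re_t))
```

Comparing curves like this one against DNS profiles of k, epsilon, and the true turbulent shear stress is exactly the evaluation the abstract describes.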
NASA Astrophysics Data System (ADS)
Thrysøe, A. S.; Løiten, M.; Madsen, J.; Naulin, V.; Nielsen, A. H.; Rasmussen, J. Juul
2018-03-01
The conditions in the edge and scrape-off layer (SOL) of magnetically confined plasmas determine the overall performance of the device, and it is of great importance to study and understand the mechanisms that drive transport in those regions. If a significant amount of neutral molecules and atoms is present in the edge and SOL regions, they will influence the plasma parameters and thus the plasma confinement. In this paper, it is shown how neutrals, described by a fluid model, introduce source terms in a plasma drift-fluid model through inelastic collisions. The resulting source terms are included in a four-field drift-fluid model, and it is shown how an increasing neutral particle density in the edge and SOL regions influences the plasma particle transport across the last closed flux surface. It is found that an appropriate gas puffing rate allows the edge density in the simulation to be self-consistently maintained by ionization of neutrals in the confined region.
Vaccine financing in Nigeria: are we making progress towards self-financing/sustenance?
Faniyan, Olumide; Opara, Chidiabere; Oyinade, Akinyede; Botchway, Pamela; Soyemi, Kenneth
2017-01-01
Nigeria has an estimated population of 186 million, with 23% of eligible children aged 12-23 months fully immunized. Government spending on routine immunization per surviving infant has declined since 2006, meaning the immunization budget needs to improve. By 2020, Nigeria will be ineligible for additional Global Alliance for Vaccination and Immunization (Gavi) grants and will be facing an annual vaccine bill of around US$426.3m. There are several potential revenue sources that could be utilized to fill the potential funding gap; these are, however, subject to timely legislation and appropriation of funds by the legislative body. Innovative funding sources that should be considered include tiered levies on telecommunications, airlines, hotels, alcohol, tobacco, and sugared beverages, as well as lottery sales, crowd-sourcing, and optimized federal-state co-financing. To demonstrate the monthly income that would be derived from a single tax revenue source, we modelled the Communication Service Tax being introduced by the National Assembly using Monte Carlo simulation trials. We used the number of active telephone subscribers, the penetration ratio, monthly charges, and the percentage immunization levy as model scenario inputs, and dollars generated monthly as the output. The simulation generated a modest mean (SD) monthly amount of $3,649,289.38 ($1,789,651); 88% certainty range $1,282,719.90 to $7,450,906.26. The entire range for the simulation was $528,903.26 to $7,966,287.26, with a standard error of the mean of $17,896.52. Sensitivity analysis revealed that the percentage immunization levy contributed 97.9% of the variance in the model, while the number of active subscribers and the monthly charge contributed 1.5% and 0.6%, respectively. Our modest simulation analysis demonstrated the potential to raise revenue from one possible tax source; when combined, the revenue sources could potentially surpass Nigeria's long-term financing needs.
The return on investment (ROI) of vaccines should supersede all other considerations and prompt urgent action to close the impending financing gap.
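The structure of the Monte Carlo model, revenue as a product of subscribers, penetration ratio, monthly charge, and levy fraction, can be sketched as follows. Every distribution and parameter value here is hypothetical and is not taken from the paper's inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100000  # Monte Carlo trials

# Illustrative input distributions (all parameter values are assumptions
# chosen only to mimic the structure of the abstract's model):
subscribers = rng.normal(150e6, 10e6, n)     # active telephone subscribers
penetration = rng.uniform(0.7, 0.9, n)       # penetration ratio
monthly_charge = rng.uniform(2.0, 4.0, n)    # average monthly charge, US$
levy = rng.uniform(0.001, 0.02, n)           # immunization levy fraction

# Monthly revenue per trial; summary statistics describe the output spread.
revenue = subscribers * penetration * monthly_charge * levy
print(f"mean ${revenue.mean():,.0f}, 5th-95th pct "
      f"${np.percentile(revenue, 5):,.0f} - ${np.percentile(revenue, 95):,.0f}")
```

A sensitivity analysis like the paper's can then be read off by correlating each input column with the revenue output.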
Simulations of acoustic waves in channels and phonation in glottal ducts
NASA Astrophysics Data System (ADS)
Yang, Jubiao; Krane, Michael; Zhang, Lucy
2014-11-01
Numerical simulations of acoustic wave propagation were performed by solving the compressible Navier-Stokes equations using the finite element method. To avoid numerical contamination of the acoustic field by non-physical reflections at the computational boundaries, a Perfectly Matched Layer (PML) scheme was implemented to attenuate the acoustic waves and their reflections near these boundaries. The acoustic simulation was further combined with a simulation of the interaction of vocal fold vibration and glottal flow, using our fully coupled Immersed Finite Element Method (IFEM) approach, to study phonation in the glottal channel. In order to decouple the aeroelastic and aeroacoustic aspects of phonation, the airway duct used has a uniform cross section with the PML properly applied. The dynamics of phonation were then studied by computing the terms of the equations of motion for a control volume comprising the fluid in the vicinity of the vocal folds. It is shown that the principal dynamics consist of the near cancellation of the pressure force driving the flow through the glottis and the aerodynamic drag on the vocal folds. Aeroacoustic source strengths are also presented, estimated both from integral quantities computed in the source region and from the radiated acoustic field.
Understanding Accretion Disks through Three Dimensional Radiation MHD Simulations
NASA Astrophysics Data System (ADS)
Jiang, Yan-Fei
I study the structures and thermal properties of black hole accretion disks in the radiation pressure dominated regime. Angular momentum transfer in the disk is provided by the turbulence generated by the magneto-rotational instability (MRI), which is calculated self-consistently with a recently developed 3D radiation magneto-hydrodynamics (MHD) code based on Athena. This code, developed by my collaborators and myself, couples the radiation momentum and energy source terms with the ideal MHD equations by modifying the standard Godunov method to handle the stiff radiation source terms. We solve the energy and momentum moment equations of radiative transfer with a variable Eddington tensor (VET), which is calculated with a time-independent short-characteristics module. This code is well tested and accurate in both the optically thin and optically thick regimes. It is also accurate for both radiation pressure and gas pressure dominated flows. With this code, I find that when photon viscosity becomes significant, the ratio between the Maxwell stress and the Reynolds stress from the MRI turbulence can increase significantly with radiation pressure. The thermal instability of the radiation pressure dominated disk is then studied with vertically stratified shearing box simulations. Unlike previous results claiming that a radiation pressure dominated disk with MRI turbulence can reach a steady state without showing any unstable behavior, I find that radiation pressure dominated disks always either collapse or expand until we have to stop the simulations. During the thermal runaway, the heating and cooling rates from the simulations are consistent with the general criterion of thermal instability. However, details of the thermal runaway differ from the predictions of the standard alpha disk model, as many assumptions of that model are not satisfied in the simulations. We also identify the key reasons why previous simulations did not find the instability.
The thermal instability has many important implications for understanding the observations of both X-ray binaries and Active Galactic Nuclei (AGNs). However, direct comparisons between observations and the simulations require global radiation MHD simulations, which will be the main focus of my future work.
Judicious use of simulation technology in continuing medical education.
Curtis, Michael T; DiazGranados, Deborah; Feldman, Moshe
2012-01-01
Use of simulation-based training is fast becoming a vital source of experiential learning in medical education. Although simulation is a common tool for undergraduate and graduate medical education curricula, the utilization of simulation in continuing medical education (CME) is still an area of growth. As more CME programs turn to simulation to address their training needs, it is important to highlight concepts of simulation technology that can help to optimize learning outcomes. This article discusses the role of fidelity in medical simulation. It provides support from a cross section of simulation training domains for determining the appropriate levels of fidelity, and it offers guidelines for creating an optimal balance of skill practice and realism for efficient training outcomes. After defining fidelity, 3 dimensions of fidelity, drawn from the human factors literature, are discussed in terms of their relevance to medical simulation. From this, research-based guidelines are provided to inform CME providers regarding the use of simulation in CME training. Copyright © 2012 The Alliance for Continuing Education in the Health Professions, the Society for Academic Continuing Medical Education, and the Council on CME, Association for Hospital Medical Education.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas
2017-10-01
In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by the Hungarian authorities that the 131I had been released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength eventually became known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and HYSPLIT, driven with meteorological analysis data from the Global Forecast System (GFS) and the European Centre for Medium-Range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and the computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction.
Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.
Secure Large-Scale Airport Simulations Using Distributed Computational Resources
NASA Technical Reports Server (NTRS)
McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)
2001-01-01
To fully conduct research that will support the far-term concepts, technologies, and methods required to improve the safety of air transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services, such as intelligent data-integration middleware, will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers and high-speed network connections to aircraft and to Federal Aviation Administration (FAA), airline, and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.
Systematic study of target localization for bioluminescence tomography guided radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Jingjing; Zhang, Bin; Reyes, Juvenal
Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, a tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depths of 3–12 mm. The same configuration was also applied for the double-source simulations, with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at a depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For the simulation study, approximately 1 mm accuracy can be achieved in localizing the center of mass (CoM) for single-source cases and the grouped CoM for double-source cases. For the case of a 1.5 mm radius source, a common tumor size used in preclinical studies, the simulations show that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm.
Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish the two sources, but BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging: 1 and 1.7 mm accuracy was attained for the single-source case at 6 and 9 mm depth, respectively. For the two-source in vivo study, both sources could be distinguished at 3 and 5 mm separations, and approximately 1 mm localization accuracy was also achieved. Conclusions: This study demonstrated that the multispectral BLT/CBCT system could potentially be applied to localize and resolve multiple sources over a wide range of source sizes, depths, and separations. The average accuracy of localizing the CoM for single sources and the grouped CoM for double sources is approximately 1 mm, except for deep-seated targets. The information provided in this study can be instructive in devising treatment margins for BLT-guided irradiation. These results also suggest that the 3D BLT system could guide radiation in situations with multiple targets, such as metastatic tumor models.
Physical/chemical closed-loop water-recycling for long-duration missions
NASA Technical Reports Server (NTRS)
Herrmann, Cal C.; Wydeven, Ted
1990-01-01
Water needs, water sources, and means for recycling water are examined in terms appropriate to the water quality requirements of a small crew and spacecraft intended for long-duration exploration missions. Inorganic, organic, and biological hazards are estimated for waste water sources. Sensitivities to these hazards for human uses are estimated. The water recycling processes considered are humidity condensation, carbon dioxide reduction, waste oxidation, distillation, reverse osmosis, pervaporation, electrodialysis, ion exchange, carbon sorption, and electrochemical oxidation. Limitations and applications of these processes are evaluated in terms of water quality objectives. Computerized simulation of some of these chemical processes is examined. Recommendations are made for the development of new water recycling technology and the improvement of existing technology for near-term application to life support systems for humans in space. The technological developments are equally applicable to water needs on Earth, in regions where extensive water recycling is needed or where advanced water treatment is essential to meet EPA health standards.
NASA Astrophysics Data System (ADS)
Lu, Xinhua; Mao, Bing; Dong, Bingjiang
2018-01-01
Xia et al. (2017) proposed a novel, fully implicit method for discretizing the bed friction terms when solving the shallow-water equations. The friction terms contain h^(-7/3) (h denotes the water depth), which may become extremely large and introduce machine error as h approaches zero. To address this problem, Xia et al. (2017) introduced auxiliary variables (their equations (37) and (38)) so that h^(-4/3) rather than h^(-7/3) is calculated, and solved a transformed equation (their equation (39)). The introduced auxiliary variables require extra storage. We performed an analysis of the magnitude of the friction terms and found that, on the whole, these terms do not exceed machine floating-point precision; we therefore propose a simple-to-implement technique that splits h^(-7/3) across different parts of the friction terms to avoid introducing machine error. This technique needs neither extra storage nor the solution of a transformed equation, and is thus more efficient for simulations. We also show that the surface reconstruction method proposed by Xia et al. (2017) may produce predictions with spurious wiggles because the reconstructed Riemann states may misrepresent the gravitational effect on the water.
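The splitting idea can be sketched for a Manning-type friction term S_f = -g n^2 q|q| / h^(7/3) (an assumed form for illustration; the paper's exact friction formulation may differ): distributing the powers of h across the factors keeps each factor bounded for drying cells, while the product remains algebraically identical.

```python
def friction_naive(g, n, q, h):
    # Direct evaluation: h**(7/3) in the denominator becomes extreme as h -> 0.
    return -g * n**2 * q * abs(q) / h**(7.0 / 3.0)

def friction_split(g, n, q, h):
    # Split h**(-7/3) across the factors: q/h is the velocity, which stays
    # bounded for physically drying cells (q -> 0 together with h), leaving
    # only a mild h**(-1/3) factor.
    u = q / h
    return -g * n**2 * u * abs(u) / h**(1.0 / 3.0)
```

Both forms evaluate the same quantity for moderate depths; the split form simply avoids ever forming the single extreme factor h**(-7/3).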
NASA Astrophysics Data System (ADS)
Zou, Zhen-Zhen; Yu, Xu-Tao; Zhang, Zai-Chen
2018-04-01
First, the entanglement source deployment problem is studied in a quantum multi-hop network, where it has a significant influence on quantum connectivity. Two optimization algorithms for a limited number of entanglement sources are introduced in this paper. A deployment algorithm based on node position (DNP) improves connectivity by guaranteeing that all overlapping areas of the distribution ranges of the entanglement sources contain nodes. In addition, a deployment algorithm based on an improved genetic algorithm (DIGA) is implemented by dividing the region into grids. From the simulation results, DNP and DIGA improve quantum connectivity by 213.73% and 248.83%, respectively, compared to random deployment, and the latter performs better in terms of connectivity. However, DNP is more flexible and adaptive to change, as it stops running once all nodes are covered.
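The quantity both deployment algorithms effectively optimize, namely how many nodes fall inside at least one source's distribution range, can be sketched as follows. The circular range model, coordinates, and function name are illustrative assumptions, not the paper's formulation.

```python
import math

def coverage_fraction(nodes, sources, radius):
    """Fraction of network nodes lying within the distribution range
    (modeled here as a disk of the given radius) of at least one
    entanglement source."""
    covered = 0
    for (nx, ny) in nodes:
        if any(math.hypot(nx - sx, ny - sy) <= radius for (sx, sy) in sources):
            covered += 1
    return covered / len(nodes)
```

A genetic-algorithm deployment like DIGA would use a score of this kind as its fitness function when evolving candidate source placements over the grid.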
LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure
NASA Astrophysics Data System (ADS)
Wang, Qing; Wu, Hao; Ihme, Matthias
2015-11-01
The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. The objective of this work is to develop a source-term closure for turbulent multi-stream combustion by extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed-PDF modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed-PDF methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The authors would like to acknowledge the support of funding from a Stanford Graduate Fellowship.
Bayesian analyses of time-interval data for environmental radiation monitoring.
Luo, Peng; Sharp, Julia L; DeVol, Timothy A
2013-01-01
Time-interval (the time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with a Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and the single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for Statistical Computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only briefly (less than the count time), time-interval information is more sensitive to detecting a change than count information, since the source data are averaged with the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
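For a Poisson pulse train, the intervals between consecutive pulses are exponentially distributed, so a Bayesian comparison of "background only" against "background plus source" can be updated pulse by pulse. The sketch below accumulates a log Bayes factor with flat prior odds; the rates and decision form are illustrative, not the paper's calibrated algorithm.

```python
import math
import random

def log_bayes_factor(intervals, rate_bkg, rate_src):
    """Log Bayes factor for 'source present' (total rate rate_bkg +
    rate_src) versus 'background only' (rate_bkg), given exponentially
    distributed time intervals between detector pulses."""
    r0, r1 = rate_bkg, rate_bkg + rate_src
    lbf = 0.0
    for t in intervals:
        # exponential log-likelihood ratio contributed by one interval
        lbf += (math.log(r1) - r1 * t) - (math.log(r0) - r0 * t)
    return lbf
```

A positive value favors the source hypothesis; because the statistic updates after every pulse rather than at the end of a fixed count time, a decision can be reached with fewer pulses at elevated rates, as the abstract describes.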
NASA Astrophysics Data System (ADS)
Vishnoi, Gargi; Hielscher, Andreas H.; Ramanujam, Nirmala; Chance, Britton
2000-04-01
In this work, experimental tissue phantoms and numerical models were developed to estimate photon migration through the fetal head in utero. The tissue phantoms incorporate a fetal head within an amniotic fluid sac surrounded by a maternal tissue layer. A continuous-wave, dual-wavelength (λ = 760 and 850 nm) spectrometer was employed to make near-infrared measurements on the tissue phantoms for various source-detector separations, fetal-head positions, and fetal-head optical properties. In addition, numerical simulations of photon propagation were performed with finite-difference algorithms that provide solutions to the equation of radiative transfer as well as the diffusion equation. The simulations were compared with measurements on the tissue phantoms to determine the best numerical model for describing photon migration through the fetal head in utero. Evaluation of the results indicates that a tissue phantom in which the contact between the fetal head and the uterine wall is uniform best simulates the fetal head in utero for near-term pregnancies. Furthermore, we found that maximum sensitivity to the head can be achieved if the source of the probe is positioned directly above the fetal head. By optimizing the source-detector separation, the signal originating from photons that have traveled through the fetal head can be drastically increased.
Regional Scale Simulations of Nitrate Leaching through Agricultural Soils of California
NASA Astrophysics Data System (ADS)
Diamantopoulos, E.; Walkinshaw, M.; O'Geen, A. T.; Harter, T.
2016-12-01
Nitrate is recognized as one of California's most widespread groundwater contaminants. As opposed to point sources, which are relatively easy to identify, non-point sources of nitrate are diffuse and linked to the widespread use of fertilizers in agricultural soils. California's agricultural regions have an incredible diversity of soils that encompass a huge range of properties. This complicates studies dealing with nitrate risk assessment, since important biological and physicochemical processes occur within the first meters of the vadose zone. The objective of this study is to evaluate all agricultural soils in California according to their potential for nitrate leaching, based on numerical simulations using the Richards equation. We conducted simulations for 6000 unique soil profiles (over 22000 soil horizons), taking into account the effects of climate, crop type, and irrigation and fertilization management scenarios. The final goal of this study is to evaluate simple management methods in terms of reduced nitrate leaching. We estimated drainage rates of water below the root zone and nitrate concentrations in the drainage water at the regional scale. We present maps for all agricultural soils in California which can be used for risk assessment studies. Finally, our results indicate that adoption of simple irrigation and fertilization methods may significantly reduce nitrate leaching in vulnerable regions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-band (1–2 GHz)) and a 46-pointing mosaic (D-array, C-band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources, even in the absence of simulated noise; (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction; and (c) the use of MT-MFS for image reconstruction eliminates clean bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, to enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
A method for obtaining a statistically stationary turbulent free shear flow
NASA Technical Reports Server (NTRS)
Timson, Stephen F.; Lele, S. K.; Moser, R. D.
1994-01-01
The long-term goal of the current research is the study of Large-Eddy Simulation (LES) as a tool for aeroacoustics. New algorithms and developments in computer hardware are making possible a new generation of tools for aeroacoustic predictions, which rely on the physics of the flow rather than empirical knowledge. LES, in conjunction with an acoustic analogy, holds the promise of predicting the statistics of noise radiated to the far field of a turbulent flow. LES's predictive ability will be tested through extensive comparison of acoustic predictions based on a Direct Numerical Simulation (DNS) and an LES of the same flow, as well as a priori testing of the DNS results. The method presented here is aimed at allowing simulation of a turbulent flow field that is both simple and amenable to acoustic predictions. A free shear flow that is homogeneous in both the streamwise and spanwise directions and statistically stationary will be simulated using equations based on the Navier-Stokes equations with a small number of added terms. Studying a free shear flow eliminates the need to consider flow-surface interactions as an acoustic source. The homogeneous directions and the flow's statistically stationary nature greatly simplify the application of an acoustic analogy.
Simulation of Surface Pressure Induced by Vortex/Body Interaction
NASA Astrophysics Data System (ADS)
He, M.; Islam, M.; Veitch, B.; Bose, N.; Colbourne, M. B.; Liu, P.
When a strong vortical wake impacts a structure, the pressure on the impacted surface undergoes large variations in amplitude. This pressure fluctuation is one of the main sources of severe structural vibration and hydrodynamic noise. Economical and effective methods for predicting the fluctuating pressure are required by engineers in many fields. This paper presents a wake impingement model (WIM) that has been incorporated into a panel method code, Propella, and its application to simulations of a podded propeller wake impinging on a strut. Simulated strut surface pressure distributions and variations are compared with experimental data in terms of time-averaged and phase-averaged components. The pressure comparisons show that the calculated results are in good agreement with the experimental data.
Comparison of simulation and experimental results for a gas puff nozzle on Ambiorix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnier, J-N.; Chevalier, J-M.; Dubroca, B.
One of the source terms of Z-pinch experiments is the gas puff density profile. In order to characterize the gas jet, an experiment based on interferometry was performed. The first study was a point measurement (a section density profile), which led us to develop a global and instantaneous interferometric imaging method. In order to optimise the nozzle, we simulated the experiment with a flow calculation code (ARES). In this paper, the experimental results are compared with simulations. The different gas properties (He, Ne, Ar) and the flow duration led us to take into account, on the one hand, the gas viscosity and, on the other, to modify the code for unsteady flow.
Reconciling divergent trends and millennial variations in Holocene temperatures.
Marsicek, Jeremiah; Shuman, Bryan N; Bartlein, Patrick J; Shafer, Sarah L; Brewer, Simon
2018-01-31
Cooling during most of the past two millennia has been widely recognized and has been inferred to be the dominant global temperature trend of the past 11,700 years (the Holocene epoch). However, long-term cooling has been difficult to reconcile with global forcing, and climate models consistently simulate long-term warming. The divergence between simulations and reconstructions emerges primarily for northern mid-latitudes, for which pronounced cooling has been inferred from marine and coastal records using multiple approaches. Here we show that temperatures reconstructed from sub-fossil pollen from 642 sites across North America and Europe closely match simulations, and that long-term warming, not cooling, defined the Holocene until around 2,000 years ago. The reconstructions indicate that evidence of long-term cooling was limited to North Atlantic records. Early Holocene temperatures on the continents were more than two degrees Celsius below those of the past two millennia, consistent with the simulated effects of remnant ice sheets in the climate model Community Climate System Model 3 (CCSM3). CCSM3 simulates increases in 'growing degree days'-a measure of the accumulated warmth above five degrees Celsius per year-of more than 300 kelvin days over the Holocene, consistent with inferences from the pollen data. It also simulates a decrease in mean summer temperatures of more than two degrees Celsius, which correlates with reconstructed marine trends and highlights the potential importance of the different subseasonal sensitivities of the records. Despite the differing trends, pollen- and marine-based reconstructions are correlated at millennial-to-centennial scales, probably in response to ice-sheet and meltwater dynamics, and to stochastic dynamics similar to the temperature variations produced by CCSM3. 
Although our results depend on a single source of palaeoclimatic data (pollen) and a single climate-model simulation, they reinforce the notion that climate models can adequately simulate climates for periods other than the present-day. They also demonstrate that amplified warming in recent decades increased temperatures above the mean of any century during the past 11,000 years.
An efficient soil water balance model based on hybrid numerical and statistical methods
NASA Astrophysics Data System (ADS)
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration, which is especially important in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement: the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using hybrid numerical and statistical methods (e.g., the linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new model are the saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strengths and weaknesses of the new model are evaluated using two published studies, three hypothetical examples and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D, and with observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended.
Computational efficiency of the new model makes it particularly suitable for large-scale simulation of soil water movement, because the new model can be used with coarse discretization in space and time.
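The idea of solving the advective, sink, and diffusive terms separately can be illustrated with a toy layered-bucket time step. The update rules below (drainage of water above field capacity capped by conductivity, ET removed from the top layer, explicit moisture smoothing) are simplified assumptions for illustration, not the paper's actual scheme, though the four parameters match those the model requires:

```python
def step_soil_water(theta, ks, theta_s, theta_fc, theta_r, et, d):
    """One operator-split time step of a layered soil water balance sketch:
    (1) advective term: gravity drainage of water above field capacity,
    (2) sink term: evapotranspiration from the top layer,
    (3) diffusive term: explicit smoothing between adjacent layers."""
    n = len(theta)
    # 1) advective: route excess above field capacity downward, capped by Ks
    drain = 0.0
    for i in range(n):
        theta[i] += drain
        drain = min(max(theta[i] - theta_fc, 0.0), ks)
        theta[i] -= drain                      # water leaving the bottom layer drains out
    # 2) sink: remove ET from the top layer, never below residual content
    theta[0] = max(theta[0] - et, theta_r)
    # 3) diffusive: matric-potential-driven exchange, approximated by smoothing
    for i in range(n - 1):
        flux = d * (theta[i] - theta[i + 1])
        theta[i] -= flux
        theta[i + 1] += flux
    # clip to the physical range [theta_r, theta_s]
    return [min(max(t, theta_r), theta_s) for t in theta]
```

Because each term is handled with a cheap explicit rule, such a scheme tolerates the coarse space and time discretization mentioned above.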
NASA Astrophysics Data System (ADS)
Jang, Jaeseong; Ahn, Chi Young; Jeon, Kiwan; Choi, Jung-il; Lee, Changhoon; Seo, Jin Keun
2015-03-01
A reconstruction method is proposed here to quantify the distribution of blood flow velocity fields inside the left ventricle from color Doppler echocardiography measurements. From the 3D incompressible Navier-Stokes equations, a 2D incompressible Navier-Stokes equation with a mass source term is derived to utilize the measurable color flow ultrasound data in a plane along with the moving boundary condition. The proposed model reflects out-of-plane blood flows on the imaging plane through the mass source term. To demonstrate the feasibility of the proposed method, we performed numerical simulations of the forward problem and numerical analysis of the reconstruction method. First, we constructed a 3D moving LV region having a specific stroke volume. To obtain synthetic intra-ventricular flows, we performed a numerical simulation of the forward problem of the Navier-Stokes equations inside the 3D moving LV, computed 3D intra-ventricular velocity fields as a solution of the forward problem, projected the 3D velocity fields onto the imaging plane, and took the inner product of the projected 2D velocity fields with the scanline direction to obtain a synthetic scanline-directional projected velocity at each position. The proposed method utilized the 2D synthetic projected velocity data for reconstructing LV blood flow. By computing the difference between the synthetic and reconstructed flow fields, we obtained averaged point-wise errors of 0.06 m/s and 0.02 m/s for the u- and v-components, respectively.
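The projection and error-evaluation steps described above are simple vector operations; a minimal sketch with illustrative names:

```python
def scanline_velocity(u, v, scan_dir):
    """Project an in-plane velocity (u, v) onto the scanline direction,
    mimicking what color Doppler actually measures along each beam."""
    sx, sy = scan_dir
    norm = (sx * sx + sy * sy) ** 0.5
    return (u * sx + v * sy) / norm

def mean_pointwise_error(recon, truth):
    """Averaged point-wise absolute error between reconstructed and synthetic fields."""
    return sum(abs(a - b) for a, b in zip(recon, truth)) / len(recon)
```

Applying `scanline_velocity` at every grid point of the projected forward-problem solution yields the synthetic Doppler data the reconstruction consumes.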
NASA Astrophysics Data System (ADS)
Zhang, Y.; Wen, X.
2017-12-01
The Yellow River source region is situated in the northeast Tibetan Plateau, which is considered a global climate change hot-spot and one of the most sensitive areas in terms of response to global warming in view of its fragile ecosystem. This region plays an irreplaceable role in downstream water supply of the Yellow River because of its unique topography and variable climate. The water and energy cycle processes of the Yellow River source region from July to September 2015 were simulated using the WRF mesoscale numerical model. Two sets of simulations used the Noah and CLM4 land surface parameterization schemes, respectively. Based on the GLDAS data set, ground automatic weather stations and the Zoige plateau wetland ecosystem research station, the simulated near-surface meteorological elements and surface energy parameters of the two schemes were compared against observations. The results showed that the daily variations of meteorological factors at the Zoige station in September were simulated quite well by the model. The correlation coefficients between the simulated and observed temperature and humidity for the CLM scheme were 0.88 and 0.83, the RMSEs were 1.94 °C and 9.97%, and the biases were 0.04 °C and 3.30%, which was closer to the observation data than the Noah scheme. The correlation coefficients of net radiation, surface heat flux, upward shortwave and upward longwave radiation were 0.86, 0.81, 0.84 and 0.88, respectively, corresponding well with the observation data. The sensible heat flux and latent heat flux distributions of the Noah scheme corresponded quite well to GLDAS. The distribution and magnitude of 2 m relative humidity and soil moisture were closer to surface observation data because the CLM scheme describes the photosynthesis and evapotranspiration of land surface vegetation more rationally. The simulating abilities for precipitation and downward longwave radiation need to be improved.
This study provides a theoretical basis for the numerical simulation of water energy cycle in the source region over the Yellow River basin.
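The verification statistics quoted above (correlation coefficient, RMSE, bias) are standard model-evaluation measures; a self-contained sketch:

```python
def verification_stats(sim, obs):
    """Correlation coefficient, RMSE and bias for comparing model output
    (e.g. simulated 2 m temperature) against station observations."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
    ss = sum((s - ms) ** 2 for s in sim) ** 0.5
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    r = cov / (ss * so)                                        # Pearson correlation
    rmse = (sum((s - o) ** 2 for s, o in zip(sim, obs)) / n) ** 0.5
    bias = ms - mo                                             # mean model minus mean obs
    return r, rmse, bias
```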
Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques
NASA Astrophysics Data System (ADS)
Basu, N. B.; Fure, A. D.; Jawitz, J. W.
2006-12-01
Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).
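The equilibrium streamtube concept can be illustrated with a toy ensemble: each non-interacting streamtube discharges water at the DNAPL's solubility until its mass is dissolved, so the flux-averaged source concentration is set by the fraction of streamtubes still active. The depletion rule below is a deliberate simplification of the paper's Lagrangian model (no reactive-tracer parameterization):

```python
def source_concentration(masses, fluxes, c_sol, t):
    """Flux-averaged source concentration at time t for an ensemble of
    equilibrium streamtubes with DNAPL masses and equal-weight water fluxes."""
    active = sum(1 for m, q in zip(masses, fluxes) if t < m / (c_sol * q))
    return c_sol * active / len(masses)
```

Drawing `masses` and travel times from the distributions inferred with non-reactive and reactive tracers would turn this sketch into the prediction step described above.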
NASA Astrophysics Data System (ADS)
Katata, Genki; Chino, Masamichi; Terada, Hiroaki; Kobayashi, Takuya; Ota, Masakazu; Nagai, Haruyasu; Kajino, Mizuo
2014-05-01
Temporal variations of the release amounts of radionuclides during the Fukushima Dai-ichi Nuclear Power Plant (FNPP1) accident and their dispersion process are essential to evaluate the environmental impacts and resultant radiological doses to the public. Here, we estimated a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with coupled atmospheric and oceanic dispersion simulations by WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN, developed by the authors. New schemes for wet, dry, and fog depositions of radioactive iodine gas (I2 and CH3I) and other particles (I-131, Te-132, Cs-137, and Cs-134) were incorporated into WSPEEDI-II. The deposition calculated by WSPEEDI-II was used as input data for ocean dispersion calculations by SEA-GEARN. A reverse estimation method based on simulations by both models assuming a unit release rate (1 Bq h-1) was adopted to estimate the source term at FNPP1 using air dose rates and air and sea surface concentrations. The results suggested that the major releases of radionuclides from FNPP1 occurred in the following periods during March 2011: the afternoon of the 12th, when the venting and hydrogen explosion occurred at Unit 1; the morning of the 13th, after the venting event at Unit 3; midnight on the 14th, when several openings of the SRV (steam relief valve) were conducted at Unit 2; the morning and night of the 15th; and the morning of the 16th. The modified WSPEEDI-II using the newly estimated source term reproduced the local and regional patterns of air dose rate and surface deposition of I-131 and Cs-137 obtained by airborne observations well.
Our dispersion simulations also revealed that the highest radioactive contamination areas around FNPP1 were created from 15th to 16th March by complicated interactions among rainfall (wet deposition), plume movements, and phase properties (gas or particle) of I-131 and release rates associated with reactor pressure variations in Units 2 and 3.
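Because ambient concentration is linear in the source strength, a dispersion run with a unit release rate can be rescaled to match monitoring data. A least-squares sketch of that reverse-estimation step (the actual WSPEEDI-II/SEA-GEARN procedure is far more elaborate, treating many time windows and species):

```python
def reverse_estimate_release(unit_sim, observed):
    """Release rate that best rescales a unit-release (1 Bq/h) simulation to the
    observed values at several monitoring points, by least squares."""
    num = sum(s * o for s, o in zip(unit_sim, observed))
    den = sum(s * s for s in unit_sim)
    return num / den
```

Repeating this for each release period yields the time trend of the source term.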
Mohamad Asri, Muhammad Naeim; Mat Desa, Wan Nur Syuhaila; Ismail, Dzulkiflee
2018-01-01
The potential combination of two nondestructive techniques, Raman spectroscopy (RS) and attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy, with Pearson's product moment correlation (PPMC) coefficient (r) and principal component analysis (PCA) to determine the actual source of red gel pen ink used to write a simulated threatening note was examined. Eighteen (18) red gel pens purchased in Japan and Malaysia from November to December 2014, one of which was used to write a simulated threatening note, were analyzed using RS and ATR-FTIR spectroscopy, respectively. The spectra of all the red gel pen inks, including the ink deposited on the simulated threatening note, gathered from the RS and ATR-FTIR analyses were subjected to PPMC coefficient (r) calculation and PCA. The coefficients r = 0.9985 and r = 0.9912 for the pairwise comparison of RS and ATR-FTIR spectra, respectively, and the similarity in PC1 and PC2 scores of one of the inks to the ink deposited on the simulated threatening note substantiated the feasibility of combining RS and ATR-FTIR spectroscopy with the PPMC coefficient (r) and PCA for successful source determination of red gel pen inks. The development of a pigment spectral library allowed the ink deposited on the threatening note to be identified as XSL Poppy Red (CI Pigment Red 112). © 2017 American Academy of Forensic Sciences.
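The PPMC comparison of spectra, plus a best-match search over a spectral library, can be sketched as follows (illustrative code, not the authors'; spectra are assumed to be sampled on a common wavenumber axis):

```python
def ppmc(x, y):
    """Pearson's product moment correlation coefficient between two spectra."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_match(questioned, library):
    """Index of the library spectrum with the highest r against the questioned ink."""
    return max(range(len(library)), key=lambda i: ppmc(questioned, library[i]))
```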
Characteristics of Wind Generated Waves in the Delaware Estuary
NASA Astrophysics Data System (ADS)
Chen, J. L.; Ralston, D. K.; Geyer, W. R.; Chant, R. J.; Sommerfield, C. K.
2016-02-01
Coastal marshes provide important services for human uses such as fisheries, recreation, ports and marine operations. Bombay Hook Wildlife Refuge, located along the western shore of the Delaware Estuary, has experienced substantial loss of salt marsh in recent decades. To evaluate the importance of the different mechanisms that cause the observed shoreline retreat, wave gauges were deployed along the dredged navigation channel and shoreline in the Delaware Estuary. A coupled wave and circulation modeling system (SWAN/ROMS) based on the most recent bathymetry (last updated 2013) is validated with waves observed during both calm and energetic conditions in November 2015. Simulation results based on different model parameterizations of whitecapping, bottom friction and the wind input source are compared. The tendency of the observed wave steepness is more similar to a revised whitecapping source term [Westhuysen, 2007] than to the default formulation in the SWAN model. Both model results and field data show that the generation/dissipation of waves in the Delaware Estuary is determined by the local wind speed and channel depth. Whitecapping-induced energy dissipation is dominant in the channel, while dissipation due to bottom friction and depth-induced breaking becomes important on the lateral shoals. To characterize the effects of wind fetch on waves in estuaries more generally, simulations with an idealized domain and varying wind conditions are compared and the results are expressed in terms of non-dimensional parameters. The simulations based on a uniform idealized channel of 10 m depth show that the dissipation of waves is mainly controlled by whitecapping in all wind conditions. Under strong wind conditions (wind speed > 10 m/s) the effect of bottom friction becomes important, so the simulated wave heights are no longer linearly correlated with wind speed.
Turbulent transport in premixed flames
NASA Technical Reports Server (NTRS)
Rutland, C. J.; Cant, R. S.
1994-01-01
Simulations of planar, premixed turbulent flames with heat release were used to study turbulent transport. Reynolds stress and Reynolds flux budgets were obtained and used to guide the investigation of important physical effects. Essentially all pressure terms in the transport equations were found to be significant. In the Reynolds flux equations, these terms are the major source of counter-gradient transport. Viscous and molecular terms were also found to be significant, with both dilatational and solenoidal terms contributing to the Reynolds stress dissipation. The BML theory of premixed turbulent combustion was critically examined in detail. The BML bimodal pdf was found to agree well with the DNS data. All BML decompositions, through the third moments, show very good agreement with the DNS results. Several BML models for conditional terms were checked using the DNS data and were found to require more extensive development.
NASA Astrophysics Data System (ADS)
Medellin-Azuara, J.; Fraga, C. C. S.; Marques, G.; Mendes, C. A.
2015-12-01
The expansion and operation of urban water supply systems under rapidly growing demands, hydrologic uncertainty, and scarce water supplies requires a strategic combination of various supply sources for added reliability, reduced costs and improved operational flexibility. The design and operation of such a portfolio of water supply sources merits decisions of what and when to expand, and how much to use of each available source, accounting for interest rates, economies of scale and hydrologic variability. The present research provides a framework and an integrated methodology that optimizes the expansion of various water supply alternatives using dynamic programming, combining both short-term and long-term optimization of water use with simulation of water allocation. A case study in Bahia Do Rio Dos Sinos in Southern Brazil is presented. The framework couples a quadratic programming optimization model in GAMS with WEAP, a rainfall-runoff simulation model that hosts the water supply infrastructure features and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions and (b) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of more system expansion. Results also highlight the potential of various water supply alternatives including conservation, groundwater, and infrastructural enhancements over time. The framework proves useful for planning and is transferable to similarly urbanized systems.
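The "what and when to expand" decision under an interest rate can be illustrated with a toy dynamic program whose state is installed capacity and whose stages are planning periods. This sketch ignores the hydrologic uncertainty, economies of scale, and allocation simulation handled by the actual GAMS/WEAP framework; all names are illustrative:

```python
from functools import lru_cache

def cheapest_expansion(demand_by_stage, options, rate=0.05):
    """Minimum present-value capital cost to meet demand at every stage.
    options: tuple of (added_capacity, capital_cost); at most one build per stage."""

    @lru_cache(maxsize=None)
    def best(stage, capacity):
        if stage == len(demand_by_stage):
            return 0.0
        discount = 1.0 / (1.0 + rate) ** stage
        costs = []
        # choice: build nothing now, or build exactly one option now
        for add, cost in ((0.0, 0.0),) + tuple(options):
            cap = capacity + add
            if cap >= demand_by_stage[stage]:      # demand must be met this stage
                costs.append(cost * discount + best(stage + 1, cap))
        return min(costs) if costs else float("inf")

    return best(0, 0.0)
```

For growing demand, discounting makes deferring a build attractive, but a single large early build can still win; the recursion trades these off exactly, which is the behavior the framework explores at realistic scale.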
High-resolution threshold photoionization of N2O
NASA Technical Reports Server (NTRS)
Wiedmann, R. T.; Grant, E. R.; Tonkyn, R. G.; White, M. G.
1991-01-01
Pulsed field ionization (PFI) has been used in conjunction with a coherent VUV source to obtain high-resolution threshold photoelectron spectra for the (000), (010), (020), and (100) vibrational states of the N2O(+) cation. Simulations for the rotational profiles of each vibronic level were obtained by fitting the Buckingham-Orr-Sichel equations using accurate spectroscopic constants for the ground states of the neutral and the ion. The relative branch intensities are interpreted in terms of the partial waves of the outgoing photoelectron to which the ionic core is coupled and in terms of the angular momentum transferred to the core.
Short-term Wind Forecasting at Wind Farms using WRF-LES and Actuator Disk Model
NASA Astrophysics Data System (ADS)
Kirkil, Gokhan
2017-04-01
Short-term wind forecasts are obtained for a wind farm on a mountainous terrain using WRF-LES. Multi-scale simulations are also performed using different PBL parameterizations. Turbines are parameterized using Actuator Disc Model. LES models improved the forecasts. Statistical error analysis is performed and ramp events are analyzed. Complex topography of the study area affects model performance, especially the accuracy of wind forecasts were poor for cross valley-mountain flows. By means of LES, we gain new knowledge about the sources of spatial and temporal variability of wind fluctuations such as the configuration of wind turbines.
Coral proxy record of decadal-scale reduction in base flow from Moloka'i, Hawaii
Prouty, Nancy G.; Jupiter, Stacy D.; Field, Michael E.; McCulloch, Malcolm T.
2009-01-01
Groundwater is a major resource in Hawaii and is the principal source of water for municipal, agricultural, and industrial use. With a growing population, a long-term downward trend in rainfall, and the need for proper groundwater management, a better understanding of the hydroclimatological system is essential. Proxy records from corals can supplement long-term observational networks, offering an accessible source of hydrologic and climate information. To develop a qualitative proxy for historic groundwater discharge to coastal waters, a suite of rare earth elements and yttrium (REYs) were analyzed from coral cores collected along the south shore of Moloka'i, Hawaii. The coral REY to calcium (Ca) ratios were evaluated against hydrological parameters, yielding the strongest relationship to base flow. Dissolution of REYs from labradorite and olivine in the basaltic rock aquifers is likely the primary source of coastal ocean REYs. There was a statistically significant downward trend (−40%) in subannually resolved REY/Ca ratios over the last century. This is consistent with long-term records of stream discharge from Moloka'i, which imply a downward trend in base flow since 1913. A decrease in base flow is observed statewide, consistent with the long-term downward trend in annual rainfall over much of the state. With greater demands on freshwater resources, it is appropriate for withdrawal scenarios to consider long-term trends and short-term climate variability. It is possible that coral paleohydrological records can be used to conduct model-data comparisons in groundwater flow models used to simulate changes in groundwater level and coastal discharge.
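The century-scale downward trend reported above is, in essence, an ordinary least-squares slope fitted to the subannually resolved REY/Ca series; a minimal sketch:

```python
def linear_trend(t, y):
    """Ordinary least-squares slope of y against time t, e.g. to quantify a
    long-term decline in a coral REY/Ca record (units of y per unit of t)."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    num = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
    den = sum((ti - mt) ** 2 for ti in t)
    return num / den
```

Expressing the fitted total change relative to the series' starting value gives a percentage decline like the -40% quoted above.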
QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials.
Giannozzi, Paolo; Baroni, Stefano; Bonini, Nicola; Calandra, Matteo; Car, Roberto; Cavazzoni, Carlo; Ceresoli, Davide; Chiarotti, Guido L; Cococcioni, Matteo; Dabo, Ismaila; Dal Corso, Andrea; de Gironcoli, Stefano; Fabris, Stefano; Fratesi, Guido; Gebauer, Ralph; Gerstmann, Uwe; Gougoussis, Christos; Kokalj, Anton; Lazzeri, Michele; Martin-Samos, Layla; Marzari, Nicola; Mauri, Francesco; Mazzarello, Riccardo; Paolini, Stefano; Pasquarello, Alfredo; Paulatto, Lorenzo; Sbraccia, Carlo; Scandolo, Sandro; Sclauzero, Gabriele; Seitsonen, Ari P; Smogunov, Alexander; Umari, Paolo; Wentzcovitch, Renata M
2009-09-30
QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). The acronym ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly-restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and interoperable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
NASA Astrophysics Data System (ADS)
Nigam, Kaushal; Kondekar, Pravin; Sharma, Dheeraj; Raad, Bhagwan Ram
2016-10-01
For the first time, a distinctive approach based on the electrically doped concept is used for the formation of a novel double gate tunnel field effect transistor (TFET). For this, an initially heavily doped n+ substrate is converted into n+-i-n+-i (drain-channel-source) by the selection of appropriate work functions of the control gate (CG) and polarity gate (PG) as 4.7 eV. Further, the formation of the p+ region for the source is performed by applying -1.2 V at the PG. Hence, the structure behaves like an n+-i-n+-p+ gated TFET, where the control gate is used to modulate the effective tunneling barrier width. The physical realization of a delta-doped n+ layer near the source region, which improves the device performance in terms of ON current and subthreshold slope, is a challenging task. The proposed work thus provides a better platform for fabrication of an n+-i-n+-p+ TFET with low cost and suppressed random dopant fluctuation (RDF) effects. The ATLAS TCAD device simulator is used to carry out the simulation work.
High speed imaging of dynamic processes with a switched source x-ray CT system
NASA Astrophysics Data System (ADS)
Thompson, William M.; Lionheart, William R. B.; Morton, Edward J.; Cunningham, Mike; Luggar, Russell D.
2015-05-01
Conventional x-ray computed tomography (CT) scanners are limited in their scanning speed by the mechanical constraints of their rotating gantries and as such do not provide the necessary temporal resolution for imaging of fast-moving dynamic processes, such as moving fluid flows. The Real Time Tomography (RTT) system is a family of fast cone beam CT scanners which instead use multiple fixed discrete sources and complete rings of detectors in an offset geometry. We demonstrate the potential of this system for use in the imaging of such high speed dynamic processes and give results using simulated and real experimental data. The unusual scanning geometry results in some challenges in image reconstruction, which are overcome using algebraic iterative reconstruction techniques and explicit regularisation. Through the use of a simple temporal regularisation term and by optimising the source firing pattern, we show that temporal resolution of the system may be increased at the expense of spatial resolution, which may be advantageous in some situations. Results are given showing temporal resolution of approximately 500 µs with simulated data and 3 ms with real experimental data.
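The "simple temporal regularisation term" can be illustrated by gradient descent on a frame-coupled least-squares objective, sum over t of ||A x_t - b_t||^2 + lam ||x_t - x_{t-1}||^2. This toy dense-matrix version stands in for the paper's algebraic iterative reconstruction of RTT data; sizes and names are illustrative:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def rmatvec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def reconstruct(A, frames, lam=0.1, iters=400, step=0.1):
    """Gradient descent on a least-squares data term per time frame plus an
    explicit temporal regularisation term coupling neighbouring frames."""
    T, n = len(frames), len(A[0])
    X = [[0.0] * n for _ in range(T)]
    for _ in range(iters):
        for t in range(T):
            r = [ai - bi for ai, bi in zip(matvec(A, X[t]), frames[t])]
            g = rmatvec(A, r)                      # data-fit gradient A^T (A x - b)
            if t > 0:                              # penalise change from previous frame
                g = [gi + lam * (xt - xp) for gi, xt, xp in zip(g, X[t], X[t - 1])]
            if t < T - 1:                          # and from the next frame
                g = [gi + lam * (xt - xn) for gi, xt, xn in zip(g, X[t], X[t + 1])]
            X[t] = [xi - step * gi for xi, gi in zip(X[t], g)]
    return X
```

Raising `lam` trades temporal resolution for noise robustness, the same trade the abstract describes against spatial resolution.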
NASA Astrophysics Data System (ADS)
Zhao, Yang; Dai, Rui-Na; Xiao, Xiang; Zhang, Zong; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe
2017-02-01
Two-person neuroscience, a perspective in understanding human social cognition and interaction, involves designing immersive social interaction experiments as well as simultaneously recording brain activity of two or more subjects, a process termed "hyperscanning." Using newly developed imaging techniques, the interbrain connectivity or hyperlink of various types of social interaction has been revealed. Functional near-infrared spectroscopy (fNIRS)-hyperscanning provides a more naturalistic environment for experimental paradigms of social interaction and has recently drawn much attention. However, most fNIRS-hyperscanning studies have computed hyperlinks using sensor data directly while ignoring the fact that the sensor-level signals contain confounding noises, which may lead to a loss of sensitivity and specificity in hyperlink analysis. In this study, on the basis of independent component analysis (ICA), a source-level analysis framework is proposed to investigate the hyperlinks in a fNIRS two-person neuroscience study. The performance of five widely used ICA algorithms in extracting sources of interaction was compared in simulative datasets, and increased sensitivity and specificity of hyperlink analysis by our proposed method were demonstrated in both simulative and real two-person experiments.
Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals
NASA Technical Reports Server (NTRS)
Lockard, David P.; Casper, Jay H.
2005-01-01
The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not been obtained between noise predictions using the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.
NASA Astrophysics Data System (ADS)
Farmer, W. H.; Kiang, J. E.
2017-12-01
The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regards to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
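Statistics like those named above follow directly from a simulated daily hydrograph, which is the appeal of the hydrograph-based approach; two illustrative helpers:

```python
def mean_flow(daily_flows):
    """Mean streamflow of a daily hydrograph."""
    return sum(daily_flows) / len(daily_flows)

def seven_day_min(daily_flows):
    """Minimum 7-day moving-average flow, the building block of low-flow
    statistics such as the minimum seven-day average exceeded in 90% of years."""
    return min(sum(daily_flows[i:i + 7]) / 7.0
               for i in range(len(daily_flows) - 6))
```

Any bias in the simulated hydrograph propagates through such helpers into every derived statistic, which is exactly the concern the comparison above quantifies.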
Hazard assessment of long-period ground motions for the Nankai Trough earthquakes
NASA Astrophysics Data System (ADS)
Maeda, T.; Morikawa, N.; Aoi, S.; Fujiwara, H.
2013-12-01
We evaluate a seismic hazard for long-period ground motions associated with the Nankai Trough earthquakes (M8~9) in southwest Japan. Large interplate earthquakes occurring around the Nankai Trough have caused serious damage due to strong ground motions and tsunami; the most recent events were in 1944 and 1946. Such large interplate earthquakes can also damage high-rise and large-scale structures through long-period ground motions (e.g., the 1985 Michoacan earthquake in Mexico and the 2003 Tokachi-oki earthquake in Japan). Long-period ground motions are amplified particularly on basins. Because major cities along the Nankai Trough have developed on alluvial plains, it is important to evaluate long-period ground motions as well as strong motions and tsunami for the anticipated Nankai Trough earthquakes. The long-period ground motions are evaluated by the finite difference method (FDM) using 'characterized source models' and a 3-D underground structure model. A 'characterized source model' refers to a source model including the source parameters necessary for reproducing the strong ground motions. The parameters are determined based on a 'recipe' for predicting strong ground motion (Earthquake Research Committee (ERC), 2009). We construct various source models (~100 scenarios) covering various cases of source parameters such as source region, asperity configuration, and hypocenter location. Each source region is determined by 'the long-term evaluation of earthquakes in the Nankai Trough' published by the ERC. The asperity configuration and hypocenter location control the rupture directivity effects. These parameters are important because our preliminary simulations are strongly affected by rupture directivity. We apply the system called GMS (Ground Motion Simulator), which simulates seismic wave propagation based on a 3-D FDM scheme using discontinuous grids (Aoi and Fujiwara, 1999), to our study. 
The grid spacing for the shallow region is 200 m in the horizontal and 100 m in the vertical; the grid spacing for the deep region is three times coarser. The total number of grid points is about three billion. The 3-D underground structure model used in the FD simulation is the Japan integrated velocity structure model (ERC, 2012). Our simulation is valid for periods longer than two seconds given the lowest S-wave velocity and the grid spacing. However, because the characterized source model may not sufficiently represent short-period components, the reliable period range of this simulation should be interpreted with caution; we therefore consider periods longer than five seconds, instead of two seconds, for further analysis. We evaluate the long-period ground motions using velocity response spectra for the period range between five and 20 seconds. The preliminary simulation shows a large variation of response spectra at a given site, which implies that the ground motion is very sensitive to the scenario; this variation must be studied to understand the seismic hazard. Our further study will obtain hazard curves for the Nankai Trough earthquakes (M 8~9) by applying probabilistic seismic hazard analysis to the simulation results.
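One point of a velocity response spectrum is the peak velocity of a damped single-degree-of-freedom oscillator driven by the simulated ground acceleration. A semi-implicit Euler sketch (the time-stepping scheme and default 5% damping are simplifying assumptions for illustration):

```python
import math

def velocity_response(ground_accel, dt, period, damping=0.05):
    """Peak relative-velocity response of a damped SDOF oscillator with the
    given natural period to a ground acceleration record (m/s^2)."""
    w = 2.0 * math.pi / period
    x, v, peak = 0.0, 0.0, 0.0
    for a in ground_accel:
        acc = -a - 2.0 * damping * w * v - w * w * x   # relative equation of motion
        v += acc * dt
        x += v * dt
        peak = max(peak, abs(v))
    return peak
```

Sweeping `period` from 5 s to 20 s over a simulated waveform traces out the velocity response spectrum used in the evaluation above.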
Post audit of a numerical prediction of wellfield drawdown in a semiconfined aquifer system
Stewart, M.; Langevin, C.
1999-01-01
A numerical ground water flow model was created in 1978 and revised in 1981 to predict the drawdown effects of a proposed municipal wellfield permitted to withdraw 30 million gallons per day (mgd; 1.1 x 10^5 m^3/day) of water from the semiconfined Floridan Aquifer system. The predictions are based on the assumption that water levels in the semiconfined Floridan Aquifer reach a long-term, steady-state condition within a few days of initiation of pumping. Using this assumption, a 75 day simulation without water table recharge, pumping at the maximum permitted rates, was considered to represent a worst-case condition and the greatest drawdowns that could be experienced during wellfield operation. This method of predicting wellfield effects was accepted by the permitting agency. For this post audit, observed drawdowns were derived by taking the difference between pre-pumping and post-pumping potentiometric surface levels. Comparison of predicted and observed drawdowns suggests that actual drawdown over a 12 year period exceeds predicted drawdown by a factor of two or more. Analysis of the source of error in the 1981 predictions suggests that the values used for transmissivity, storativity, specific yield, and leakance are reasonable at the wellfield scale. Simulation using actual 1980-1992 pumping rates improves the agreement between predicted and observed drawdowns. The principal source of error is the assumption that water levels in a semiconfined aquifer achieve a steady-state condition after a few days or weeks of pumping. Simulations using a version of the 1981 model modified to include recharge and evapotranspiration suggest that it can take hundreds of days or several years for water levels in the linked Surficial and Floridan Aquifers to reach an apparent steady-state condition, and that slow declines in levels continue for years after the initiation of pumping.
While the 1981 'impact' model can be used for reasonably predicting short-term, wellfield-scale effects of pumping, using a 75 day long simulation without recharge to predict the long-term behavior of the wellfield was an inappropriate application, resulting in significant underprediction of wellfield effects.
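The slow approach to steady state described above can be illustrated with the classical Theis solution, whose drawdown grows roughly logarithmically with time. This is a generic confined-aquifer sketch with hypothetical parameter values, not the 1981 model itself.

```python
import numpy as np

def well_function(u, n_terms=40):
    """Theis well function W(u) = E1(u) via its convergent series (u < 1):
    W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.5772156649015329
    s, a = -gamma - np.log(u), 1.0
    for n in range(1, n_terms):
        a *= -u / n           # a = (-u)^n / n!
        s -= a / n
    return s

def theis_drawdown(Q, T, S, r, t):
    """Transient confined-aquifer drawdown [m]: Q pumping rate [m^3/d],
    T transmissivity [m^2/d], S storativity [-], r radius [m], t time [d]."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * well_function(u)

# Hypothetical wellfield-scale values (not those of the 1981 model):
Q, T, S, r = 4.0e4, 5.0e3, 2.0e-4, 500.0
for t in (75.0, 365.0, 12 * 365.0):
    print(f"t = {t:6.0f} d  drawdown = {theis_drawdown(Q, T, S, r, t):.2f} m")
```

For small u the drawdown increases in proportion to ln(t), so a 75 day simulation necessarily understates drawdown accumulated over 12 years.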
NASA Astrophysics Data System (ADS)
Møll Nilsen, Halvor; Lie, Knut-Andreas; Andersen, Odd
2015-06-01
MRST-co2lab is a collection of open-source computational tools for modeling large-scale and long-time migration of CO2 in conductive aquifers, combining ideas from basin modeling, computational geometry, hydrology, and reservoir simulation. Herein, we employ the methods of MRST-co2lab to study long-term CO2 storage on the scale of hundreds of megatonnes. We consider public data sets of two aquifers from the Norwegian North Sea and use geometrical methods for identifying structural traps, percolation-type methods for identifying potential spill paths, and vertical-equilibrium methods for efficient simulation of structural, residual, and solubility trapping in a thousand-year perspective. In particular, we investigate how data resolution affects estimates of storage capacity and discuss workflows for identifying good injection sites and optimizing injection strategies.
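The geometrical identification of structural traps can be sketched with a priority-flood pass over a top-surface depth map: CO2 migrating updip pools under local domes until it reaches a spill point. This is a minimal Python illustration of the idea only, not the actual MRST-co2lab (MATLAB) implementation.

```python
import heapq
import numpy as np

def trap_depths(depth):
    """Priority-flood over a caprock depth map (positive down). Returns the
    per-cell trapped column height: spill level minus depth, positive inside
    structural traps (local depth minima, i.e. domes)."""
    nz = np.asarray(depth, float)
    ny, nx = nz.shape
    spill = np.full((ny, nx), np.inf)
    seen = np.zeros((ny, nx), bool)
    heap = []
    # seed the flood from the open boundary of the domain
    for i in range(ny):
        for j in range(nx):
            if i in (0, ny - 1) or j in (0, nx - 1):
                spill[i, j] = nz[i, j]
                seen[i, j] = True
                heapq.heappush(heap, (nz[i, j], i, j))
    while heap:
        level, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not seen[a, b]:
                seen[a, b] = True
                spill[a, b] = max(nz[a, b], level)  # pass depth along the path
                heapq.heappush(heap, (spill[a, b], a, b))
    return spill - nz
```

Cells with a positive trapped height lie inside structural traps; summing height times cell area and porosity would give a rough static capacity estimate of the kind discussed above.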
Assessments of a Turbulence Model Based on Menter's Modification to Rotta's Two-Equation Model
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.
2013-01-01
The main objective of this paper is to construct a turbulence model with a more reliable second equation for the length scale. In the present paper, we assess the length scale equation based on Menter's modification to Rotta's two-equation model. Rotta showed that a reliable second equation can be formed as an exact transport equation for the turbulent length scale L and kinetic energy. Rotta's equation is well suited for term-by-term modeling and shows some interesting features compared with other approaches. The most important difference is that the formulation leads to a natural inclusion of higher-order velocity derivatives in the source terms of the scale equation, which has the potential to enhance the capability of Reynolds-averaged Navier-Stokes (RANS) methods to simulate unsteady flows. The model is implemented in the PAB3D solver with complete formulation, usage methodology, and validation examples to demonstrate its capabilities. The detailed studies include grid convergence; near-wall and shear-flow cases are documented and compared with experimental and Large Eddy Simulation (LES) data. The results from this formulation are as good as or better than those of the well-known SST turbulence model and much better than k-epsilon results. Overall, the study provides useful insights into the model's capability in predicting attached and separated flows.
NASA Technical Reports Server (NTRS)
Chin, Mian; Diehl, Thomas; Bian, Huisheng; Yu, Hongbin
2008-01-01
We present a global model study of the role aerosols play in the transition of solar radiation at Earth's surface from a decreasing (dimming) trend to an increasing (brightening) trend. Our primary objective is to understand the relationship between the long-term trends of aerosol emissions, atmospheric burden, and surface solar radiation. More specifically, we use recently compiled, comprehensive global emission datasets of aerosols and their precursors from fuel combustion, biomass burning, volcanic eruptions, and other sources from 1980 to 2006 to simulate long-term variations in aerosol distributions and optical properties, and then calculate the multi-decadal changes in short-wave radiative fluxes at the surface and at the top of the atmosphere by coupling the GOCART model's simulated aerosols with the Goddard radiative transfer model. The model results are compared with long-term observational records from ground-based networks and satellite data. We address the following critical questions: To what extent can the observed surface solar radiation trends, known as the transition from dimming to brightening, be explained by changes in anthropogenic and natural aerosol loading on global and regional scales? What are the relative contributions of local emission and long-range transport to the surface radiation budget, and how do these contributions change with time?
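The trend analysis described above can be sketched as a least-squares fit over sub-periods. The series below is entirely synthetic, purely to illustrate the dimming-to-brightening diagnostic; it is not GOCART output or observational data.

```python
import numpy as np

def decadal_trend(years, flux):
    """Least-squares linear trend of a surface-flux series, in W m^-2 per decade."""
    slope = np.polyfit(years, flux, 1)[0]
    return 10.0 * slope

# Synthetic illustration: dimming until 1990, brightening afterwards.
years = np.arange(1980, 2007)
flux = np.where(years < 1990,
                185.0 - 0.5 * (years - 1980),   # -5 W m^-2 per decade
                180.0 + 0.3 * (years - 1990))   # +3 W m^-2 per decade
print(decadal_trend(years[years < 1990], flux[years < 1990]))
print(decadal_trend(years[years >= 1990], flux[years >= 1990]))
```

Fitting the two sub-periods separately, rather than the full record, is what exposes the sign change in the trend.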
pyJac: Analytical Jacobian generator for chemical kinetics
NASA Astrophysics Data System (ADS)
Niemeyer, Kyle E.; Curtis, Nicholas J.; Sung, Chih-Jen
2017-06-01
Accurate simulations of combustion phenomena require the use of detailed chemical kinetics in order to capture limit phenomena such as ignition and extinction as well as predict pollutant formation. However, the chemical kinetic models for hydrocarbon fuels of practical interest typically have large numbers of species and reactions and exhibit high levels of mathematical stiffness in the governing differential equations, particularly for larger fuel molecules. In order to integrate the stiff equations governing chemical kinetics, reactive-flow simulations generally rely on implicit algorithms that require frequent Jacobian matrix evaluations. Some in situ and a posteriori computational diagnostics methods also require accurate Jacobian matrices, including computational singular perturbation and chemical explosive mode analysis. Typically, these are approximated numerically with finite differences, but for larger chemical kinetic models this poses significant computational demands since the number of chemical source term evaluations scales with the square of species count. Furthermore, existing analytical Jacobian tools do not optimize evaluations or support emerging SIMD processors such as GPUs. Here we introduce pyJac, a Python-based open-source program that generates analytical Jacobian matrices for use in chemical kinetics modeling and analysis. In addition to producing the necessary customized source code for evaluating reaction rates (including all modern reaction rate formulations), the chemical source terms, and the Jacobian matrix, pyJac uses an optimized evaluation order to minimize computational and memory operations. As a demonstration, we first establish the correctness of the Jacobian matrices for kinetic models of hydrogen, methane, ethylene, and isopentanol oxidation (with the number of species ranging from 13 to 360) by showing agreement within 0.001% of matrices obtained via automatic differentiation.
We then demonstrate the performance achievable on CPUs and GPUs using pyJac via matrix evaluation timing comparisons; the routines produced by pyJac outperformed first-order finite differences by 3-7.5 times and the existing analytical Jacobian software TChem by 1.1-2.2 times on a single-threaded basis. It is noted that TChem is not thread-safe, while pyJac is easily parallelized, and hence can greatly outperform TChem on multicore CPUs. The Jacobian matrix generator we describe here will be useful for reducing the cost of integrating chemical source terms with implicit algorithms in particular and algorithms that require an accurate Jacobian matrix in general. Furthermore, the open-source release of the program and Python-based implementation will enable wide adoption.
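The finite-difference versus analytical Jacobian comparison can be illustrated on a toy reversible reaction. This generic sketch is not pyJac's generated code; the rate constants and species are placeholders.

```python
import numpy as np

def source(y, kf=2.0, kr=1.0):
    """Toy source term for A <=> B: dy/dt = [-r, r] with r = kf*yA - kr*yB."""
    r = kf * y[0] - kr * y[1]
    return np.array([-r, r])

def jac_analytical(y, kf=2.0, kr=1.0):
    """Exact Jacobian d(source)/dy for the toy system (constant here)."""
    return np.array([[-kf, kr], [kf, -kr]])

def jac_fd(f, y, eps=1e-7):
    """First-order forward differences: N+1 source evaluations for N species."""
    f0, n = f(y), len(y)
    J = np.empty((n, n))
    for j in range(n):
        yp = y.copy()
        yp[j] += eps
        J[:, j] = (f(yp) - f0) / eps
    return J

y = np.array([0.7, 0.3])
print(np.max(np.abs(jac_fd(source, y) - jac_analytical(y))))
```

For N species the forward-difference Jacobian needs N+1 source-term evaluations per call, which is exactly the cost that analytical generation avoids for large models.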
Open source software integrated into data services of Japanese planetary explorations
NASA Astrophysics Data System (ADS)
Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.
2015-12-01
Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data through simple methods such as HTTP directory listings for long-term preservation, while also offering rich web applications for ease of access built on modern web technologies and open source software. This presentation showcases the use of open source software across our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS); the open source MapServer is adopted as the WMS server. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for SELENE's data, mainly for public outreach; it was developed with the NASA World Wind Java SDK. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations; it uses Highcharts to draw graphs in web browsers. FLOW is a tool to simulate the field of view of an instrument onboard a spacecraft. FLOW is itself open source software developed by JAXA/ISAS under the BSD 3-Clause License, and the SPICE Toolkit is essential to compile it. The SPICE Toolkit is also open source software, developed by NASA/JPL, and its website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool for integrating DARTS services.
Near-Source Shaking and Dynamic Rupture in Plastic Media
NASA Astrophysics Data System (ADS)
Gabriel, A.; Mai, P. M.; Dalguer, L. A.; Ampuero, J. P.
2012-12-01
Recent well-recorded earthquakes show a high degree of complexity at the source level that severely affects the resulting ground motion in near- and far-field seismic data. In our study, we focus on investigating source-dominated near-field ground motion features from numerical dynamic rupture simulations in an elasto-visco-plastic bulk. Our aim is to contribute to a more direct connection from theoretical and computational results to field and seismological observations. Previous work showed that a diversity of rupture styles emerges from simulations on faults governed by velocity-and-state-dependent friction with rapid velocity weakening at high slip rate. For instance, growing pulses lead to re-activation of slip due to gradual stress build-up near the hypocenter, as inferred in some source studies of the 2011 Tohoku-Oki earthquake. Moreover, off-fault energy dissipation implied physical limits on extreme ground motion by limiting peak slip rate and rupture velocity. We investigate characteristic features in near-field strong ground motion generated by dynamic in-plane rupture simulations. We present effects of plasticity on source process signatures, off-fault damage patterns, and ground shaking. Independent of rupture style, asymmetric damage patterns across the fault are produced that contribute to the total seismic moment, even dominantly at high angles between the fault and the maximum principal background stress. The off-fault plastic strain fields induced by transitions between rupture styles reveal characteristic signatures of the mechanical source processes during the transition. Comparing different rupture styles in elastic and elasto-visco-plastic media to identify signatures of off-fault plasticity, we find varying degrees of alteration of near-field radiation due to plastic energy dissipation. Subshear pulses suffer more peak particle velocity reduction due to plasticity than cracks. Supershear ruptures are affected even more.
The occurrence of multiple rupture fronts affects the seismic potency release rate, amplitude spectra, peak particle velocity distributions, and near-field seismograms. Our simulations enable us to trace features of source processes in synthetic seismograms, for example those exhibiting re-activation of slip. Such physical models may provide starting points for future investigations of field properties of earthquake source mechanisms and natural fault conditions. In the long term, our findings may be helpful for seismic hazard analysis and the improvement of seismic source models.
NASA Astrophysics Data System (ADS)
Zorita, E.
2009-09-01
Two European temperature records for the past half-millennium, January-to-April air temperature for Stockholm (Sweden) and seasonal temperature for a Central European region, both derived from the analysis of documentary sources combined with long instrumental records, are compared with the output of forced (solar, volcanic, greenhouse gases) climate simulations with the model ECHO-G. The analysis is complemented with the long (early)-instrumental record of Central England Temperature (CET). Both approaches to study past climates (simulations and reconstructions) are burdened with uncertainties. The main objective of this comparative analysis is to identify robust features and weaknesses that may help to improve models and reconstruction methods. The results indicate a general agreement between simulations and the reconstructed Stockholm and CET records regarding the long-term temperature trend over the recent centuries, suggesting a reasonable choice of the amplitude of the solar forcing in the simulations and sensitivity of the model to the external forcing. However, the Stockholm reconstruction and the CET record also show a long and clear multi-decadal warm episode peaking around 1730, which is absent in the simulations. The uncertainties associated with the reconstruction method or with the simulated internal climate variability cannot easily explain this difference. Regarding the interannual variability, the Stockholm series displays in some periods higher amplitudes than the simulations but these differences are within the statistical uncertainty and further decrease if output from a regional model driven by the global model is used. The long-term trends in the simulations and reconstructions of the Central European temperature agree less well. The reconstructed temperature displays, for all seasons, a smaller difference between the present climate and past centuries than the simulations. 
Possible reasons for these differences may be related to the limited ability of the traditional technique for converting documentary evidence into temperature values to capture long-term climate changes, because the documents often reflect temperatures relative to the contemporary authors' own perception of what constituted 'normal' conditions. By contrast, the simulated and reconstructed inter-annual variability are in rather good agreement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan can be obtained under suitable material properties and system parameters. A sensitivity analysis of design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) a downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
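The dependence of output on mass flow rate noted above follows from a simple energy balance over the doublet. The sketch below uses hypothetical temperatures, conversion efficiency, and flow rates, not FALCON results.

```python
def thermal_power_mw(mdot, t_prod, t_inj, cp=4186.0):
    """Thermal output [MW] of a doublet: mdot [kg/s], temperatures [deg C],
    cp water heat capacity [J/(kg K)]."""
    return mdot * cp * (t_prod - t_inj) / 1.0e6

def electric_power_mw(mdot, t_prod, t_inj, eta=0.12):
    """Rough electric output [MW], assuming a low-temperature plant efficiency."""
    return eta * thermal_power_mw(mdot, t_prod, t_inj)

# Hypothetical numbers: 65 K/km gradient, ~3 km reservoir, 15 C at surface
t_res = 15.0 + 65.0 * 3.0            # ~210 C production temperature
for mdot in (25.0, 50.0, 100.0):     # flow rate strongly controls output
    print(mdot, round(electric_power_mw(mdot, t_res, 70.0), 2))
```

The linear scaling with flow rate holds only while the production temperature is sustained; in the reservoir simulations, higher flow also accelerates thermal drawdown, which is what shortens the lifespan.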
NASA Astrophysics Data System (ADS)
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model, and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably to, and even better than, the more complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
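The linearized Bregman iteration itself is only a few lines: soft-thresholding plus an accumulated-residual variable. The sketch below runs it on a toy sparse-recovery problem; the dimensions, threshold mu, and iteration count are illustrative placeholders, and this is not an FWI model update.

```python
import numpy as np

def linearized_bregman(A, b, mu, n_iter=5000):
    """Linearized Bregman iteration for  min ||x||_1  s.t.  A x = b.
    x is obtained by soft-thresholding the accumulated residual variable v;
    the step size delta <= 1/||A||_2^2 ensures convergence."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (b - A @ x)                               # dual ascent
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink
    return x

# toy compressive-sensing problem: recover a 5-sparse vector from 60 samples
rng = np.random.default_rng(0)
m, n, k = 60, 128, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_rec = linearized_bregman(A, b, mu=5.0)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # relative error
```

The contrast with SPGℓ1 claimed in the abstract is plausible from the sketch alone: there is no line search, projection, or root-finding, only matrix-vector products and a shrinkage.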
A study of the Ljubljansko polje aquifer system behaviour and its simulations using multi-tools
NASA Astrophysics Data System (ADS)
Vrzel, J.; Ludwig, R.; Vižintin, G.; Ogrinc, N.
2017-12-01
Our study of comprehensive hydrological system behaviour, where understanding the interfaces between groundwater and surface water is crucial, includes geochemical analyses for identification of groundwater sources (δ18O and δ2H) and estimation of groundwater mean residence time (3H, 3H/3He). The results of the geochemical analyses were compared with long-term data on precipitation, river discharge, hydraulic head, and groundwater pumping rate. The study area is the Ljubljansko polje in Slovenia, which belongs to the Sava River basin. The results show that Sava River water and local precipitation are the main groundwater sources in this alluvial aquifer, with high system sensitivity to both sources and residence times ranging from a day to a month. To simulate such a sensitive system, different tools describing the water cycle were coupled: percolation of local precipitation was simulated with WaSiM-ETH, while river and groundwater dynamics were simulated with MIKE 11 and FEFLOW, respectively. The WaSiM-ETH and MIKE 11 results were then employed as upper boundary conditions in the FEFLOW model. The models have high spatial and daily temporal resolution. A good agreement between geochemical data and modeling results was observed, with two main highlights: (1) the groundwater sources are consistent with hydraulic heads and the Sava River water level/precipitation; (2) the responsiveness of the aquifer to high water levels in the Sava River and to precipitation events is synchronous with the mean groundwater residence time. The study shows that linking the MIKE 11, FEFLOW, and WaSiM-ETH tools is an effective solution for precise groundwater flow simulation, since the tools are compatible and at the moment there is no routine approach for precise parallel simulation of groundwater and surface water dynamics. The project was financially supported by the EU 7th Research Project - GLOBAQUA.
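The 3H/3He apparent age used for mean residence time follows directly from tritium decay into helium-3. A minimal sketch, assuming both concentrations are expressed in the same units (e.g. TU) and using the standard 12.32-year tritium half-life:

```python
import math

def tritium_helium3_age(he3_trit, h3, half_life_yr=12.32):
    """Apparent 3H/3He groundwater age in years:
    t = (T_half / ln 2) * ln(1 + [3He_trit] / [3H]),
    where he3_trit is the tritiogenic helium-3 accumulated since recharge."""
    return half_life_yr / math.log(2.0) * math.log(1.0 + he3_trit / h3)
```

When the tritiogenic 3He equals the remaining 3H, exactly one half-life has elapsed; young alluvial groundwater, as in this aquifer, has ages far below that.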
Simulation and Prediction of Warm Season Drought in North America
NASA Technical Reports Server (NTRS)
Wang, Hailan; Chang, Yehui; Schubert, Siegfried D.; Koster, Randal D.
2018-01-01
This presentation describes our recent work on model simulation and prediction of warm season drought in North America. The emphasis is on the contribution from the leading modes of subseasonal atmospheric circulation variability, which are often present in the form of stationary Rossby waves. Here we take advantage of results from observations, reanalyses, and simulations and reforecasts performed with the NASA Goddard Earth Observing System (GEOS-5) atmospheric and coupled General Circulation Model (GCM). Our results show that stationary Rossby waves play a key role in Northern Hemisphere (NH) atmospheric circulation and surface meteorology variability on subseasonal timescales. In particular, such waves have been crucial to the development of recent short-term warm season heat waves and droughts over North America (e.g., the 1988, 1998, and 2012 summer droughts) and northern Eurasia (e.g., the 2003 summer heat wave over Europe and the 2010 summer drought and heat wave over Russia). Through an investigation of the physical processes by which these waves lead to the development of warm season drought in North America, we further find that these waves can serve as a potential source of drought predictability. To properly represent their effect and exploit this source of predictability, a model needs to correctly simulate the NH mean jet streams and be able to predict the sources of these waves. Given the NASA GEOS-5 AGCM's deficiencies in simulating the NH jet streams and tropical convection during boreal summer, an approach has been developed to artificially remove much of the model mean bias, which leads to considerable improvement in model simulation and prediction of stationary Rossby waves and drought development in North America.
Our study points to the need to identify key model biases that limit the simulation and prediction of regional climate extremes, and to diagnose the origin of these biases so as to inform modeling groups for model improvement.
Englehardt, James D; Ashbolt, Nicholas J; Loewenstine, Chad; Gadzinski, Erik R; Ayenu-Prah, Albert Y
2012-06-01
Recently, pathogen counts in drinking and source waters were shown theoretically to have the discrete Weibull (DW) or closely related discrete growth distribution (DGD). The result was demonstrated against nine short-term and three simulated long-term water quality datasets. These distributions are highly skewed, such that available datasets seldom represent the rare but important high-count events, making estimation of the long-term mean difficult. In the current work the methods, and the data record length, required to assess long-term mean microbial count were evaluated by simulation of representative DW and DGD waterborne pathogen count distributions. Microbial count data were also analyzed spectrally for correlation and cycles. In general, longer data records were required for more highly skewed distributions, conceptually associated with more highly treated water. In particular, 500-1,000 random samples were required for reliable assessment of the population mean ±10%, though 50-100 samples produced an estimate within one log (45%) below. A simple correlated first-order model was shown to produce count series with a 1/f signal, and such periodicity over many scales was shown in empirical microbial count data, for consideration in sampling. A tiered management strategy is recommended, including a plan for rapid response to unusual levels of routinely monitored water quality indicators.
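Sampling from the discrete Weibull distribution, and the slow convergence of short records toward the long-term mean, can be sketched as follows. The parameters are illustrative only, not fitted to the paper's datasets.

```python
import numpy as np

def sample_discrete_weibull(q, beta, size, rng):
    """Draw from the discrete Weibull distribution P(X >= x) = q**(x**beta),
    x = 0, 1, 2, ..., by flooring a continuous Weibull variate obtained
    from inverse-transform sampling."""
    u = rng.random(size)
    return np.floor((np.log(u) / np.log(q)) ** (1.0 / beta)).astype(np.int64)

rng = np.random.default_rng(1)
# beta < 1 gives a heavy-tailed, highly skewed count distribution
x = sample_discrete_weibull(q=0.5, beta=0.5, size=200_000, rng=rng)
mu_ref = x.mean()                 # large-sample stand-in for the true mean
for n_samples in (50, 500, 5000):
    est = x[:n_samples].mean()
    print(n_samples, abs(est - mu_ref) / mu_ref)
```

Because the tail carries much of the mean, short records that happen to miss a high-count event are biased low, which is the qualitative behavior the study quantifies.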
Dancing Bees Improve Colony Foraging Success as Long-Term Benefits Outweigh Short-Term Costs
Schürch, Roger; Grüter, Christoph
2014-01-01
Waggle dancing bees provide nestmates with spatial information about high quality resources. Surprisingly, attempts to quantify the benefits of this encoded spatial information have failed to find positive effects on colony foraging success under many ecological circumstances. Experimental designs have often involved measuring the foraging success of colonies that were repeatedly switched between oriented dances versus disoriented dances (i.e. communicating vectors versus not communicating vectors). However, if recruited bees continue to visit profitable food sources for more than one day, this procedure would lead to confounded results because of the long-term effects of successful recruitment events. Using agent-based simulations, we found that spatial information was beneficial in almost all ecological situations. Contrary to common belief, the benefits of recruitment increased with environmental stability because benefits can accumulate over time to outweigh the short-term costs of recruitment. Furthermore, we found that in simulations mimicking previous experiments, the benefits of communication were considerably underestimated (at low food density) or not detected at all (at medium and high densities). Our results suggest that the benefits of waggle dance communication are currently underestimated and that different experimental designs, which account for potential long-term benefits, are needed to measure empirically how spatial information affects colony foraging success. PMID:25141306
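The accumulation of long-term recruitment benefits can be caricatured with a toy agent-based model. Every rule and parameter below is hypothetical and far simpler than the simulations in the study; the sketch only shows why intake can diverge once recruits keep exploiting patches across days.

```python
import numpy as np

def colony_intake(communicate, n_bees=100, n_days=30, p_find=0.05,
                  patch_life=10, rng=None):
    """Toy foraging model. Each day a bee already on a patch collects one
    unit. An unemployed bee may be recruited by a dance to a nestmate's
    patch (if communication is on), or discovers a new patch independently
    with probability p_find. Patches expire after patch_life days."""
    if rng is None:
        rng = np.random.default_rng(0)
    expires = np.full(n_bees, -1)          # day each bee's patch expires
    intake = 0
    for day in range(n_days):
        on_patch = expires > day
        intake += int(on_patch.sum())
        donors = np.where(on_patch)[0]
        for b in np.where(~on_patch)[0]:
            if communicate and donors.size and rng.random() < 0.5:
                expires[b] = expires[donors[0]]   # follow a dance
            elif rng.random() < p_find:
                expires[b] = day + patch_life     # independent discovery
    return intake
```

Running the model with and without communication, the recruited bees keep returning to patches on subsequent days, so the benefit compounds; this is the carry-over effect that confounds day-by-day switching designs.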
NASA Astrophysics Data System (ADS)
Munoz-Arriola, F.; Torres-Alavez, J.; Mohamad Abadi, A.; Walko, R. L.
2014-12-01
Our goal is to investigate possible sources of predictability of hydrometeorological extreme events in the Northern High Plains (NHP). Hydrometeorological extreme events are considered the most costly natural phenomena. Water deficits and surpluses highlight how the water-climate interdependence becomes crucial in areas where a single activity, such as agriculture in the NHP, drives the economy. Although we recognize this interdependence and the regulatory role that human activities play, we still grapple with identifying which sources of predictability could be added to flood and drought forecasts. To identify the benefit of multi-scale climate modeling and the role of initial conditions in flood and drought predictability in the NHP, we use the Ocean Land Atmosphere Model (OLAM). OLAM's dynamic core uses a global geodesic grid with hexagonal (and variably refined) mesh cells and a finite volume discretization of the full compressible Navier-Stokes equations, with a cut-cell method for topography that reduces errors in gradient computation and anomalous vertical dispersion. Our hypothesis is that wet initial conditions will drive OLAM's precipitation simulations toward wetter conditions, affecting both flood and drought forecasts. To test this hypothesis we simulate precipitation during identified historical flood events followed by drought events in the NHP (i.e., the 2011-2012 years). We initialize OLAM with CFS data 1-10 days before a flooding event (as initial conditions) to explore (1) short-term, high-resolution simulations of flood events and (2) long-term, coarse-resolution simulations of drought events. While floods are assessed during refined-mesh simulations of at most 15 days, drought is evaluated over the following 15 months.
Simulated precipitation will be compared with the Sub-continental Observation Dataset, a gridded 1/16th degree resolution data obtained from climatological stations in Canada, US, and Mexico. This in-progress research will ultimately contribute to integrate OLAM and VIC models and improve predictability of extreme hydrometeorological events.
A time reversal algorithm in acoustic media with Dirac measure approximations
NASA Astrophysics Data System (ADS)
Bretin, Élie; Lucas, Carine; Privat, Yannick
2018-04-01
This article is devoted to the study of a photoacoustic tomography model, in which one considers the solution of the acoustic wave equation with a source term written as a separated-variables function of time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from measurements of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It relies on introducing an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elastic wave systems.
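As a hedged sketch (the notation here is ours, not taken verbatim from the article), the model described above can be written as a wave equation whose right-hand side separates in time and space:

```latex
\partial_{tt} p - c^2 \Delta p = f(x)\,\chi_\varepsilon(t),
\qquad \chi_\varepsilon \longrightarrow \delta'_{t=0} \quad \text{as } \varepsilon \to 0,
```

so that, formally, in the limit the pressure solves the homogeneous wave equation with initial data \(p(0)=f\), \(\partial_t p(0)=0\); an equivalent Cauchy formulation of this kind is what allows an explicit reconstruction formula before deconvolving the finite-duration illumination.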
Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames
NASA Astrophysics Data System (ADS)
Schlup, Jason; Blanquart, Guillaume
2018-03-01
The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.
Volume Averaging Study of the Capacitive Deionization Process in Homogeneous Porous Media
Gabitto, Jorge; Tsouris, Costas
2015-05-05
Ion storage in porous electrodes is important in applications such as energy storage by supercapacitors, water purification by capacitive deionization, extraction of energy from a salinity difference, and heavy ion purification. In this paper, a model is presented to simulate the charge process in homogeneous porous media comprising big pores. It is based on a theory for capacitive charging by ideally polarizable porous electrodes without faradaic reactions or specific adsorption of ions. A volume averaging technique is used to derive the averaged transport equations in the limit of thin electrical double layers. Transport between the electrolyte solution and the charged wall is described using the Gouy–Chapman–Stern model. The effective transport parameters for isotropic porous media are calculated by solving the corresponding closure problems. Finally, the source terms that appear in the averaged equations are calculated using numerical computations. An alternative way to deal with the source terms is proposed.
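The Gouy–Chapman–Stern boundary treatment mentioned above composes a Stern-layer capacitance and a diffuse-layer capacitance in series; a minimal sketch with illustrative parameter values (not the paper's):

```python
import numpy as np

# Physical constants
KB = 1.380649e-23        # Boltzmann constant [J/K]
E0 = 1.602176634e-19     # elementary charge [C]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
NA = 6.02214076e23       # Avogadro constant [1/mol]

def gcs_capacitance(psi_d, c_bulk, eps_r=78.5, T=298.15, d_stern=0.5e-9, z=1):
    """Area-specific capacitance [F/m^2] of the Gouy-Chapman-Stern model:
    a Stern layer in series with the diffuse (Gouy-Chapman) layer.
    psi_d: diffuse-layer potential [V]; c_bulk: bulk concentration [mol/L]."""
    eps = eps_r * EPS0
    n0 = c_bulk * NA * 1e3                       # ion number density [1/m^3]
    debye = np.sqrt(eps * KB * T / (2.0 * n0 * (z * E0) ** 2))
    c_stern = eps / d_stern                      # Helmholtz-like Stern layer
    c_diff = (eps / debye) * np.cosh(z * E0 * psi_d / (2.0 * KB * T))
    return 1.0 / (1.0 / c_stern + 1.0 / c_diff)  # series combination

c = gcs_capacitance(psi_d=0.05, c_bulk=0.1)
```

The series form means the total capacitance is always bounded above by the Stern contribution, which is why the Stern layer dominates at high electrolyte concentration.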
Galileo Attitude Determination: Experiences with a Rotating Star Scanner
NASA Technical Reports Server (NTRS)
Merken, L.; Singh, G.
1991-01-01
The Galileo experience with a rotating star scanner is discussed in terms of problems encountered in flight, solutions implemented, and lessons learned. An overview of the Galileo project and the attitude and articulation control subsystem is given and the star scanner hardware and relevant software algorithms are detailed. The star scanner is the sole source of inertial attitude reference for this spacecraft. Problem symptoms observed in flight are discussed in terms of effects on spacecraft performance and safety. Sources of these problems include contributions from flight software idiosyncrasies and inadequate validation of the ground procedures used to identify target stars for use by the autonomous on-board star identification algorithm. Problem fixes (some already implemented and some only proposed) are discussed. A general conclusion is drawn regarding the inherent difficulty of performing simulation tests to validate algorithms which are highly sensitive to external inputs of statistically 'rare' events.
Improved thermal lattice Boltzmann model for simulation of liquid-vapor phase change
NASA Astrophysics Data System (ADS)
Li, Qing; Zhou, P.; Yan, H. J.
2017-12-01
In this paper, an improved thermal lattice Boltzmann (LB) model is proposed for simulating liquid-vapor phase change, which is aimed at improving an existing thermal LB model for liquid-vapor phase change [S. Gong and P. Cheng, Int. J. Heat Mass Transfer 55, 4923 (2012), 10.1016/j.ijheatmasstransfer.2012.04.037]. First, we emphasize that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) is an inappropriate treatment for diffuse interface modeling of liquid-vapor phase change. Furthermore, the error terms ∂_t0(Tv) + ∇·(Tvv), which exist in the macroscopic temperature equation recovered from the previous model, are eliminated in the present model in a way that is consistent with the philosophy of the LB method. Moreover, the discrete effect of the source term is also eliminated in the present model. Numerical simulations are performed for droplet evaporation and bubble nucleation to validate the capability of the model for simulating liquid-vapor phase change. It is shown that the numerical results of the improved model agree well with those of a finite-difference scheme. Meanwhile, it is found that the replacement of ∇·(λ∇T)/(ρc_V) with ∇·(χ∇T) leads to significant numerical errors, and the error terms in the recovered macroscopic temperature equation also result in considerable errors.
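The distinction the abstract draws can be illustrated with a small 1-D finite-difference check (the temperature and property profiles below are illustrative, not from the paper): when ρc_V varies in space, ∇·(λ∇T)/(ρc_V) and ∇·(χ∇T) with χ = λ/(ρc_V) are genuinely different operators, and they coincide only when ρc_V is constant.

```python
import numpy as np

# 1-D grid and an illustrative temperature profile
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
T = np.sin(np.pi * x)

lam = 1.0 + 0.5 * x          # thermal conductivity lambda(x), illustrative
rho_cv = 1.0 + 2.0 * x**2    # volumetric heat capacity rho*c_V(x), illustrative
chi = lam / rho_cv           # thermal diffusivity chi(x)

def div_grad(c, T, dx):
    """Central-difference approximation of d/dx(c * dT/dx) at interior nodes."""
    ci = 0.5 * (c[1:] + c[:-1])        # coefficient averaged to cell interfaces
    flux = ci * np.diff(T) / dx        # c * dT/dx at interfaces
    return np.diff(flux) / dx          # divergence at interior nodes

# The two treatments differ when rho*c_V varies in space ...
term_a = div_grad(lam, T, dx) / rho_cv[1:-1]   # div(lam grad T) / (rho c_V)
term_b = div_grad(chi, T, dx)                  # div(chi grad T)

# ... but coincide when rho*c_V is constant
rho_const = np.full_like(x, 2.0)
term_a_const = div_grad(lam, T, dx) / rho_const[1:-1]
term_b_const = div_grad(lam / rho_const, T, dx)
```

The gap between the two forms comes from the extra ∇χ·∇T contribution hidden in ∇·(χ∇T), which is largest precisely in the diffuse interface region where properties vary steeply.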
Active tower damping and pitch balancing - design, simulation and field test
NASA Astrophysics Data System (ADS)
Duckwitz, Daniel; Shan, Martin
2014-12-01
The tower is one of the major components in wind turbines, with a contribution to the cost of energy of 8 to 12% [1]. In this overview the load situation of the tower is described in terms of sources of loads, load components, and fatigue contribution. Two load reduction control schemes are then described along with simulation and field test results. Pitch Balancing is described as a method to reduce aerodynamic asymmetry and the resulting fatigue loads. Active Tower Damping reduces tower oscillations by applying appropriate pitch angle changes. A field test was conducted on an Areva M5000 wind turbine.
2006-09-01
Lavoie, D. Kurts, SYNTHETIC ENVIRONMENTS AT THE ENTREPRISE LEVEL: OVERVIEW OF A GOVERNMENT OF CANADA (GOC), ACADEMIA and INDUSTRY DISTRIBUTED...vehicle (UAV) focused to locate the radiological source, and by comparing the performance of these assets in terms of various capability based...framework to analyze homeland security capabilities • Illustrate how a rapidly configured distributed simulation involving academia, industry and
Verification of Methods for Assessing the Sustainability of Monitored Natural Attenuation (MNA)
2013-01-01
sugars TOC total organic carbon TSR thermal source removal USACE U.S. Army Corps of Engineers USEPA U.S. Environmental Protection Agency USGS...the SZD function for long-term DNAPL dissolution simulations. However, the sustainability assessment was easily implemented using an alternative...neutral sugars [THNS]). Chapelle et al. (2009) suggested THAA and THNS as measures of the bioavailability of organic carbon based on an analysis of
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saxena, Vikrant, E-mail: vikrant.saxena@desy.de; Hamburg Center for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg; Ziaja, Beata, E-mail: ziaja@mail.desy.de
The irradiation of an atomic cluster with a femtosecond x-ray free-electron laser pulse results in a nanoplasma formation. This typically occurs within a few hundred femtoseconds. By this time the x-ray pulse is over, the direct photoinduced processes no longer contribute, and all created electrons within the nanoplasma are thermalized. The nanoplasma thus formed is a mixture of atoms, electrons, and ions of various charges. While expanding, it undergoes electron impact ionization and three-body recombination. Below we present a hydrodynamic model to describe the dynamics of such multi-component nanoplasmas. The model equations are derived by taking the moments of the corresponding Boltzmann kinetic equations. We include the equations obtained, together with the source terms due to electron impact ionization and three-body recombination, in our hydrodynamic solver. Model predictions for a test case, an expanding spherical Ar nanoplasma, are obtained. With this model, we complete the two-step approach to simulating x-ray created nanoplasmas, enabling computationally efficient simulations of their picosecond dynamics. Moreover, the hydrodynamic framework including collisional processes can easily be extended with other source terms and then applied to follow the relaxation of any finite non-isothermal multi-component nanoplasma with its components relaxed into local thermodynamic equilibrium.
A new lumped-parameter approach to simulating flow processes in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, R.W.; Hadgu, T.; Bodvarsson, G.S.
We have developed a new lumped-parameter dual-porosity approach to simulating unsaturated flow processes in fractured rocks. Fluid flow between the fracture network and the matrix blocks is described by a nonlinear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. This equation is a generalization of the Warren-Root equation, but unlike the Warren-Root equation, is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into a computational module, compatible with the TOUGH simulator, to serve as a source/sink term for fracture elements. The new approach achieves accuracy comparable to simulations in which the matrix blocks are discretized, but typically requires an order of magnitude less computational time.
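A minimal sketch of how such an interflow term can enter a simulator as a source/sink for fracture elements; the classical linear Warren-Root form is shown here for clarity, not the paper's generalized nonlinear expression, and all coefficient values are illustrative:

```python
def warren_root_interflow(p_fracture, p_matrix, sigma, k_m, mu):
    """Classical Warren-Root interflow rate per unit bulk volume [1/s]:
    q = sigma * (k_m / mu) * (P_f - P_m),
    with shape factor sigma [1/m^2], matrix permeability k_m [m^2],
    viscosity mu [Pa.s], and pressures in Pa. A generalized (nonlinear)
    variant would let the coefficient evolve so the rate stays accurate
    in both early- and late-time regimes."""
    return sigma * (k_m / mu) * (p_fracture - p_matrix)

# Fracture pressure above matrix pressure -> imbibition into the matrix
q = warren_root_interflow(p_fracture=2.0e5, p_matrix=1.0e5,
                          sigma=1.0e-4, k_m=1.0e-15, mu=1.0e-3)
```

In a TOUGH-style formulation this rate would simply be added to the mass-balance residual of each fracture element, with the opposite sign applied to the lumped matrix continuum.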
Calibration of semi-stochastic procedure for simulating high-frequency ground motions
Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert
2013-01-01
Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw < 7 to zero for Mw 8. Ground motions simulated with the updated parameterization exhibit significantly reduced distance attenuation bias and revised dispersion terms are more compatible with those from empirical models but remain lower at large distances (e.g., > 100 km).
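The magnitude-dependent site-to-site variability described above can be sketched as follows; the abstract states only the endpoint values (0.45 for Mw < 7, zero at Mw 8), so the linear taper between them is an assumption for illustration:

```python
import numpy as np

def site_sigma(mw, sigma_low=0.45, mw_low=7.0, mw_high=8.0):
    """Lognormal standard deviation of the random site-to-site
    Fourier-amplitude factor: 0.45 for Mw < 7, tapering (assumed
    linearly) to 0 at Mw 8."""
    if mw <= mw_low:
        return sigma_low
    if mw >= mw_high:
        return 0.0
    return sigma_low * (mw_high - mw) / (mw_high - mw_low)

def perturb_amplitudes(fas, mw, rng):
    """Apply one random site-to-site factor to a Fourier amplitude spectrum."""
    sigma = site_sigma(mw)
    return fas * np.exp(rng.normal(0.0, sigma))

rng = np.random.default_rng(0)
fas = np.ones(10)                       # placeholder spectrum
perturbed = perturb_amplitudes(fas, mw=6.5, rng=rng)
```

Because the factor is drawn once per site rather than per frequency, it adds between-site dispersion without altering the spectral shape, which is the behavior needed to raise intra-event variability toward empirical levels.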
Reagan, Matthew T; Moridis, George J; Keen, Noel D; Johnson, Jeffrey N
2015-04-01
Hydrocarbon production from unconventional resources and the use of reservoir stimulation techniques, such as hydraulic fracturing, have grown explosively over the last decade. However, concerns have arisen that reservoir stimulation creates significant environmental threats through the creation of permeable pathways connecting the stimulated reservoir with shallower freshwater aquifers, resulting in the contamination of potable groundwater by escaping hydrocarbons or other reservoir fluids. This study investigates, by numerical simulation, gas and water transport between a shallow tight-gas reservoir and a shallower overlying freshwater aquifer following hydraulic fracturing operations, if such a connecting pathway has been created. We focus on two general failure scenarios: (1) communication between the reservoir and aquifer via a connecting fracture or fault and (2) communication via a deteriorated, preexisting nearby well. We conclude that the key factors driving short-term transport of gas include high permeability for the connecting pathway and the overall volume of the connecting feature. Production from the reservoir is likely to mitigate release through reduction of available free gas and lowering of reservoir pressure, and not producing may increase the potential for release. We also find that hydrostatic tight-gas reservoirs are unlikely to act as a continuing source of migrating gas, as gas contained within the newly formed hydraulic fracture is the primary source for potential contamination. Such incidents of gas escape are likely to be limited in duration and scope for hydrostatic reservoirs. Reliable field and laboratory data must be acquired to constrain the factors and determine the likelihood of these outcomes. Key points: short-term leakage from fractured reservoirs requires high-permeability pathways; production strategy affects the likelihood and magnitude of gas release; gas release is likely short-term, without additional driving forces.
Part 2 of a Computational Study of a Drop-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
2004-01-01
This second of three reports on a computational study of a mixing layer laden with evaporating liquid drops presents the evaluation of Large Eddy Simulation (LES) models. The LES models were evaluated on an existing database that had been generated using Direct Numerical Simulation (DNS). The DNS method and the database are described in the first report of this series, Part 1 of a Computational Study of a Drop-Laden Mixing Layer (NPO-30719), NASA Tech Briefs, Vol. 28, No. 7 (July 2004), page 59. The LES equations, which are derived by applying a spatial filter to the DNS set, govern the evolution of the larger scales of the flow and can therefore be solved on a coarser grid. Consistent with the reduction in grid points, the DNS drops would be represented by fewer drops, called computational drops in the LES context. The LES equations contain terms that cannot be directly computed on the coarser grid and that must instead be modeled. Two types of models are necessary: (1) those for the filtered source terms representing the effects of drops on the filtered flow field and (2) those for the sub-grid scale (SGS) fluxes arising from filtering the convective terms in the DNS equations. All of the filtered-source-term models that were developed were found to overestimate the filtered source terms. For modeling the SGS fluxes, constant-coefficient Smagorinsky, gradient, and scale-similarity models were assessed and calibrated on the DNS database. The Smagorinsky model correlated poorly with the SGS fluxes, whereas the gradient and scale-similarity models were well correlated with the SGS quantities that they represented.
NASA Astrophysics Data System (ADS)
Kirstetter, G.; Popinet, S.; Fullana, J. M.; Lagrée, P. Y.; Josserand, C.
2015-12-01
The full resolution of the shallow-water equations for modeling flash floods may have a high computational cost, so the majority of flood simulation software used for flood forecasting relies on a simplification of this model: 1D approximations, diffusive or kinematic wave approximations, or exotic models using non-physical free parameters. These kinds of approximations save a great deal of computational time, but sacrifice simulation precision in an unquantified way. To reduce drastically the cost of such 2D simulations while quantifying the loss of precision, we propose a 2D shallow-water flow solver built with the open source code Basilisk1, which uses adaptive refinement on a quadtree grid. This solver uses a well-balanced central-upwind scheme, second order in time and space, and treats the friction and rain terms implicitly in a finite volume approach. We demonstrate the validity of our simulation on the flood of Tewkesbury (UK), which occurred in July 2007, as shown in Fig. 1. For this case, a systematic study of the impact of the chosen criterion for adaptive refinement is performed. The criterion with the best computational time / precision ratio is proposed. Finally, we present the power law relating computational time to maximum resolution, and we show that this law for our 2D simulation is close to that of a 1D simulation, thanks to the fractal dimension of the topography. [1] http://basilisk.fr/
Analysis of neutron and gamma-ray streaming along the maze of NRCAM thallium production target room.
Raisali, G; Hajiloo, N; Hamidi, S; Aslani, G
2006-08-01
The shielding performance of a thallium-203 production target room has been investigated in this work. Neutron and gamma-ray equivalent dose rates at various points of the maze are calculated by simulating the transport of streaming neutrons and photons using the Monte Carlo method. To determine the neutron and gamma-ray source intensities and their energy spectra, we applied the SRIM 2003 and ALICE91 computer codes to the Tl target and its Cu substrate for a 145 μA beam of 28.5 MeV protons. The MCNP/4C code has been applied with the neutron source term in mode n p to consider both prompt neutrons and secondary gamma-rays. The code was then applied for the prompt gamma-rays as the source term. The neutron-flux energy spectrum and equivalent dose rates for neutrons and gamma-rays at various positions in the maze have been calculated. The deviation between calculated and measured dose values along the maze is found to be less than 20%.
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to be able to analyse the physical factors that control earthquakes and ground-motion variabilities. Such analysis is particularly important to calibrate physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allows these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) that have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences that occurred on specific fault systems (e.g., located in central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
NASA Astrophysics Data System (ADS)
Xu, Zexuan; Hu, Bill X.; Davis, Hal; Cao, Jianhua
2015-05-01
A research version of the CFP (Conduit Flow Process) code, CFPv2, is applied with UMT3D to simulate long-term (1966-2018) nitrate-N contamination transport processes in the Woodville Karst Plain (WKP), northern Florida, where karst conduit networks are well developed. Groundwater flow in the WKP limestone porous matrix is simulated using Darcy's law, and non-laminar flow within conduits is described by the Darcy-Weisbach equation. Nitrate-N transport in conduits and advective exchanges of groundwater and nitrate-N between conduits and the limestone matrix are calculated by CFPv2 and UMT3D, instead of MODFLOW and MT3DMS, since Reynolds numbers for flows in conduits exceed the criterion for laminar flow. The developed numerical model is calibrated against field observations and then applied to simulate nitrate-N transport in the WKP. The numerical simulations support the hypothesis that two sprayfields near the City of Tallahassee and septic tanks in the rural area are the major nitrate-N point sources within the WKP. High nitrate-N concentrations occur near Lost Creek Sink and the conduits of Wakulla Spring and Spring Creek Springs, where the aquifer discharges groundwater. Conduit networks control nitrate-N transport and regional contaminant distributions in the WKP, as nitrate-N is transported through conduits rapidly and spread over large areas.
Harris, Peter; Philip, Rachel; Robinson, Stephen; Wang, Lian
2016-03-22
Monitoring ocean acoustic noise has been the subject of considerable recent study, motivated by the desire to assess the impact of anthropogenic noise on marine life. A combination of measuring ocean sound using an acoustic sensor network and modelling sources of sound and sound propagation has been proposed as an approach to estimating the acoustic noise map within a region of interest. However, strategies for developing a monitoring network are not well established. In this paper, considerations for designing a network are investigated using a simulated scenario based on the measurement of sound from ships in a shipping lane. Using models for the sources of the sound and for sound propagation, a noise map is calculated and measurements of the noise map by a sensor network within the region of interest are simulated. A compressive sensing algorithm, which exploits the sparsity of the representation of the noise map in terms of the sources, is used to estimate the locations and levels of the sources and thence the entire noise map within the region of interest. It is shown that although the spatial resolution to which the sound sources can be identified is generally limited, estimates of aggregated measures of the noise map can be obtained that are more reliable compared with those provided by other approaches.
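The sparse-recovery step described above can be sketched with a basic iterative soft-thresholding (ISTA) solver for the l1-regularized least-squares problem; the propagation matrix below is random for illustration, standing in for a real acoustic propagation model from candidate source locations to sensors:

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative
    soft-thresholding; x holds candidate source levels, b measurements."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - b)          # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))                 # sensors x candidate locations
x_true = np.zeros(50)
x_true[[7, 31]] = [3.0, 2.0]                      # two active sources
b = A @ x_true                                    # simulated noise-free data
x_hat = ista(A, b, lam=0.1)
```

With far fewer sensors (20) than candidate locations (50), recovery hinges on the sparsity of the source representation, which is exactly the property the compressive sensing approach in the paper exploits.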
NASA Astrophysics Data System (ADS)
Chino, Masamichi; Terada, Hiroaki; Nagai, Haruyasu; Katata, Genki; Mikami, Satoshi; Torii, Tatsuo; Saito, Kimiaki; Nishizawa, Yukiyasu
2016-08-01
The Fukushima Daiichi nuclear power reactor units that generated large amounts of airborne discharges during the period of March 12-21, 2011 were identified individually by analyzing the combination of measured 134Cs/137Cs depositions on ground surfaces and atmospheric transport and deposition simulations. Because the values of 134Cs/137Cs are different in reactor units owing to fuel burnup differences, the 134Cs/137Cs ratio measured in the environment was used to determine which reactor unit ultimately contaminated a specific area. Atmospheric dispersion model simulations were used for predicting specific areas contaminated by each dominant release. Finally, by comparing the results from both sources, the specific reactor units that yielded the most dominant atmospheric release quantities could be determined. The major source reactor units were Unit 1 in the afternoon of March 12, 2011, and Unit 2 during the period from the late night of March 14 to the morning of March 15, 2011. These results corresponded to those assumed in our previous source term estimation studies. Furthermore, new findings suggested that the major source reactors from the evening of March 15, 2011 were Units 2 and 3 and that the dominant source reactor on March 20, 2011 temporally changed from Unit 3 to Unit 2.
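The attribution logic can be sketched as a nearest-ratio classifier; the unit-specific 134Cs/137Cs ratios below are placeholders for illustration, not the actual burnup-derived values used in the study:

```python
# Hypothetical burnup-derived 134Cs/137Cs ratios per reactor unit (illustrative)
unit_ratios = {"Unit 1": 0.94, "Unit 2": 1.08, "Unit 3": 1.05}

def attribute_source(measured_ratio, unit_ratios):
    """Assign a measured deposition ratio (assumed decay-corrected to a
    common reference date) to the unit with the closest characteristic
    ratio."""
    return min(unit_ratios, key=lambda u: abs(unit_ratios[u] - measured_ratio))

source = attribute_source(1.07, unit_ratios)
```

In the study this ratio evidence is cross-checked against dispersion simulations of each release period, so an area is attributed to a unit only when both lines of evidence agree.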
A two-dimensional analytical model of vapor intrusion involving vertical heterogeneity.
Yao, Yijun; Verginelli, Iason; Suuberg, Eric M
2017-05-01
In this work, we present an analytical chlorinated vapor intrusion (CVI) model that can estimate source-to-indoor-air concentration attenuation by simulating the two-dimensional (2-D) vapor concentration profile in vertically heterogeneous soils overlying a homogeneous vapor source. The analytical solution describing the 2-D soil gas transport was obtained by applying a modified Schwarz-Christoffel mapping method. A partial field validation showed that the developed model provides results (especially in terms of indoor emission rates) in line with the measured data from a case involving a building overlying a layered soil. In further testing, it was found that the new analytical model can very closely replicate the results of three-dimensional (3-D) numerical models at steady state in scenarios involving layered soils overlying homogeneous groundwater sources. By contrast, by adopting a two-layer approach (capillary fringe and vadose zone) as employed in the EPA implementation of the Johnson and Ettinger model, the spatially and temporally averaged indoor concentrations in the case of groundwater sources can exceed those estimated by the numerical model by up to two orders of magnitude. In short, the model proposed in this work can represent an easy-to-use tool that can simulate the subsurface soil gas concentration in layered soils overlying a homogeneous vapor source while keeping the simplicity of an analytical approach that requires much less computational effort.
Mirus, Benjamin B.; Nimmo, J.R.
2013-01-01
The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.
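The source-responsive flux can be sketched as the product of a macropore facial-area density, a film thickness, a film velocity, and an input-scaled active-area fraction; the functional form of the scaling and every numeric value below are illustrative assumptions, not the calibrated model from the case studies:

```python
def preferential_flux(input_rate, m_density, film_thickness, v_film, input_max):
    """Source-responsive preferential flux [m/s], sketched as
    q = f_active * M * L * V, where the active-area fraction f_active
    is scaled (here, linearly with a cap) by water input at the land
    surface. input_rate, input_max: surface water input [m/s];
    m_density M: macropore facial area density [1/m];
    film_thickness L [m]; v_film V: characteristic film velocity [m/s]."""
    f_active = min(input_rate / input_max, 1.0) if input_max > 0 else 0.0
    return f_active * m_density * film_thickness * v_film

q = preferential_flux(input_rate=1e-6, m_density=50.0, film_thickness=4e-6,
                      v_film=3e-5, input_max=2e-6)
```

Making f_active respond to the surface input, rather than to local water content, is what lets the formulation reproduce water table rises that diffuse flow theory misses.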
Mach wave properties in the presence of source and medium heterogeneity
NASA Astrophysics Data System (ADS)
Vyas, J. C.; Mai, P. M.; Galis, M.; Dunham, Eric M.; Imperatori, W.
2018-06-01
We investigate Mach wave coherence for kinematic supershear ruptures with spatially heterogeneous source parameters, embedded in 3D scattering media. We assess Mach wave coherence considering: 1) source heterogeneities in terms of variations in slip, rise time and rupture speed; 2) small-scale heterogeneities in Earth structure, parameterized from combinations of three correlation lengths and two standard deviations (assuming a von Karman power spectral density with fixed Hurst exponent); and 3) joint effects of source and medium heterogeneities. Ground-motion simulations are conducted using a generalized finite-difference method, choosing a parameterization such that the highest resolved frequency is ~5 Hz. We find that Mach wave coherence is slightly diminished at near-fault distances (< 10 km) due to spatially variable slip and rise time; beyond this distance the Mach wave coherence is more strongly reduced by wavefield scattering due to small-scale heterogeneities in Earth structure. Based on our numerical simulations and theoretical considerations we demonstrate that the standard deviation of medium heterogeneities controls the wavefield scattering, rather than the correlation length. In addition, we find that peak ground accelerations in the case of combined source and medium heterogeneities are consistent with empirical ground motion prediction equations for all distances, suggesting that in nature ground shaking amplitudes for supershear ruptures may not be elevated due to complexities in the rupture process and seismic wave-scattering.
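A 1-D analogue of the small-scale medium heterogeneities described above can be generated by spectral filtering of white noise with a von Karman power spectral density; the correlation length, standard deviation, and Hurst exponent below are illustrative, not the study's parameter combinations:

```python
import numpy as np

def von_karman_field(n, dx, corr_len, sigma, hurst, seed=0):
    """Random 1-D medium perturbation with von Karman PSD
    P(k) ~ 1 / (1 + k^2 a^2)^(H + 1/2), rescaled to standard
    deviation sigma (e.g. a fractional velocity perturbation)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n, d=dx) * 2.0 * np.pi          # angular wavenumbers
    psd = (1.0 + (k * corr_len) ** 2) ** (-(hurst + 0.5))
    spec = np.sqrt(psd) * (rng.standard_normal(k.size)
                           + 1j * rng.standard_normal(k.size))
    field = np.fft.irfft(spec, n=n)                     # back to space domain
    field -= field.mean()
    return field * (sigma / field.std())                # rescale to target std

dv = von_karman_field(n=1024, dx=100.0, corr_len=2000.0, sigma=0.05, hurst=0.2)
```

Because the field is rescaled to a prescribed standard deviation independently of the correlation length, the two parameters can be varied separately, mirroring the paper's finding that the standard deviation, not the correlation length, controls the scattering strength.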
Chino, Masamichi; Terada, Hiroaki; Nagai, Haruyasu; Katata, Genki; Mikami, Satoshi; Torii, Tatsuo; Saito, Kimiaki; Nishizawa, Yukiyasu
2016-01-01
The Fukushima Daiichi nuclear power reactor units that generated large amounts of airborne discharges during the period of March 12–21, 2011, were identified individually by analyzing the combination of measured 134Cs/137Cs depositions on ground surfaces and atmospheric transport and deposition simulations. Because the values of 134Cs/137Cs differ between reactor units owing to fuel burnup differences, the 134Cs/137Cs ratio measured in the environment was used to determine which reactor unit ultimately contaminated a specific area. Atmospheric dispersion model simulations were used for predicting specific areas contaminated by each dominant release. Finally, by comparing the results from both sources, the specific reactor units that yielded the most dominant atmospheric release quantities could be determined. The major source reactor units were Unit 1 in the afternoon of March 12, 2011, and Unit 2 during the period from the late night of March 14 to the morning of March 15, 2011. These results corresponded to those assumed in our previous source term estimation studies. Furthermore, new findings suggested that the major source reactors from the evening of March 15, 2011 were Units 2 and 3 and that the dominant source reactor on March 20, 2011 temporally changed from Unit 3 to Unit 2. PMID:27546490
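The ratio-matching step can be sketched as a nearest-ratio classifier; the decay-corrected 134Cs/137Cs ratios below are hypothetical placeholders, not the actual burnup-derived values from the study:

```python
# Hypothetical decay-corrected 134Cs/137Cs activity ratios per reactor unit
# (illustrative only; the real values come from fuel-burnup calculations).
unit_ratios = {"Unit 1": 0.94, "Unit 2": 1.08, "Unit 3": 1.04}

def attribute_unit(measured_ratio, ratios=unit_ratios):
    """Assign a deposition measurement to the reactor unit whose
    characteristic 134Cs/137Cs ratio is closest to the measured one."""
    return min(ratios, key=lambda unit: abs(ratios[unit] - measured_ratio))

print(attribute_unit(1.07))   # closest to the Unit 2 ratio in this sketch
```

In the study this attribution is cross-checked against dispersion-model footprints before a release is assigned to a unit; the classifier alone only captures the ratio-matching idea.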
NASA Astrophysics Data System (ADS)
Rengarajan, Rajagopalan
Moderate resolution remote sensing data offer the potential to monitor long- and short-term trends in the condition of the Earth's resources at finer spatial scales and over longer time periods. While improved calibration (radiometric and geometric), free access (Landsat, Sentinel, CBERS), and higher level products in reflectance units have made it easier for the science community to derive biophysical parameters from these remotely sensed data, a number of issues still affect the analysis of multi-temporal datasets. These are primarily due to sources that are inherent in the process of imaging from single or multiple sensors. Some of these undesired or uncompensated sources of variation include variation in the view angles, illumination angles, atmospheric effects, and sensor effects such as Relative Spectral Response (RSR) variation between different sensors. The complex interaction of these sources of variation would make their study extremely difficult if not impossible with real data, and therefore, a simulated analysis approach is used in this study. A synthetic forest canopy is produced using the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model and its measured BRDFs are modeled using the RossLi canopy BRDF model. The simulated BRDF matches the real data to within 2% of the reflectance in the red and the NIR spectral bands studied. The BRDF modeling process is extended to model and characterize the defoliation of a forest, which is used in factor sensitivity studies to estimate the effect of each factor for varying environment and sensor conditions. Finally, a factorial experiment is designed to understand the significance of the sources of variation, and regression-based analyses are performed to understand the relative importance of the factors.
The design of experiment and the sensitivity analysis conclude that the atmospheric attenuation and variations due to the illumination angles are the dominant sources impacting the at-sensor radiance.
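The factorial-design logic can be illustrated with a toy two-level (2^3) screening experiment; the response function and factor magnitudes are hypothetical, chosen only so that atmosphere and illumination dominate, as the study concludes:

```python
import numpy as np
from itertools import product

def radiance(atm, illum, rsr):
    # Hypothetical coded-variable (-1/+1) response: atmosphere and illumination
    # dominate, RSR is minor, plus a small atmosphere x illumination interaction.
    return 100.0 - 18.0 * atm - 9.0 * illum - 1.5 * rsr + 0.5 * atm * illum

runs = list(product((-1, 1), repeat=3))        # full 2^3 factorial design
y = np.array([radiance(*r) for r in runs])     # simulated at-sensor radiance
X = np.array(runs, dtype=float)
# Main effect of factor i = mean(response at +1) - mean(response at -1).
effects = {name: float(X[:, i] @ y) / (len(runs) / 2)
           for i, name in enumerate(("atmosphere", "illumination", "RSR"))}
print(effects)
```

With a full factorial design the interaction terms average out of the main-effect estimates, which is why the effects recovered here are exactly twice the coded-variable coefficients.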
Liu, Xiaomang; Yang, Tiantian; Hsu, Koulin; ...
2017-01-10
On the Tibetan Plateau, the limited ground-based rainfall information owing to a harsh environment has brought great challenges to hydrological studies. Satellite-based rainfall products, which allow for better coverage than both the radar network and rain gauges on the Tibetan Plateau, can be suitable alternatives for studies investigating hydrological processes and climate change. In this study, a newly developed daily satellite-based precipitation product, termed Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks - Climate Data Record (PERSIANN-CDR), is used as input for a hydrologic model to simulate streamflow in the upper Yellow and Yangtze River basins on the Tibetan Plateau. The results show that the simulated streamflows using PERSIANN-CDR precipitation and the Global Land Data Assimilation System (GLDAS) precipitation are closer to observation than that using limited gauge-based precipitation interpolation in the upper Yangtze River basin. The simulated streamflow using gauge-based precipitation is higher than the observed streamflow during the wet season. In the upper Yellow River basin, gauge-based precipitation, GLDAS precipitation, and PERSIANN-CDR precipitation show similarly good performance in simulating streamflow. Finally, the evaluation of streamflow simulation capability in this study indicates that the PERSIANN-CDR rainfall product has good potential to be a reliable dataset and an alternative information source to a limited gauge network for conducting long-term hydrological and climate studies on the Tibetan Plateau.
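The abstract does not state which goodness-of-fit metric was used; a common choice for judging simulated against observed streamflow is the Nash-Sutcliffe efficiency, sketched here with made-up flow values:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the simulation
    is no better than the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = np.array([120.0, 340.0, 510.0, 280.0, 150.0])  # observed flow, m³/s
sim = np.array([135.0, 320.0, 495.0, 300.0, 140.0])  # e.g. satellite-driven run
print(round(nse(obs, sim), 3))
```

Comparing NSE values between the gauge-driven, GLDAS-driven, and PERSIANN-CDR-driven runs against the same observations is one way the relative rankings reported in the abstract could be quantified.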
NASA Astrophysics Data System (ADS)
Wang, XiaoLiang; Li, JiaChun
2017-12-01
A new solver based on the high-resolution scheme with novel treatments of source terms and interface capture for the Savage-Hutter model is developed to simulate granular avalanche flows. The capability to simulate flow spread and deposit processes is verified through indoor experiments of a two-dimensional granular avalanche. Parameter studies show that reduction in bed friction enhances runout efficiency, and that lower earth pressure restraints enlarge the deposit spread. The April 9, 2000, Yigong avalanche in Tibet, China, is simulated as a case study by this new solver. The predicted results, including evolution process, deposit spread, and hazard impacts, generally agree with site observations. It is concluded that the new solver for the Savage-Hutter equation provides a comprehensive software platform for granular avalanche simulation at both experimental and field scales. In particular, the solver can be a valuable tool for providing necessary information for hazard forecasts, disaster mitigation, and countermeasure decisions in mountainous areas.
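The reported sensitivity to bed friction can be illustrated with a point-mass sliding-block estimate, far cruder than the Savage-Hutter solver and with invented slope parameters, which reproduces the trend that lower friction enhances runout:

```python
import math

def runout_distance(mu, slope_deg=30.0, slope_len=100.0, g=9.81):
    """Point-mass avalanche sketch: accelerate down a slope against Coulomb
    friction mu, then decelerate on the flat; return the runout on the flat."""
    theta = math.radians(slope_deg)
    a_down = g * (math.sin(theta) - mu * math.cos(theta))  # net accel on slope
    if a_down <= 0.0:
        return 0.0                       # friction too high: no motion
    v_sq = 2.0 * a_down * slope_len      # speed^2 at the slope toe
    return v_sq / (2.0 * g * mu)         # flat ground: deceleration = g * mu

for mu in (0.2, 0.3, 0.4):
    print(mu, round(runout_distance(mu), 1))
```

The monotone decrease of runout with increasing friction coefficient mirrors the parameter-study result in the abstract; the depth-averaged Savage-Hutter equations additionally capture spreading and deposit shape, which this sketch cannot.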
Simulation verification techniques study: Simulation self test hardware design and techniques report
NASA Technical Reports Server (NTRS)
1974-01-01
The final results of the hardware verification task are presented. The basic objectives of the various subtasks are reviewed, along with the ground rules under which the overall task was conducted and which impacted the approach taken in deriving techniques for hardware self test. The results of the first subtask and the definition of simulation hardware are presented. The hardware definition is based primarily on a brief review of the simulator configurations anticipated for the shuttle training program. The results of the survey of current self test techniques are presented. The data sources that were considered in the search for current techniques are reviewed, and the results of the survey are presented in terms of the specific types of tests that are of interest for training simulator applications. Specifically, these types of tests are readiness tests, fault isolation tests, and incipient fault detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.
Samrat, Nahidul Hoque; Bin Ahmad, Norhafizan; Choudhury, Imtiaz Ahmed; Bin Taha, Zahari
2014-01-01
Today, the whole world faces a great challenge to overcome the environmental problems related to global energy production. Most of the islands throughout the world depend on fossil fuel importation for energy production. Recent development and research on green energy sources can assure a sustainable power supply for the islands. However, their unpredictable nature and high dependency on weather conditions are the main limitations of renewable energy sources. To overcome this drawback, different renewable sources and converters need to be integrated with each other. This paper proposes a standalone hybrid photovoltaic- (PV-) wave energy conversion system with energy storage. In the proposed hybrid system, control of the bidirectional buck-boost DC-DC converter (BBDC) is used to maintain a constant dc-link voltage. It also accumulates the excess hybrid power in the battery bank and supplies this power to the system load during shortages of hybrid power. A three-phase complex vector control scheme voltage source inverter (VSI) is used to control the load side voltage in terms of frequency and voltage amplitude. Based on the simulation results obtained from Matlab/Simulink, it has been found that the overall hybrid framework is capable of working under variable weather and load conditions.
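A minimal discrete-time sketch of the dc-link regulation idea (illustrative capacitance, gains, and power levels; not the paper's Simulink model): a PI loop commands battery power through the bidirectional converter so the capacitor voltage holds its reference while hybrid generation steps down:

```python
C, V_ref, dt = 0.01, 400.0, 1e-3      # capacitance (F), reference (V), step (s)
kp, ki = 50.0, 2000.0                 # PI gains (assumed, not from the paper)
V, integ = 400.0, 0.0
for step in range(2000):              # 2 s of simulated time
    p_gen = 900.0 if step < 1000 else 300.0   # hybrid PV+wave power steps down
    p_load = 600.0                            # constant load demand (W)
    err = V_ref - V
    integ += err * dt
    p_batt = kp * err + ki * integ            # battery power; > 0 = discharge
    # Capacitor energy balance: C*V*dV/dt = net power into the dc link.
    V += (p_gen + p_batt - p_load) / (C * V) * dt
print(round(V, 1))
```

After the generation step the integrator settles at the power deficit (battery discharging about 300 W in this sketch) and the dc-link voltage returns to its reference, which is the behavior the BBDC control is described as providing.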
Liao, Yi-Shan; Zhuo, Mu-Ning; Li, Ding-Qiang; Guo, Tai-Long
2013-08-01
In the Pearl River Delta region, urban rivers have been seriously polluted, and the input of non-point source pollution materials, such as chemical oxygen demand (COD), into rivers cannot be neglected. During 2009-2010, water quality at eight different catchments in the Fenjiang River of Foshan city was monitored, and the COD loads for eight rivulet sewages were calculated under different rainfall conditions. The rainfall and land-use type played important roles in the COD loading, with rainfall having a greater influence than land-use type. Consequently, a COD loading formula was constructed, defined as a function of runoff and land-use type derived from the SCS model and a land-use map. COD loading could be evaluated and predicted with the constructed formula. The mean simulation accuracy for a single rainfall event was 75.51%. Long-term simulation accuracy was better than that for a single rainfall event. In 2009, the estimated COD loading and its loading intensity were 8 053 t and 339 kg x (hm2 x a)(-1), respectively, and industrial land was regarded as the main source area of COD pollution. Severe non-point source pollution such as COD in the Fenjiang River warrants greater attention in the future.
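The runoff term of such a formula typically comes from the SCS curve-number relation; this sketch uses hypothetical curve numbers per land-use class, since the calibrated values are not given in the abstract:

```python
def scs_runoff(p_mm, cn):
    """SCS curve-number direct runoff (mm) for an event rainfall of p_mm,
    using the standard initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = 0.2 * s                      # initial abstraction, mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Hypothetical curve numbers by land-use class (illustrative values only).
landuse_cn = {"industrial": 91, "residential": 85, "green space": 61}
for landuse, cn in landuse_cn.items():
    print(landuse, round(scs_runoff(50.0, cn), 1))
```

Multiplying each class's runoff by its area and an event-mean COD concentration, then summing over the land-use map, would give an event COD load in the spirit of the constructed formula.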
NASA Astrophysics Data System (ADS)
Ferrero, Pietro
The main objective of this work is to investigate the effects of the coupling between the turbulent fluctuations and the highly non-linear chemical source terms in the context of large-eddy simulations of turbulent reacting flows. To this aim we implement the filtered mass density function (FMDF) methodology on an existing finite volume (FV) fluid dynamics solver. The FMDF provides additional statistical sub-grid scale (SGS) information about the thermochemical state of the flow - species mass fractions and enthalpy - which would not be available otherwise. The core of the methodology involves solving a transport equation for the FMDF by means of a stochastic, grid-free, Lagrangian particle procedure. Any moments of the distribution can be obtained by taking ensemble averages of the particles. The main advantage of this strategy is that the chemical source terms appear in closed form, so that the effects of turbulent fluctuations on these terms are already accounted for and do not need to be modeled. We first validate and demonstrate the consistency of our implementation by comparing the results of the hybrid FV/FMDF procedure against model-free LES for temporally developing, non-reacting mixing layers. Consistency requires that, for non-reacting cases, the two solvers should yield identical solutions. We investigate the sensitivity of the FMDF solution to the most relevant numerical parameters, such as the number of particles per cell and the size of the ensemble domain. Next, we apply the FMDF modeling strategy to the simulation of chemically reacting, two- and three-dimensional temporally developing mixing layers and compare the results against both DNS and model-free LES. We clearly show that, when the turbulence/chemistry interaction is accounted for with the FMDF methodology, the results are in much better agreement with the DNS data.
Finally, we perform two- and three-dimensional simulations of high Reynolds number, spatially developing, chemically reacting mixing layers, with the intent of reproducing a set of experimental results obtained at the California Institute of Technology. The mean temperature rise calculated by the hybrid FV/FMDF solver, which is associated with the amount of product formed, lies very close to the experimental profile. Conversely, when the effects of turbulence/chemistry coupling are ignored, the simulations clearly overpredict the amount of product that is formed.
NASA Technical Reports Server (NTRS)
Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang
2013-01-01
The goal of this paper is to relate numerical dissipations that are inherited in high order shock-capturing schemes with the onset of wrong propagation speed of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit on shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous part or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with a wrong propagation speed of discontinuities or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations.
The present investigation for three very different stiff system cases confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on the reported difficulties in numerical combustion and problems with stiff nonlinear (homogeneous) source terms and discontinuities in general.
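The wrong-speed pathology is easy to reproduce on a scalar model problem of the LeVeque & Yee (1990) type: a first-order upwind advection step followed by a pointwise, infinitely stiff source that snaps each cell to the nearest stable equilibrium. The grid and CFL number below are invented for this sketch:

```python
import numpy as np

# Model problem u_t + u_x = -mu * u * (u - 1) * (u - 0.5) with a step initial
# condition and very stiff mu. A pointwise source evaluation relocates the
# smeared front to a cell boundary every step, so the front travels at a
# spurious, grid-dependent speed instead of the true speed 1.
nx, cfl, nsteps = 200, 0.8, 100
u = np.where(np.arange(nx) < 20, 1.0, 0.0)     # step at cell 20
for _ in range(nsteps):
    u[1:] = u[1:] - cfl * (u[1:] - u[:-1])     # first-order upwind, speed 1
    u = np.where(u > 0.5, 1.0, 0.0)            # infinitely stiff source step
front = int(np.argmin(u > 0.5))                # first cell still at 0
true_front = 20 + int(cfl * nsteps)            # where the exact front should be
print(front, true_front)                       # spurious vs correct position
```

With CFL = 0.8 the smeared value in the front cell exceeds 0.5 every step, so the source snaps it to 1 and the front advances one cell per step, i.e. at speed 1/0.8 = 1.25 instead of 1; for CFL below 0.5 the same scheme freezes the front entirely. Both outcomes are stable, non-oscillatory, and wrong, which is the counter-intuitive behavior the abstract describes.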
Multi-sources data fusion framework for remote triage prioritization in telehealth.
Salman, O H; Rasid, M F A; Saripan, M I; Subramaniam, S K
2014-09-01
The healthcare industry is streamlining processes to offer more timely and effective services to all patients. Computerized software algorithms and smart devices can streamline the relation between users and doctors by providing more services inside healthcare telemonitoring systems. This paper proposes a multi-source framework to support advanced healthcare applications. The proposed framework, named Multi Sources Healthcare Architecture (MSHA), considers multiple sources: sensors (ECG, SpO2 and blood pressure) and text-based inputs from wireless and pervasive devices of a Wireless Body Area Network. The proposed framework is used to improve healthcare scalability efficiency by enhancing the remote triaging and remote prioritization processes for patients. The proposed framework is also used to provide intelligent services over telemonitoring healthcare systems by using a data fusion method and a prioritization technique. As the telemonitoring system consists of three tiers (sensors/sources, base station and server), the simulation of the MSHA algorithm in the base station is demonstrated in this paper. Achieving a high level of accuracy in remotely triaging and prioritizing patients is our main goal. Meanwhile, the role of multi-source data fusion in telemonitoring healthcare services systems is demonstrated. In addition, we discuss how the proposed framework can be applied in a healthcare telemonitoring scenario. Simulation results, for different symptoms related to different emergency levels of chronic heart diseases, demonstrate the superiority of our algorithm compared with conventional algorithms in terms of classifying and prioritizing patients remotely.
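A rule-based sketch of the triage idea follows; the vital-sign thresholds and emergency levels are assumptions for illustration, not MSHA's actual fusion and prioritization rules:

```python
# Rule-based triage sketch; thresholds and levels are assumed for illustration.
def triage(heart_rate, spo2, systolic_bp):
    """Fuse three vital signs into an emergency level (higher = more urgent)."""
    score = 0
    if heart_rate > 120 or heart_rate < 45:
        score += 2
    elif heart_rate > 100 or heart_rate < 55:
        score += 1
    if spo2 < 90:
        score += 2
    elif spo2 < 94:
        score += 1
    if systolic_bp < 90 or systolic_bp > 180:
        score += 2
    if score >= 4:
        return "critical"
    if score >= 2:
        return "urgent"
    if score >= 1:
        return "routine"
    return "normal"

for pid, vitals in {"A": (130, 88, 85), "B": (95, 96, 125),
                    "C": (105, 93, 130)}.items():
    print(pid, triage(*vitals))
```

In the proposed framework the numeric score (rather than the label) would drive the ordering of patients in the base-station queue, with text-based inputs fused in as additional score terms.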
Seyler, C. E.; Martin, M. R.
2011-01-14
In this study, it is shown that the two-fluid model under a generalized Ohm’s law formulation and the resistive magnetohydrodynamics (MHD) can both be described as relaxation systems. In the relaxation model, the under-resolved stiff source terms constrain the dynamics of a set of hyperbolic equations to give the correct asymptotic solution. When applied to the collisional two-fluid model, the relaxation of fast time scales associated with displacement current and finite electron mass allows for a natural transition from a system where Ohm’s law determines the current density to a system where Ohm’s law determines the electric field. This result is used to derive novel algorithms, which allow for multiscale simulation of low and high frequency extended-MHD physics. This relaxation formulation offers an efficient way to implicitly advance the Hall term and naturally simulate a plasma-vacuum interface without invoking phenomenological models. The relaxation model is implemented as an extended-MHD code, which is used to analyze pulsed power loads such as wire arrays and ablating foils. Two-dimensional simulations of pulsed power loads are compared for extended-MHD and MHD. For these simulations, it is also shown that the relaxation model properly recovers the resistive-MHD limit.
NASA Astrophysics Data System (ADS)
Grogan, Brandon R.; Henkel, James J.; Johnson, Jeffrey O.; Mihalczo, John T.; Miller, Thomas M.; Patton, Bruce W.
2013-12-01
The detonation of a terrorist nuclear weapon in the United States would result in the massive loss of life and grave economic damage. Even if a device was not detonated, its known or suspected presence aboard a cargo container ship in a U.S. port would have major economic and political consequences. One possible means to prevent this threat would be to board a ship at sea and search for the device before it reaches port. The scenario considered here involves a small Coast Guard team with strong intelligence boarding a container ship to search for a nuclear device. Using active interrogation, the team would nonintrusively search a block of shipping containers to locate the fissile material. Potential interrogation source and detector technologies for the team are discussed. The methodology of the scan is presented along with a technique for calculating the required interrogation source strength using computer simulations. MCNPX was used to construct a computer model of a container ship, and several search scenarios were simulated. The results of the simulations are presented in terms of the source strength required for each interrogation scenario. Validation measurements were performed in order to scale these simulation results to expected performance. Interrogations through the short (2.4 m) axis of a standardized shipping container appear to be feasible given the entire range of container loadings tested. Interrogations through several containers at once or a single container through its long (12.2 m) axis do not appear to be viable with a portable interrogation system.
Pluri-annual sediment budget in a navigated river system: the Seine River (France).
Vilmin, Lauriane; Flipo, Nicolas; de Fouquet, Chantal; Poulin, Michel
2015-01-01
This study aims at quantifying pluri-annual Total Suspended Matter (TSM) budgets, and notably the share of river navigation in total re-suspension at a long-term scale, in the Seine River along a 225 km stretch including the Paris area. Erosion is calculated based on the transport capacity concept with an additional term for the energy dissipated by river navigation. Erosion processes are fitted for the 2007-2011 period based on i) a hydrological typology of sedimentary processes and ii) a simultaneous calibration and retrospective validation procedure. The correlation between observed and simulated TSM concentrations is higher than 0.91 at all monitoring stations. A variographic analysis points out the possible sources of discrepancies between the variabilities of observed and simulated TSM concentrations at three time scales: sub-weekly, monthly and seasonal. Most of the error on the variability of simulated concentrations concerns sub-weekly variations and may be caused by boundary condition estimates rather than by the modeling of in-river processes. Once fitted, the model shows that only a small fraction of the TSM flux settles onto the river bed (<0.3‰). River navigation contributes significantly to TSM re-suspension on average (about 20%) and during low flow periods (over 50%). Given the significant impact that sedimentary processes can have on the water quality of rivers, these results highlight the importance of taking into account river navigation as a source of re-suspension, especially during low flow periods when biogeochemical processes are the most intense.
NASA Astrophysics Data System (ADS)
Bilbro, Griff L.; Hou, Danqiong; Yin, Hong; Trew, Robert J.
2009-02-01
We have quantitatively modeled the conduction current and charge storage of an HFET in terms of its physical dimensions and material properties. For DC or small-signal RF operation, no adjustable parameters are necessary to predict the terminal characteristics of the device. Linear performance measures such as small-signal gain and input admittance can be predicted directly from the geometric structure and material properties assumed for the device design. We have validated our model at low frequency against experimental I-V measurements and against two-dimensional device simulations. We discuss our recent extension of our model to include a larger class of electron velocity-field curves. We also discuss the recent reformulation of our model to facilitate its implementation in commercial large-signal high-frequency circuit simulators. Large signal RF operation is more complex. First, the highest CW microwave power is fundamentally bounded by a brief, reversible channel breakdown in each RF cycle. Second, the highest experimental measurements of efficiency, power, or linearity always require harmonic load pull and possibly also harmonic source pull. Presently, our model accounts for these facts with an adjustable breakdown voltage and with adjustable load impedances and source impedances for the fundamental frequency and its harmonics. This has allowed us to validate our model for large signal RF conditions by simultaneously fitting experimental measurements of output power, gain, and power added efficiency of real devices. We show that the resulting model can be used to compare alternative device designs in terms of their large signal performance, such as their output power at 1 dB gain compression or their third order intercept points.
In addition, the model provides insight into new device physics features enabled by the unprecedented current and voltage levels of AlGaN/GaN HFETs, including non-ohmic resistance in the source access regions and partial depletion of the 2DEG in the drain access region.
Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo
2016-01-01
Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies where the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes are less than 5 mm, and normalized connectivity errors are ~20%, in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity).
Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
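The final analysis stage can be sketched as a pairwise fixed-lag Granger test built from two OLS fits, a simplification of the full pipeline; the simulated two-node network below is invented:

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for the null 'past x adds nothing to predicting y'
    beyond past y (fixed-lag pairwise Granger causality via OLS)."""
    n = len(y)
    Y = y[lag:]
    past_y = np.column_stack([y[lag - k - 1 : n - k - 1] for k in range(lag)])
    past_x = np.column_stack([x[lag - k - 1 : n - k - 1] for k in range(lag)])
    ones = np.ones((n - lag, 1))
    Xr = np.hstack([ones, past_y])            # restricted model: past y only
    Xf = np.hstack([ones, past_y, past_x])    # full model: past y and past x
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return float(np.sum((Y - X @ beta) ** 2))
    rss_r, rss_f = rss(Xr), rss(Xf)
    df2 = (n - lag) - Xf.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / df2)

# Simulated two-node network: node x drives node y with a one-sample delay.
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.3 * rng.standard_normal()
print(granger_f(x, y) > granger_f(y, x))   # directionality: x -> y
```

In the combined pipeline the inputs x and y would be the source time-courses extracted at the imaged network nodes rather than raw sensor signals, and significance would be judged against the F distribution.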
Kishii, Y; Kawasaki, S; Kitagawa, A; Muramatsu, M; Uchida, T
2014-02-01
A compact ECR ion source has been utilized for carbon radiotherapy. In order to increase beam intensity with a higher electric field at the extraction electrode and to improve ion-supply stability over long periods, the geometry and surface conditions of the extraction electrode have been studied. Focusing on the black deposits observed around the extraction electrode after long-term use, the relation between these deposits and the electrical insulation property is investigated. The deposits were inspected for thickness, surface roughness, structural arrangement (examined using Raman spectroscopy), and electric-discharge characteristics in a test bench set up to simulate the ECR ion source.
Ground-Motion Variability for a Strike-Slip Earthquake from Broadband Ground-Motion Simulations
NASA Astrophysics Data System (ADS)
Iwaki, A.; Maeda, T.; Morikawa, N.; Fujiwara, H.
2016-12-01
One of the important issues in seismic hazard analysis is the evaluation of ground-motion variability due to the epistemic and aleatory uncertainties in various aspects of ground-motion simulations. This study investigates the within-event ground-motion variability in broadband ground-motion simulations for strike-slip events. We conduct ground-motion simulations for a past event (the 2000 MW6.6 Tottori earthquake) using a set of characterized source models (e.g. Irikura and Miyake, 2011) considering aleatory variability. Broadband ground motion is computed by a hybrid approach that combines a 3D finite-difference method (> 1 s) and the stochastic Green's function method (< 1 s), using the 3D velocity model J-SHIS v2. We consider various locations of the asperities, which are defined as the regions with large slip and stress drop within the fault, and of the rupture nucleation point (hypocenter). Ground-motion records at 29 K-NET and KiK-net stations are used to validate our simulations. By comparing the simulated and observed ground motion, we found that the performance of the simulations is acceptable given that the source parameters are poorly constrained. In addition to the observation stations, we set 318 virtual receivers at 10-km spatial intervals for statistical analysis of the simulated ground motion. The maximum fault distance is 160 km. The standard deviation (SD) of the simulated acceleration response spectra (Sa, 5% damped) of the RotD50 component (Boore, 2010) is investigated at each receiver. SD from 50 different patterns of asperity locations is generally smaller than 0.15 in terms of log10 (0.34 in natural log). It shows distance dependence at periods shorter than 1 s; SD increases as the distance decreases. On the other hand, SD from 39 different hypocenter locations is also smaller than 0.15 in log10, and shows azimuthal dependence at long periods; it increases as the rupture directivity parameter Xcosθ (Somerville et al. 1997) increases at periods longer than 1 s. The characteristics of ground-motion variability inferred from simulations can provide information on variability in simulation-based seismic hazard assessment for future earthquakes. We will further investigate the variability due to other source parameters, such as rupture velocity and short-period level.
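The per-receiver variability statistic described above is the standard deviation of log10 Sa across source realizations. A minimal sketch on synthetic values (not the study's simulations; the median and sigma below are arbitrary) shows the computation and the log10-to-natural-log conversion quoted in the abstract:

```python
import numpy as np

def within_event_sd(sa_matrix):
    """Standard deviation of log10 spectral acceleration across source
    realizations (rows), computed per receiver (columns)."""
    return np.std(np.log10(sa_matrix), axis=0, ddof=1)

rng = np.random.default_rng(42)
median_sa = 0.2       # hypothetical median Sa (g) at one receiver
sigma_log10 = 0.15    # target variability level, as quoted in the abstract
# 50 source realizations (e.g. asperity-location patterns) at one receiver
sa = median_sa * 10 ** (sigma_log10 * rng.standard_normal((50, 1)))
sd = within_event_sd(sa)[0]
# 0.15 in log10 units corresponds to 0.15 * ln(10) ≈ 0.345 in natural log
```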
Lineal energy calibration of mini tissue-equivalent gas-proportional counters (TEPC)
NASA Astrophysics Data System (ADS)
Conte, V.; Moro, D.; Grosswendt, B.; Colautti, P.
2013-07-01
Mini TEPCs are cylindrical gas proportional counters with a sensitive-volume diameter of 1 mm or less. The lineal energy calibration of these tiny counters can be performed with an external gamma-ray source. To do so, however, a method must first be found to obtain a simple and precise spectral mark, and then the keV/μm value of this mark must be determined. A precise method (less than 1% uncertainty) to identify this mark is described here, and the lineal energy value of the mark has been measured for different simulated site sizes by using a 137Cs gamma source and a cylindrical TEPC equipped with a precision internal 244Cm alpha-particle source and filled with a propane-based tissue-equivalent gas mixture. Mini TEPCs can thus be calibrated in terms of lineal energy, by exposing them to 137Cs sources, with an overall uncertainty of about 5%.
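Once a spectral mark is identified, calibration reduces to rescaling the channel axis by the mark's known lineal energy. A hypothetical sketch follows, in which the mark is taken as the steepest falling edge of a synthetic spectrum and assigned an illustrative value of 90 keV/μm; the actual mark-identification method and keV/μm value are what the paper itself determines:

```python
import numpy as np

def calibrate_lineal_energy(channels, counts, y_mark_kev_um):
    """Locate the spectral mark as the steepest falling edge of the
    distribution, then rescale channels to keV/um (hypothetical scheme)."""
    slope = np.gradient(counts, channels.astype(float))
    mark_channel = channels[np.argmin(slope)]
    return channels * (y_mark_kev_um / mark_channel), mark_channel

# Synthetic spectrum with a sharp falling edge centred at channel 400
channels = np.arange(1, 1001)
counts = 1000.0 / (1.0 + np.exp((channels - 400) / 10.0))
calibrated, mark_channel = calibrate_lineal_energy(channels, counts, 90.0)
```

After rescaling, the channel holding the mark reads exactly the assigned 90 keV/μm.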
NASA Astrophysics Data System (ADS)
Schoeppner, M.; Plastino, W.; Budano, A.; De Vincenzi, M.; Ruggieri, F.
2012-04-01
Several nuclear reactors at the Fukushima Dai-ichi power plant were severely damaged by the Tōhoku earthquake and the subsequent tsunami in March 2011. Due to the extremely difficult on-site situation, it has not been possible to directly determine the emissions of radioactive material. However, during the following days and weeks, radionuclides of caesium-137 and iodine-131 (amongst others) were detected at monitoring stations throughout the world. Atmospheric transport models are able to simulate the worldwide dispersion of particles according to the location, time and meteorological conditions following the release. The Lagrangian atmospheric transport model Flexpart is used by many authorities and has been proven to make valid predictions in this regard. The Flexpart software was first ported to a local cluster computer at the Grid Lab of INFN and the Department of Physics of University of Roma Tre (Rome, Italy) and subsequently also to the European Mediterranean Grid (EUMEDGRID). With this computing power available, it has been possible to simulate the transport of particles originating from the Fukushima Dai-ichi plant site. Using the time series of the sampled concentration data and the assumption that the Fukushima accident was the only source of these radionuclides, it has been possible to estimate the time-dependent source term for fourteen days following the accident using the atmospheric transport model. A reasonable agreement has been obtained between the modelling results and the estimated radionuclide release rates from the Fukushima accident.
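The inversion step described above, recovering a time-dependent source term from sampled concentrations plus model-simulated transport, can be sketched as a linear least-squares problem. The source-receptor matrix below is invented for illustration (a decaying transport kernel); in practice Flexpart simulations would supply it:

```python
import numpy as np

# Hypothetical source-receptor matrix: element [i, j] is the simulated
# concentration at observation time i per unit release on day j
days = 14
M = np.array([[np.exp(-0.3 * (i - j)) if i >= j else 0.0
               for j in range(days)] for i in range(days)])

s_true = np.linspace(5.0, 1.0, days)   # assumed daily release rates (arbitrary units)
c_obs = M @ s_true                     # "measured" concentration time series

# Invert the linear relation c = M s for the unknown source term s
s_est, *_ = np.linalg.lstsq(M, c_obs, rcond=None)
```

With noise-free synthetic data and a well-conditioned kernel, the release-rate history is recovered exactly; real inversions would add regularisation and non-negativity constraints.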
2017-05-31
SUBJECT TERMS: nonlinear finite element calculations, nuclear explosion monitoring, topography. [Recovered figure caption: "Figure 6. The CRAM 3D finite element outer grid (left) is rectangular. The inner grid (center) is shaped to match the shape of the explosion shock wave."]
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
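The phenomenon studied in the paper can be illustrated on the simplest possible case: explicit Euler applied to the logistic equation u' = u(1 − u), whose only true steady states are 0 and 1 (this is a generic sketch, not the paper's reaction/convection model problem). For a large enough step size the iteration locks onto a spurious period-2 numerical "solution" that is not a steady state of the differential equation:

```python
def euler_orbit(dt, u0=0.5, steps=2000):
    """Iterate explicit Euler on u' = u(1 - u); true steady states are 0 and 1."""
    u = u0
    hist = []
    for _ in range(steps):
        u = u + dt * u * (1.0 - u)
        hist.append(u)
    return hist

small = euler_orbit(0.5)   # converges to the genuine steady state u = 1
large = euler_orbit(2.3)   # locks onto a spurious period-2 numerical solution
```

With dt = 0.5 the iterates settle on u = 1, while with dt = 2.3 they oscillate permanently between two values bracketing 1, mimicking a converged but spurious asymptotic state.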
NASA Astrophysics Data System (ADS)
Poulet, T.; Veveakis, M.; Paesold, M.; Regenauer-Lieb, K.
2014-12-01
Multiphysics modelling has become an indispensable tool for geoscientists to simulate the complex behaviours observed in their various fields of study, where multiple processes are involved, including thermal, hydraulic, mechanical and chemical (THMC) laws. This modelling activity involves simulations that are computationally expensive, and its soaring uptake is tightly linked to the increasing availability of supercomputing power and easy access to powerful nonlinear solvers such as PETSc (http://www.mcs.anl.gov/petsc/). The Multiphysics Object-Oriented Simulation Environment (MOOSE) is a finite-element, multiphysics framework (http://mooseframework.org) that can harness such computational power and allows scientists to easily develop tightly coupled, fully implicit multiphysics simulations that run automatically in parallel on large clusters. This open-source framework provides a powerful tool for collaboration on numerical modelling activities, and we are contributing to its development with REDBACK (https://github.com/pou036/redback), a module for Rock mEchanics with Dissipative feedBACKs. REDBACK builds on the tensor-mechanics finite-strain implementation available in MOOSE to provide a THMC simulator whose energetic formulation highlights the importance of all dissipative terms in the coupled system of equations. We show first applications of fully coupled dehydration reactions triggering episodic fluid transfer through shear zones (Alevizos et al., 2014). The dimensionless approach used allows focusing on the critical underlying variables that drive the observed behaviours, and the tool is specifically designed to study material instabilities underpinning geological features like faulting, folding, boudinage, shearing and fracturing.
REDBACK provides a collaborative and educational tool that captures the physical and mathematical understanding of such material instabilities and offers an easy way to apply this knowledge to realistic scenarios, where the size and complexity of the geometries considered, along with the distributions of material parameters, introduce additional sources of instability. References: Alevizos, S., T. Poulet, and E. Veveakis (2014), J. Geophys. Res., 119, 4558-4582, doi:10.1002/2013JB010070.
Numerical Modeling of Thermal-Hydrology in the Near Field of a Generic High-Level Waste Repository
NASA Astrophysics Data System (ADS)
Matteo, E. N.; Hadgu, T.; Park, H.
2016-12-01
Disposal in a deep geologic repository is one of the preferred options for long-term isolation of high-level nuclear waste. Coupled thermal-hydrologic processes induced by decay heat from the radioactive waste may impact fluid flow and the associated migration of radionuclides. This study examined the effects of those processes in simulations of thermal-hydrology for the emplacement of U.S. Department of Energy managed high-level waste and spent nuclear fuel. Most of the high-level waste sources have lower thermal output, which reduces the impact of thermal propagation. In order to quantify the thermal limits, this study concentrated on the higher-thermal-output sources and on spent nuclear fuel. The study assumed a generic nuclear waste repository at 500 m depth. For the modeling, a representative domain covering a portion of the repository layout was selected in order to conduct a detailed thermal analysis. A highly refined unstructured mesh was utilized, with refinements near heat sources and at intersections of different materials. Simulations examined different values for the properties of components of the engineered barrier system (i.e. buffer, disturbed rock zone and host rock). The simulations also examined the effects of different durations of surface aging of the waste to reduce thermal perturbations. The PFLOTRAN code (Hammond et al., 2014) was used for the simulations. Modeling results for the different options are reported and include temperature and fluid flow profiles in the near field at different simulation times. References: G. E. Hammond, P.C. Lichtner and R.T. Mills, "Evaluating the Performance of Parallel Subsurface Simulators: An Illustrative Example with PFLOTRAN", Water Resources Research, 50, doi:10.1002/2012WR013483 (2014).
Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. SAND2016-7510 A
21 CFR 352.71 - Light source (solar simulator).
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 5 2010-04-01 2010-04-01 false Light source (solar simulator). 352.71 Section 352... Procedures § 352.71 Light source (solar simulator). A solar simulator used for determining the SPF of a... nanometers. In addition, a solar simulator should have no significant time-related fluctuations in radiation...
21 CFR 352.71 - Light source (solar simulator).
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 5 2011-04-01 2011-04-01 false Light source (solar simulator). 352.71 Section 352... Procedures § 352.71 Light source (solar simulator). A solar simulator used for determining the SPF of a... nanometers. In addition, a solar simulator should have no significant time-related fluctuations in radiation...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Englbrecht, F; Lindner, F; Bin, J
2016-06-15
Purpose: To measure and simulate well-defined electron spectra using a linear accelerator and a permanent-magnetic wide-angle spectrometer to test the performance of a novel reconstruction algorithm for retrieval of unknown electron-sources, in view of application to diagnostics of laser-driven particle acceleration. Methods: Six electron energies (6, 9, 12, 15, 18 and 21 MeV, 40cm × 40cm field-size) delivered by a Siemens Oncor linear accelerator were recorded using a permanent-magnetic wide-angle electron spectrometer (150mT) with a one dimensional slit (0.2mm × 5cm). Two dimensional maps representing beam-energy and entrance-position along the slit were measured using different scintillating screens, read by an online CMOS detector of high resolution (0.048mm × 0.048mm pixels) and large field of view (5cm × 10cm). Measured energy-slit position maps were compared to forward FLUKA simulations of electron transport through the spectrometer, starting from IAEA phase-spaces of the accelerator. The latter ones were validated against measured depth-dose and lateral profiles in water. Agreement of forward simulation and measurement was quantified in terms of position and shape of the signal distribution on the detector. Results: Measured depth-dose distributions and lateral profiles in the water phantom showed good agreement with forward simulations of IAEA phase-spaces, thus supporting usage of this simulation source in the study. Measured energy-slit position maps and those obtained by forward Monte-Carlo simulations showed satisfactory agreement in shape and position. Conclusion: Well-defined electron beams of known energy and shape will provide an ideal scenario to study the performance of a novel reconstruction algorithm using measured and simulated signal.
Future work will increase the stability and convergence of the reconstruction-algorithm for unknown electron sources, towards final application to the electrons which drive the interaction of TW-class laser pulses with nanometer thin target foils to accelerate protons and ions to multi-MeV kinetic energy. Cluster of Excellence of the German Research Foundation (DFG) “Munich-Centre for Advanced Photonics”.
NASA Astrophysics Data System (ADS)
Landazuri, Andrea C.
This dissertation focuses on aerosol transport modeling in occupational environments and mining sites in Arizona using computational fluid dynamics (CFD). The impacts of human exposure in both environments are explored with emphasis on turbulence, wind speed, wind direction and particle sizes. Final emissions simulations involved digitizing the available elevation contour plots of one of the mining sites to account for realistic topographical features. The digital elevation map (DEM) of one of the sites was imported to COMSOL MULTIPHYSICS® for subsequent turbulence and particle simulations. Simulation results that include realistic topography show considerable deviations of wind direction. Inter-element correlation results, using metal and metalloid size-resolved concentration data from a Micro-Orifice Uniform Deposit Impactor (MOUDI) under given wind speeds and directions, provided guidance on groups of metals that coexist throughout mining activities. The groups Fe-Mg, Cr-Fe, Al-Sc, Sc-Fe, and Mg-Al are strongly correlated for unrestricted wind directions and speeds, suggesting that the source may be of soil origin (e.g. ore and tailings); also, groups of elements where Cu is present, in the coarse fraction range, may come from mechanical-action mining activities and the saltation phenomenon. In addition, MOUDI data under low wind speeds (<2 m/s) and at night showed a strong correlation for 1 μm particles between the groups Sc-Be-Mg, Cr-Al, Cu-Mn, Cd-Pb-Be, Cd-Cr, Cu-Pb, Pb-Cd, and As-Cd-Pb. The As-Cd-Pb group correlates strongly in almost all ranges of particle sizes. When restricted low wind speeds were imposed, more groups of elements became evident, which may be explained by the fact that at lower speeds particles are more likely to settle.
When linking these results with CFD simulations and Pb-isotope results, it is concluded that the elements found in association with Pb in the fine fraction come from the ore that is subsequently processed at the smelter site, whereas the source of elements associated with Pb in the coarse fraction is of different origin. CFD simulation results not only provide realistic and quantifiable information in terms of potential deleterious effects, but also demonstrate that the application of CFD represents an important contribution to current dispersion modeling studies; Computational Fluid Dynamics can therefore be used as a source apportionment tool to identify areas that have an effect over specific sampling points and susceptible regions under certain meteorological conditions, and these conclusions can be supported with inter-element correlation matrices and lead isotope analysis, especially since there is limited access to the mining sites. Additional results showed that grid adaptation is a powerful tool that allows refinement of specific regions requiring great detail, and therefore better resolves flow detail, provides a higher number of locations with monotonic convergence than the manual grids, and requires the least computational effort. CFD simulations were approached using the k-epsilon model, with the aid of the computer-aided engineering software ANSYS® and COMSOL MULTIPHYSICS®. The success of aerosol transport simulations depends on a good simulation of the turbulent flow. Considerable attention was placed on investigating and choosing the best models in terms of convergence, independence and computational effort. This dissertation also includes preliminary studies of transient discrete phase, Eulerian and species transport modeling, the importance of saltation of particles, information on CFD methods, and strategies for future directions.
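The inter-element correlation screening described above reduces to a correlation matrix over element concentration series. A sketch on synthetic data (the element list, mixing coefficients and sample count are invented; MOUDI size-resolved concentrations would be used in practice) shows how a shared soil-like source produces the strongly correlated groups:

```python
import numpy as np

elements = ["Fe", "Mg", "Cr", "Cu"]
rng = np.random.default_rng(7)
soil = rng.lognormal(0.0, 0.5, 60)           # common crustal source signal
conc = np.vstack([
    soil * 1.8,                               # Fe tracks the soil source
    soil * 0.6 + rng.normal(0, 0.02, 60),     # Mg tracks it closely too
    soil * 0.3 + rng.normal(0, 0.3, 60),      # Cr only partly tracks it
    rng.lognormal(0.0, 0.5, 60),              # Cu from an unrelated source
])
r = np.corrcoef(conc)                         # inter-element correlation matrix
```

Fe-Mg comes out strongly correlated (shared source), while Fe-Cu does not, mirroring the soil-origin reasoning in the text.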
Lattice Boltzmann simulations of heat transfer in fully developed periodic incompressible flows
NASA Astrophysics Data System (ADS)
Wang, Zimeng; Shang, Helen; Zhang, Junfeng
2017-06-01
Flow and heat transfer in periodic structures are of great interest for many applications. In this paper, we carefully examine the periodic features of fully developed periodic incompressible thermal flows, and incorporate them in the lattice Boltzmann method (LBM) for flow and heat transfer simulations. Two numerical approaches, the distribution modification (DM) approach and the source term (ST) approach, are proposed; and they can both be used for periodic thermal flows with constant wall temperature (CWT) and surface heat flux boundary conditions. However, the DM approach might be more efficient, especially for CWT systems, since the ST approach requires calculations of the streamwise temperature gradient at all lattice nodes. Several example simulations are conducted, including flows through flat and wavy channels and flows through a square array with circular cylinders. Results are compared to analytical solutions, previous studies, and our own LBM calculations using different simulation techniques (i.e., the one-module simulation vs. the two-module simulation, and the DM approach vs. the ST approach) with good agreement. These simple yet representative simulations demonstrate the accuracy and usefulness of our proposed LBM methods for future thermal periodic flow simulations.
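The general idea of injecting a source term into a thermal LBM update can be illustrated with a minimal D1Q3 diffusion solver with a uniform volumetric source; this is a generic sketch under simplifying assumptions, not the paper's DM or ST formulations for periodic flows:

```python
import numpy as np

# D1Q3 lattice Boltzmann heat diffusion with a uniform source term.
# Steady state should approach the parabolic profile T = S*x*(L-x)/(2*alpha).
n, tau, src = 101, 1.0, 1e-4
w = np.array([2 / 3, 1 / 6, 1 / 6])       # weights for rest, +1, -1 velocities
alpha = (tau - 0.5) / 3.0                 # thermal diffusivity in lattice units
f = np.outer(w, np.zeros(n))              # distributions, start from T = 0

for _ in range(60000):
    T = f.sum(axis=0)
    feq = np.outer(w, T)
    f += (feq - f) / tau + np.outer(w, np.full(n, src))  # collide + source
    f[1] = np.roll(f[1], 1)               # stream right-moving population
    f[2] = np.roll(f[2], -1)              # stream left-moving population
    f[:, 0] = w * 0.0                     # crude Dirichlet T = 0 at both ends
    f[:, -1] = w * 0.0

T = f.sum(axis=0)                         # peak should be near S*L^2/(8*alpha) = 0.75
```

The converged profile is symmetric with peak ≈ 0.75 at mid-channel, matching the analytic Poisson solution for this source strength and diffusivity.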
A One Dimensional, Time Dependent Inlet/Engine Numerical Simulation for Aircraft Propulsion Systems
NASA Technical Reports Server (NTRS)
Garrard, Doug; Davis, Milt, Jr.; Cole, Gary
1999-01-01
The NASA Lewis Research Center (LeRC) and the Arnold Engineering Development Center (AEDC) have developed a closely coupled computer simulation system that provides a one dimensional, high frequency inlet/engine numerical simulation for aircraft propulsion systems. The simulation system, operating under the LeRC-developed Application Portable Parallel Library (APPL), closely coupled a supersonic inlet with a gas turbine engine. The supersonic inlet was modeled using the Large Perturbation Inlet (LAPIN) computer code, and the gas turbine engine was modeled using the Aerodynamic Turbine Engine Code (ATEC). Both LAPIN and ATEC provide a one dimensional, compressible, time dependent flow solution by solving the one dimensional Euler equations for the conservation of mass, momentum, and energy. Source terms are used to model features such as bleed flows, turbomachinery component characteristics, and inlet subsonic spillage while unstarted. High frequency events, such as compressor surge and inlet unstart, can be simulated with a high degree of fidelity. The simulation system was exercised using a supersonic inlet with sixty percent of the supersonic area contraction occurring internally, and a GE J85-13 turbojet engine.
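The role of source terms in such a 1-D solver can be illustrated on a scalar stand-in for the Euler equations: advection through a duct with a distributed "bleed"-like sink, u_t + a u_x = −k u, marched to steady state with first-order upwinding. This is only a sketch of the idea; LAPIN and ATEC solve the full coupled Euler system with far richer source models:

```python
import numpy as np

a, k, length, ncells = 1.0, 0.5, 1.0, 200   # advection speed, sink rate, duct
dx = length / ncells
dt = 0.5 * dx / a                           # CFL = 0.5
u = np.zeros(ncells)
inlet = 1.0

for _ in range(4000):                       # march to steady state
    upwind = np.concatenate(([inlet], u[:-1]))
    u = u - a * dt / dx * (u - upwind) - dt * k * u   # advection + sink source

outlet = u[-1]   # steady analytic value is inlet * exp(-k * length / a)
```

The steady outlet value matches the analytic exponential decay exp(−kL/a) ≈ 0.607 to well within the truncation error of the scheme.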
A new lumped-parameter model for flow in unsaturated dual-porosity media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimmerman, Robert W.; Hadgu, Teklu; Bodvarsson, Gudmundur S.
A new lumped-parameter approach to simulating unsaturated flow processes in dual-porosity media such as fractured rocks or aggregated soils is presented. Fluid flow between the fracture network and the matrix blocks is described by a non-linear equation that relates the imbibition rate to the local difference in liquid-phase pressure between the fractures and the matrix blocks. Unlike a Warren-Root-type equation, this equation is accurate in both the early and late time regimes. The fracture/matrix interflow equation has been incorporated into an existing unsaturated flow simulator, to serve as a source/sink term for fracture gridblocks. Flow processes are then simulated using only fracture gridblocks in the computational grid. This new lumped-parameter approach has been tested on two problems involving transient flow in fractured/porous media, and compared with simulations performed using explicit discretization of the matrix blocks. The new procedure seems to accurately simulate flow processes in unsaturated fractured rocks, and typically requires an order of magnitude less computational time than do simulations using fully-discretized matrix blocks.
Performance simulation of a grid connected photovoltaic power system using TRNSYS 17
NASA Astrophysics Data System (ADS)
Raja Sekhar, Y.; Ganesh, D.; Kumar, A. Suresh; Abraham, Raju; Padmanathan, P.
2017-11-01
Energy plays an important role in a country's economic growth. In the current energy scenario, the major problem is that non-renewable energy sources are being depleted faster than they are formed. One of the prominent solutions is to minimize the use of fossil fuels by utilizing renewable energy resources. A photovoltaic system is an efficient option in terms of utilizing the solar energy resource. The electricity output produced by photovoltaic systems depends upon the incident solar radiation. This paper examines the performance simulation of a 200 kW photovoltaic power system at VIT University, Vellore. The main objective of this paper is to correlate the predicted simulation data with the experimental data. The simulation tool used here is TRNSYS. Using TRNSYS modelling, the electricity produced throughout the year can be predicted with the help of the TRNSYS weather station. The deviation of the simulated results from the experimental results varies with the choice of weather station. Results from the field test and the simulation are correlated to attain the maximum performance of the system.
Inferring the nature of anthropogenic threats from long-term abundance records.
Shoemaker, Kevin T; Akçakaya, H Resit
2015-02-01
Diagnosing the processes that threaten species persistence is critical for recovery planning and risk forecasting. Dominant threats are typically inferred by experts on the basis of a patchwork of informal methods. Transparent, quantitative diagnostic tools would contribute much-needed consistency, objectivity, and rigor to the process of diagnosing anthropogenic threats. Long-term census records, available for an increasingly large and diverse set of taxa, may exhibit characteristic signatures of specific threatening processes and thereby provide information for threat diagnosis. We developed a flexible Bayesian framework for diagnosing threats on the basis of long-term census records and diverse ancillary sources of information. We tested this framework with simulated data from artificial populations subjected to varying degrees of exploitation and habitat loss and several real-world abundance time series for which threatening processes are relatively well understood: southern bluefin tuna (Thunnus maccoyii) and Atlantic cod (Gadus morhua) (exploitation) and Red Grouse (Lagopus lagopus scotica) and Eurasian Skylark (Alauda arvensis) (habitat loss). Our method correctly identified the process driving population decline for over 90% of time series simulated under moderate to severe threat scenarios. Successful identification of threats approached 100% for severe exploitation and habitat loss scenarios. Our method identified threats less successfully when threatening processes were weak and when populations were simultaneously affected by multiple threats. Our method selected the presumed true threat model for all real-world case studies, although results were somewhat ambiguous in the case of the Eurasian Skylark.
In the latter case, incorporation of an ancillary source of information (records of land-use change) increased the weight assigned to the presumed true model from 70% to 92%, illustrating the value of the proposed framework in bringing diverse sources of information into a common rigorous framework. Ultimately, our framework may greatly assist conservation organizations in documenting threatening processes and planning species recovery. © 2014 Society for Conservation Biology.
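The core idea, competing process models explaining the same census record, can be sketched with a toy frequentist stand-in (this is not the paper's Bayesian framework; both models, their fitting, and the data are invented for illustration):

```python
import numpy as np

def diagnose(n):
    """Compare two one-parameter decline models on a census series:
    proportional decline N[t+1] = r*N[t] (habitat-loss-like signature) vs
    constant offtake N[t+1] = N[t] - h (exploitation-like signature)."""
    prev, nxt = n[:-1], n[1:]
    r = (prev @ nxt) / (prev @ prev)          # least-squares growth rate
    h = np.mean(prev - nxt)                   # least-squares offtake
    rss_prop = np.sum((nxt - r * prev) ** 2)
    rss_off = np.sum((nxt - (prev - h)) ** 2)
    # equal parameter counts, so the smaller residual wins directly
    return "proportional decline" if rss_prop < rss_off else "constant offtake"

# Simulate a population under constant harvest of 25 individuals per year
rng = np.random.default_rng(3)
n = [1000.0]
for _ in range(30):
    n.append(max(n[-1] - 25 + rng.normal(0, 5), 0.0))
verdict = diagnose(np.array(n))
```

On this exploitation-driven series the constant-offtake model fits markedly better, so the toy diagnostic returns the correct threat signature.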
Howard, David M; Pong-Wong, Ricardo; Knap, Pieter W; Kremer, Valentin D; Woolliams, John A
2018-05-10
Optimal contributions selection (OCS) provides animal breeders with a framework for maximising genetic gain for a predefined rate of inbreeding. Simulation studies have indicated that the source of the selective advantage of OCS is derived from breeding decisions being more closely aligned with estimates of Mendelian sampling terms ([Formula: see text]) of selection candidates, rather than estimated breeding values (EBV). This study represents the first attempt to assess the source of the selective advantage provided by OCS using a commercial pig population and by testing three hypotheses: (1) OCS places more emphasis on [Formula: see text] compared to EBV for determining which animals were selected as parents, (2) OCS places more emphasis on [Formula: see text] compared to EBV for determining which of those parents were selected to make a long-term genetic contribution (r), and (3) OCS places more emphasis on [Formula: see text] compared to EBV for determining the magnitude of r. The population studied also provided an opportunity to investigate the convergence of r over time. Selection intensity limited the number of males available for analysis, but females provided some evidence that the selective advantage derived from applying an OCS algorithm resulted from greater weighting being placed on [Formula: see text] during the process of decision-making. Male r were found to converge initially at a faster rate than female r, with approximately 90% convergence achieved within seven generations across both sexes. This study of commercial data provides some support to results from theoretical and simulation studies that the source of selective advantage from OCS comes from [Formula: see text]. The implication that genomic selection (GS) improves estimation of [Formula: see text] should allow for even greater genetic gains for a predefined rate of inbreeding, once the synergistic benefits of combining OCS and GS are realised.
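The Mendelian sampling term at the centre of this analysis is simply the deviation of an animal's estimated breeding value from its parent average, as a minimal sketch (EBV numbers invented for illustration) makes concrete:

```python
def mendelian_sampling_term(ebv_animal, ebv_sire, ebv_dam):
    """Deviation of an animal's EBV from its parent average; under OCS this
    within-family term, rather than the raw EBV, is the hypothesised driver
    of selection decisions."""
    return ebv_animal - 0.5 * (ebv_sire + ebv_dam)

# Two candidates with identical EBVs but different within-family deviations:
term_a = mendelian_sampling_term(12.0, 10.0, 6.0)   # parent average 8  -> +4
term_b = mendelian_sampling_term(12.0, 14.0, 12.0)  # parent average 13 -> -1
```

Candidate A carries the larger positive sampling term despite the equal EBVs, which is exactly the distinction the study probes.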
Unfolding the neutron spectrum of a NE213 scintillator using artificial neural networks.
Sharghi Ido, A; Bonyadi, M R; Etaati, G R; Shahriari, M
2009-10-01
Artificial neural network technology has been applied to unfold neutron spectra from the pulse-height distribution measured with an NE213 liquid scintillator. Here, both single- and multi-layer perceptron neural network models have been implemented to unfold the neutron spectrum from an Am-Be neutron source. The activation function and the connectivity of the neurons have been investigated, and the results have been analyzed in terms of the network's performance. The simulation results show that the neural network that utilizes the Satlins transfer function has the best performance. In addition, omitting the bias connection of the neurons improves the performance of the network. The SCINFUL code is used for generating the response functions in the training phase of the process. Finally, the results of the neural network simulation have been compared with those of the FORIST unfolding code for both (241)Am-Be and (252)Cf neutron sources. The results of the neural network are in good agreement with the FORIST code.
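The unfolding idea, learning a mapping from response-folded pulse-height distributions back to spectra from simulated training pairs, can be sketched with a single linear layer and a saturating-linear activation of the kind named in the abstract. The 3×3 response matrix below is invented; SCINFUL would supply real response functions:

```python
import numpy as np

def satlins(x):
    """Symmetric saturating linear transfer function: clips to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)

# Invented detector response matrix (columns: true spectrum bins)
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])

spectra = np.eye(3)            # training spectra: one unit spike per bin
pulses = spectra @ R.T         # simulated pulse-height responses
# Fit single-layer weights mapping pulse-height vectors back to spectra
W, *_ = np.linalg.lstsq(pulses, spectra, rcond=None)

s_true = np.array([0.2, 0.5, 0.3])           # an unseen test spectrum
recovered = satlins((R @ s_true) @ W)        # unfold its folded response
```

For this invertible toy response the layer recovers the test spectrum exactly, and the activation leaves in-range values untouched while clipping out-of-range ones.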
Thermal neutron calibration channel at LNMRI/IRD.
Astuto, A; Salgado, A P; Leite, S P; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T
2014-10-01
The Brazilian Metrology Laboratory of Ionizing Radiations (LNMRI) standard thermal neutron flux facility was designed to provide uniform neutron fluence for calibration of small neutron detectors and individual dosemeters. This fluence is obtained by neutron moderation from four (241)Am-Be sources, each with 596 GBq, in a facility built with blocks of graphite/paraffin compound and high-purity carbon graphite. This study was carried out in two steps. In the first step, simulations using the MCNPX code on different geometric arrangements of moderator materials and neutron sources were performed. The quality of the resulting neutron fluence in terms of spectrum, cadmium ratio and gamma-neutron ratio was evaluated. In the second step, the system was assembled based on the results obtained on the simulations, and new measurements are being made. These measurements will validate the system, and other intercomparisons will ensure traceability to the International System of Units. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises.
Marquis-Favre, Catherine; Morel, Julien
2015-07-21
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle for noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential to enhance the prediction of total annoyance. The work is based on a simulated environment experiment where participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second one was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with high differences in sound pressure level. Thus, these results reinforced the need to focus on perceptual models and to improve the prediction of partial annoyances.
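The "perceptual model with an interaction term" mentioned above has the structure of a regression of total annoyance on the two partial annoyances plus their product. A sketch with invented coefficients and data (not the paper's fitted model) shows the form and its recovery by least squares:

```python
import numpy as np

# Partial annoyance ratings (0-10 scale, invented) for road traffic and industry
a_road = np.array([2.0, 5.0, 7.0, 3.0, 8.0, 6.0, 4.0, 9.0])
a_ind = np.array([1.0, 4.0, 2.0, 6.0, 7.0, 3.0, 5.0, 2.0])

# Perceptual model with interaction: total = b0 + b1*road + b2*ind + b3*road*ind
total = 0.5 + 0.6 * a_road + 0.5 * a_ind + 0.03 * a_road * a_ind

X = np.column_stack([np.ones_like(a_road), a_road, a_ind, a_road * a_ind])
coef, *_ = np.linalg.lstsq(X, total, rcond=None)   # recover [b0, b1, b2, b3]
```

The interaction coefficient b3 captures the combined-source effect that a purely additive model would miss.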
Unsteady Flow Dynamics and Acoustics of Two-Outlet Centrifugal Fan Design
NASA Astrophysics Data System (ADS)
Wong, I. Y. W.; Leung, R. C. K.; Law, A. K. Y.
2011-09-01
In this study, a centrifugal fan design with two flow outlets is investigated. This design aims to provide a high mass flow rate with low noise. Two-dimensional unsteady flow simulation with the CFD code FLUENT 6.3 is carried out to analyze the fan flow dynamics and acoustics. The calculations were done using the unsteady Reynolds-averaged Navier-Stokes (URANS) approach, in which the effects of turbulence were accounted for using the κ-ɛ model. This work aims to provide insight into how the dominant noise source mechanisms vary with a key fan geometrical parameter, namely the ratio between the cutoff distance and the radius of curvature of the fan housing. Four new fan designs were calculated. Simulation results show that the unsteady flow-induced forces on the fan blades are the main noise sources. The blade force coefficients are then used to build the dipole source terms in the Ffowcs Williams and Hawkings (FW-H) equation for estimating their noise effects. It is found that one design is able to deliver 34% more mass flow, but with a sound pressure level (SPL) 10 dB lower, than the existing design.
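The last step, turning unsteady blade forces into noise estimates, can be illustrated with the compact-dipole far-field relation that underlies the FW-H loading-noise term, p' ≈ cosθ/(4π·c0·r)·dF/dt. This is a textbook sketch, not the paper's FLUENT/FW-H pipeline, and the force amplitude and frequency below are invented numbers.

```python
import math

def dipole_spl(force_amp, freq, r, theta=0.0, c0=343.0, p_ref=2e-5):
    """SPL (dB) of a compact dipole driven by F(t) = F0*sin(2*pi*f*t)."""
    dFdt_rms = 2.0 * math.pi * freq * force_amp / math.sqrt(2.0)   # rms of dF/dt
    p_rms = abs(math.cos(theta)) * dFdt_rms / (4.0 * math.pi * c0 * r)
    return 20.0 * math.log10(p_rms / p_ref)

# 0.5 N fluctuating blade force at 500 Hz, observer 1 m away on the dipole axis
print(round(dipole_spl(0.5, 500.0, 1.0), 1))
```

Doubling the observer distance reduces the level by about 6 dB, the expected far-field behaviour for such a source.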
PyFLOWGO: An open-source platform for simulation of channelized lava thermo-rheological properties
NASA Astrophysics Data System (ADS)
Chevrel, Magdalena Oryaëlle; Labroquère, Jérémie; Harris, Andrew J. L.; Rowland, Scott K.
2018-02-01
Lava flow advance can be modeled through tracking the evolution of the thermo-rheological properties of a control volume of lava as it cools and crystallizes. An example of such a model was conceived by Harris and Rowland (2001), who developed a 1-D model, FLOWGO, in which the velocity of a control volume flowing down a channel depends on rheological properties computed following the thermal path estimated via a heat balance box model. We provide here an updated version of FLOWGO written in Python, an open-source, modern and flexible language. Our software, named PyFLOWGO, allows selection of heat fluxes and rheological models of the user's choice to simulate the thermo-rheological evolution of the lava control volume. We describe its architecture, which offers more flexibility while reducing the risk of introducing errors when changing models in comparison with the previous FLOWGO version. Three cases are tested using actual data from channel-fed lava flow systems, and results are discussed in terms of model validation and convergence. PyFLOWGO is open-source and packaged as a Python library that can be imported and reused in any Python program (https://github.com/pyflowgo/pyflowgo).
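The control-volume idea can be sketched in a few lines: velocity from a Jeffreys-type relation with temperature-dependent viscosity, and cooling from a heat balance. This is a minimal illustration of the FLOWGO concept, not PyFLOWGO itself, whose heat fluxes and rheology models are configurable; all property values and the viscosity fit below are assumptions.

```python
import math

# Minimal FLOWGO-style control-volume step (illustrative only).
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.95        # emissivity (assumed)
RHO = 2600.0      # lava bulk density, kg m^-3 (assumed)
CP = 1150.0       # specific heat capacity, J kg^-1 K^-1 (assumed)
G = 9.81

def step(T, depth, width, slope, eta_fn, dx):
    """Advance one down-channel step of length dx; return (velocity, new T)."""
    eta = eta_fn(T)                                         # T-dependent viscosity
    v = RHO * G * depth ** 2 * math.sin(slope) / (3.0 * eta)  # Jeffreys equation
    q_rad = EPS * SIGMA * T ** 4 * width                    # radiative loss, W per m
    dT = -q_rad * dx / (RHO * CP * v * depth * width)       # heat balance over dx
    return v, T + dT

# Arrhenius-style viscosity model; coefficients purely illustrative.
eta = lambda T: 1e3 * math.exp(0.04 * (1400.0 - T))
v, T_new = step(T=1400.0, depth=2.0, width=5.0, slope=math.radians(5),
                eta_fn=eta, dx=100.0)
print(round(v, 2), round(T_new, 1))
```

Marching such steps down-channel, with the viscosity rising as the lava cools, reproduces the slow-down and eventual stopping that the full model tracks.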
NASA Astrophysics Data System (ADS)
Carvalho, David Joao da Silva
Portugal's high dependence on foreign energy sources (mainly fossil fuels), together with the international commitments assumed by Portugal and the national strategy in terms of energy policy, as well as resource sustainability and climate change issues, inevitably force Portugal to invest in its energy self-sufficiency. The 20/20/20 Strategy defined by the European Union stipulates that by 2020, 60% of total electricity consumption must come from renewable energy sources. Wind energy is currently a major source of electricity generation in Portugal, producing about 23% of the national total electricity consumption in 2013. The National Energy Strategy 2020 (ENE2020), which aims to ensure national compliance with the European 20/20/20 Strategy, states that about half of this 60% target will be provided by wind energy. This work aims to implement and optimise a numerical weather prediction model for the simulation and modelling of the wind energy resource in Portugal, in both offshore and onshore areas. The numerical model optimisation consisted in determining which initial and boundary conditions and which planetary boundary layer physical parameterization options provide wind power flux (or energy density), wind speed and direction simulations closest to in situ measured wind data. Specifically for offshore areas, it is also intended to evaluate whether the numerical model, once optimised, is able to produce power flux, wind speed and direction simulations more consistent with in situ measured data than wind measurements collected by satellites. This work also aims to study and analyse possible impacts that anthropogenic climate changes may have on the future wind energy resource in Europe.
The results show that the ECMWF ERA-Interim reanalysis is the forcing database that, among all those currently available to drive numerical weather prediction models, allows wind power flux, wind speed and direction simulations most consistent with in situ wind measurements. It was also found that the Pleim-Xiu and ACM2 planetary boundary layer parameterizations showed the best performance in terms of wind power flux, wind speed and direction simulations. This model optimisation allowed a significant reduction of the simulation errors and, specifically for offshore areas, produced wind power flux, wind speed and direction simulations more consistent with in situ wind measurements than data obtained from satellites, which is a very valuable achievement. This work also revealed that future anthropogenic climate changes can negatively impact the future European wind energy resource, owing to tendencies towards a reduction in future wind speeds, especially by the end of the current century and under stronger radiative forcing conditions.
A review on vegetation models and applicability to climate simulations at regional scale
NASA Astrophysics Data System (ADS)
Myoung, Boksoon; Choi, Yong-Sang; Park, Seon Ki
2011-11-01
The lack of accurate representations of biospheric components and their biophysical and biogeochemical processes is a great source of uncertainty in current climate models. The interactions between terrestrial ecosystems and the climate include exchanges not only of energy, water and momentum, but also of carbon and nitrogen. Reliable simulations of these interactions are crucial for predicting the potential impacts of future climate change and anthropogenic intervention on terrestrial ecosystems. In this paper, two biogeographical (Neilson's rule-based model and BIOME), two biogeochemical (BIOME-BGC and PnET-BGC), and three dynamic global vegetation models (Hybrid, LPJ, and MC1) were reviewed and compared in terms of their biophysical and physiological processes. The advantages and limitations of the models were also addressed. Lastly, the applications of the dynamic global vegetation models to regional climate simulations have been discussed.
Computer simulations of space-borne meteorological systems on the CYBER 205
NASA Technical Reports Server (NTRS)
Halem, M.
1984-01-01
Because of the extreme expense involved in developing and flight testing meteorological instruments, an extensive series of numerical modeling experiments to simulate the performance of meteorological observing systems was performed on the CYBER 205. The studies compare the relative importance of different global measurements of individual and composite systems of the meteorological variables needed to determine the state of the atmosphere. The assessments are made in terms of each system's ability to improve 12-hour global forecasts. Each experiment involves the daily assimilation of simulated data obtained from a data set called "nature". This data set is obtained from two sources: first, a two-month general circulation integration with the GLAS 4th-Order Forecast Model and, second, global analyses prepared twice daily by the National Meteorological Center, NOAA, from the current observing systems.
Perturbed redshifts from N-body simulations
NASA Astrophysics Data System (ADS)
Adamek, Julian
2018-01-01
In order to keep pace with the increasing data quality of astronomical surveys, the observed source redshift has to be modeled beyond the well-known Doppler contribution. In this article I examine the gauge issue that is often glossed over when one assigns a perturbed redshift to simulated data generated with a Newtonian N-body code. A careful analysis reveals the presence of a correction term that has so far been neglected. It is roughly proportional to the observed length scale divided by the Hubble scale and is therefore suppressed inside the horizon. However, on gigaparsec scales it can be comparable to the gravitational redshift and hence amounts to an important relativistic effect.
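The quoted scaling can be checked with a one-line order-of-magnitude estimate: a term proportional to the observed length scale over the Hubble scale is tiny at survey-internal distances but approaches unity on gigaparsec scales. The Hubble constant value below is an assumption for illustration.

```python
# Order-of-magnitude check of a correction term that scales as r / r_H
# (observed distance over Hubble radius): suppressed well inside the horizon,
# comparable to other relativistic terms on gigaparsec scales.

C_KM_S = 299792.458      # speed of light, km/s
H0 = 67.0                # Hubble constant, km/s/Mpc (assumed)

def suppression(r_mpc):
    """Ratio r / r_H for a comoving distance r in Mpc."""
    r_hubble = C_KM_S / H0   # Hubble radius, ~4475 Mpc
    return r_mpc / r_hubble

print(f"{suppression(100):.3f}")   # ~0.02 at 100 Mpc: safely negligible
print(f"{suppression(3000):.2f}")  # ~0.7 at 3 Gpc: no longer negligible
```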
Experimentally validated finite element model of electrocaloric multilayer ceramic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, N. A. S.; Correia, T. M.; Rokosz, M. K.
2014-07-28
A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.
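The structure of such a model, transient conduction plus a volumetric source term for the electrocaloric effect, can be sketched with a 1-D explicit finite-difference analogue. This is a didactic sketch, not the paper's 2-D FEM; all material values and the source magnitude are invented, and radiation/convection are omitted.

```python
# 1-D explicit finite-difference analogue of transient conduction with the
# electrocaloric effect as a volumetric source term q (illustrative values).

def simulate(n=51, L=1e-3, alpha=1e-6, q=1e7, rho_cp=3e6, t_end=1e-3):
    """Return temperature-rise profile after t_end, fixed-T boundaries."""
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / alpha            # satisfies explicit stability limit
    T = [0.0] * n                         # rise above ambient, K
    t = 0.0
    while t < t_end:
        Tn = T[:]
        for i in range(1, n - 1):
            cond = alpha * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx ** 2
            Tn[i] = T[i] + dt * (cond + q / rho_cp)   # conduction + EC source
        T = Tn
        t += dt
    return T

T = simulate()
print(round(max(T), 4))                   # peak rise at the midplane
```

The balance between the source term and conduction to the fixed-temperature boundaries sets the transient profile, which is exactly what the imaging experiment measures on the real MLCC.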
Towards next generation time-domain diffuse optics devices
NASA Astrophysics Data System (ADS)
Dalla Mora, Alberto; Contini, Davide; Arridge, Simon R.; Martelli, Fabrizio; Tosi, Alberto; Boso, Gianluca; Farina, Andrea; Durduran, Turgut; Martinenghi, Edoardo; Torricelli, Alessandro; Pifferi, Antonio
2015-03-01
Diffuse optics is growing in terms of applications, ranging from oximetry to mammography, molecular imaging, quality assessment of food and pharmaceuticals, wood optics, and the physics of random media. Time-domain (TD) approaches, although appealing in terms of quantitation and depth sensitivity, are presently limited to large fiber-based systems with a limited number of source-detector pairs. We present a miniaturized TD source-detector probe embedding integrated laser sources and single-photon detectors. Some electronics are still external (e.g. power supply, pulse generators, timing electronics), yet full integration on-board using already proven technologies is feasible. The novel devices were successfully validated on heterogeneous phantoms, showing performance comparable to large state-of-the-art TD rack-based systems. With an investigation based on simulations, we provide numerical evidence that stacking many TD compact source-detector pairs in a dense, null source-detector distance arrangement could yield about one decade higher contrast on the brain cortex as compared to a continuous-wave (CW) approach. Further, a 3-fold increase in the maximum depth (down to 6 cm) is estimated, opening accessibility to new organs such as the lung or the heart. Finally, these new technologies show the way towards compact and wearable TD probes with orders-of-magnitude reduction in size and cost, for a widespread use of TD devices in real life.
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Correcting STIS CCD Point-Source Spectra for CTE Loss
NASA Technical Reports Server (NTRS)
Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus
2006-01-01
We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, rendering the Poisson noise associated with the source detection itself the dominant contributor to the total flux calibration uncertainty.
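The shape of such a correction can be sketched as follows: a fractional loss that grows with time on orbit and shrinks with source brightness and background, divided out of the measured flux. The functional form and every coefficient here are illustrative placeholders, not the published STIS algorithm.

```python
import math

def cte_loss(counts, background, years_on_orbit,
             a=0.056, b=0.82, c=0.205):
    """Fractional flux loss for a point-source spectrum (illustrative form)."""
    return a * counts ** (-b) * math.exp(-c * background) * years_on_orbit

def cte_correct(flux, counts, background, years):
    """Divide out the parameterized loss to recover the true flux."""
    return flux / (1.0 - cte_loss(counts, background, years))

# A faint source on a low background late in the mission needs the largest
# correction; a bright source on a high background needs almost none.
print(round(cte_correct(100.0, 200.0, 0.5, 7.0), 2))
print(round(cte_correct(100.0, 20000.0, 10.0, 7.0), 2))
```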
SU-F-T-50: Evaluation of Monte Carlo Simulations Performance for Pediatric Brachytherapy Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatzipapas, C; Kagadis, G; Papadimitroulas, P
Purpose: Pediatric tumors are generally treated with multi-modal procedures. Brachytherapy can be used with pediatric tumors, especially given that in this patient population low toxicity on normal tissues is critical, as is the suppression of the probability for late malignancies. Our goal is to validate the GATE toolkit on realistic brachytherapy applications, and to evaluate brachytherapy plans on pediatrics for accurate dosimetry on sensitive and critical organs of interest. Methods: The GATE Monte Carlo (MC) toolkit was used. Two High Dose Rate (HDR) 192Ir brachytherapy sources were simulated (Nucletron mHDR-v1 and Varian VS2000) and fully validated using the AAPM and ESTRO protocols. A realistic brachytherapy plan was also simulated using the XCAT anthropomorphic computational model. The simulated data were compared to the clinical dose points. Finally, a 14-year-old girl with vaginal rhabdomyosarcoma was modelled based on clinical procedures for the calculation of the absorbed dose per organ. Results: The MC simulations resulted in accurate dosimetry in terms of dose rate constant (Λ), radial dose function g_L(r) and anisotropy function F(r,θ) for both sources. The simulations were executed using ∼10^10 primaries, resulting in statistical uncertainties lower than 2%. The differences between the theoretical values and the simulated ones ranged from 0.01% up to 3.3%, with the largest discrepancy (6%) being observed in the dose rate constant calculation. The simulated DVH using an adult female XCAT model was also compared to a clinical one, resulting in differences smaller than 5%. Finally, a realistic pediatric brachytherapy simulation was performed to evaluate the absorbed dose per organ and to calculate the DVH with respect to heterogeneities of the human anatomy. Conclusion: GATE is a reliable tool for brachytherapy simulations, both for source modeling and for dosimetry in anthropomorphic voxelized models.
Our project aims to evaluate a variety of pediatric brachytherapy schemes using a population of pediatric phantoms for several pathological cases. This study is part of a project that has received funding from the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 691203. The results published in this study reflect only the authors' view, and the Research Executive Agency (REA) and the European Commission are not responsible for any use that may be made of the information it contains.
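The three quantities validated above (Λ, g_L(r), F(r,θ)) are the ingredients of the AAPM TG-43 dose-rate formalism, which can be sketched in its point-source approximation. The Λ value is typical for an HDR 192Ir source, but the g and F fits below are flat placeholders, not measured data for either simulated source.

```python
import math

# Sketch of the TG-43 point-source dose-rate formalism: air-kerma strength
# S_k times dose rate constant Λ, a 1/r^2 geometry factor, the radial dose
# function g(r) and the anisotropy function F(r,θ). Fits are illustrative.

LAMBDA = 1.108          # dose rate constant Λ, cGy h^-1 U^-1 (typical HDR Ir)

def g(r):               # radial dose function, normalized so g(1 cm) = 1
    return math.exp(-0.006 * (r - 1.0))          # illustrative fit

def F(r, theta):        # anisotropy function, 1 on the transverse axis
    return 1.0 - 0.1 * abs(math.cos(theta))      # illustrative fit

def dose_rate(S_k, r, theta=math.pi / 2):
    """Dose rate at (r in cm, theta) in cGy/h for air-kerma strength S_k in U."""
    return S_k * LAMBDA * (1.0 / r ** 2) * g(r) * F(r, theta)

# At 1 cm on the transverse axis the dose rate equals S_k * Λ by construction.
print(round(dose_rate(40000.0, 1.0), 1))
```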
NASA Astrophysics Data System (ADS)
Tecklenburg, Jan; Neuweiler, Insa; Dentz, Marco; Carrera, Jesus; Geiger, Sebastian
2013-04-01
Flow processes in geotechnical applications often take place in highly heterogeneous porous media, such as fractured rock. Since, in this type of media, classical modelling approaches are problematic, flow and transport are often modelled using multi-continua approaches. From such approaches, multirate mass transfer (mrmt) models can be derived to describe the flow and transport in the "fast" or mobile zone of the medium. The porous medium is then modelled with one mobile zone and multiple immobile zones, where the immobile zones are connected to the mobile zone by single-rate mass transfer. We proceed from an mrmt model for immiscible displacement of two fluids, where the Buckley-Leverett equation is expanded by a sink-source term which is nonlocal in time. This sink-source term models exchange with an immobile zone, with mass transfer driven by capillary diffusion. This nonlinear diffusive mass transfer can be approximated for particular imbibition or drainage cases by a linear process. We present a numerical scheme for this model together with simulation results for a single-fracture test case. We solve the mrmt model with the finite volume method and explicit time integration. The sink-source term is transformed into multiple single-rate mass transfer processes, as shown by Carrera et al. (1998), to make it local in time. With numerical simulations we studied immiscible displacement in a single-fracture test case. To do this we calculated the flow parameters using information about the geometry and the integral solution for two-phase flow by McWhorter and Sunada (1990). Comparison with the results of the full two-dimensional two-phase flow model by Flemisch et al. (2011) shows good agreement in the saturation breakthrough curves. Carrera, J., Sanchez-Vila, X., Benet, I., Medina, A., Galarza, G., and Guimera, J.: On matrix diffusion: formulations, solution methods and qualitative effects, Hydrogeology Journal, 6, 178-190, 1998.
Flemisch, B., Darcis, M., Erbertseder, K., Faigle, B., Lauser, A. et al.: Dumux: Dune for multi-{Phase, Component, Scale, Physics, ...} flow and transport in porous media, Advances in Water Resources, 34, 1102-1112, 2011. McWhorter, D. B., and Sunada, D. K.: Exact integral solutions for two-phase flow, Water Resources Research, 26(3), 399-413, 1990.
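The key trick described above, replacing the time-nonlocal sink-source term by several single-rate exchanges, can be sketched as follows. Rates and capacity coefficients are illustrative, not calibrated to any fracture geometry, and advection is omitted to isolate the exchange step.

```python
# Sketch of the multirate mass transfer (mrmt) localization (after Carrera
# et al., 1998): N immobile zones, each coupled to the mobile zone by a
# first-order exchange, make the update local in time.

def mrmt_step(c_m, c_im, rates, caps, dt):
    """One explicit exchange step (advection omitted for clarity)."""
    exchange = 0.0
    c_im_new = []
    for c, k, beta in zip(c_im, rates, caps):
        flux = k * (c_m - c)              # single-rate mass transfer
        c_im_new.append(c + dt * flux)
        exchange += beta * flux           # capacity-weighted sink on mobile
    return c_m - dt * exchange, c_im_new

# Saturated mobile zone draining into three initially empty immobile zones.
c_m, c_im = 1.0, [0.0, 0.0, 0.0]
rates, caps = [5.0, 0.5, 0.05], [0.2, 0.2, 0.2]    # fast to slow zones
for _ in range(100):
    c_m, c_im = mrmt_step(c_m, c_im, rates, caps, dt=0.01)
print(round(c_m, 3), [round(c, 3) for c in c_im])
```

Mass is conserved exactly in this scheme (mobile content plus capacity-weighted immobile content is constant), and the spread of rates reproduces the long memory that a single exchange rate cannot.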
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and on the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model, and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity.
It has been shown to improve the estimated source location by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.
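The two-stage idea, a crude first guess followed by iterative refinement against a cheap forward model, can be illustrated with a toy source-strength inversion. The 1/r² surrogate model and all numbers below are invented for illustration; VIRSA itself uses back-trajectories, a Gaussian puff model, and a formal adjoint of its surrogate.

```python
# Toy analogue of first-guess-plus-refinement source term estimation.

def forward(q, distances):
    """Surrogate forward model: sensor reading = q / r^2."""
    return [q / d ** 2 for d in distances]

def refine(q0, distances, observed, lr=5e7, iters=200):
    """Gradient descent on the least-squares misfit J(q)."""
    q = q0
    for _ in range(iters):
        resid = [m - o for m, o in zip(forward(q, distances), observed)]
        grad = sum(r / d ** 2 for r, d in zip(resid, distances))   # dJ/dq
        q -= lr * grad
    return q

distances = [100.0, 200.0, 400.0]          # sensor ranges, m
observed = forward(250.0, distances)       # synthetic truth: q = 250
q_first_guess = observed[0] * distances[0] ** 2 * 0.5   # deliberately poor
q_hat = refine(q_first_guess, distances, observed)
print(round(q_hat, 1))                     # converges to 250.0
```

The refinement contracts the error by a constant factor per iteration here; the real algorithm does the analogous thing jointly over source parameters and low-level winds.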
NASA Astrophysics Data System (ADS)
Parajuli, Sagar Prasad; Yang, Zong-Liang; Lawrence, David M.
2016-06-01
Large amounts of mineral dust are injected into the atmosphere during dust storms, which are common in the Middle East and North Africa (MENA) where most of the global dust hotspots are located. In this work, we present simulations of dust emission using the Community Earth System Model Version 1.2.2 (CESM 1.2.2) and evaluate how well it captures the spatio-temporal characteristics of dust emission in the MENA region, with a focus on large-scale dust storm mobilization. We explicitly focus our analysis on the model's two major input parameters that affect the vertical mass flux of dust: surface winds and the soil erodibility factor. We analyze dust emissions in simulations with both prognostic CESM winds and with CESM winds that are nudged towards ERA-Interim reanalysis values. Simulations with three existing erodibility maps and a new observation-based erodibility map are also conducted. We compare the simulated results with MODIS satellite data, MACC reanalysis data, AERONET station data, and CALIPSO 3D aerosol profile data. The dust emission simulated by CESM, when driven by nudged reanalysis winds, compares reasonably well with observations on daily to monthly time scales despite CESM being a global General Circulation Model. However, considerable bias exists around known high dust source locations in northwest/northeast Africa and over the Arabian Peninsula, where recurring large-scale dust storms are common. The new observation-based erodibility map, which can represent anthropogenic dust sources that are not directly represented by existing erodibility maps, shows improved performance in terms of the simulated dust optical depth (DOD) and aerosol optical depth (AOD) compared to existing erodibility maps, although the performance of different erodibility maps varies by region.
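The role of the two inputs the analysis focuses on can be illustrated with the generic form such emission schemes take: the vertical dust flux scales with an erodibility factor and a strong power of the surface wind above a threshold. This is a simplified Marticorena/Bergametti-style dependence with invented constants, not the CESM scheme itself.

```python
# Illustrative vertical dust mass flux: erodibility times a cubic-like wind
# dependence above a threshold friction wind (all constants invented).

def dust_flux(u10, erodibility, u_thresh=6.0, c=1e-9):
    """Vertical dust mass flux (kg m^-2 s^-1), illustrative only."""
    if u10 <= u_thresh:
        return 0.0
    return c * erodibility * u10 ** 2 * (u10 - u_thresh)

print(dust_flux(5.0, 0.5))                           # below threshold: zero
print(dust_flux(12.0, 0.5) / dust_flux(9.0, 0.5))    # strong wind sensitivity
```

The steep wind dependence is why nudging the winds toward reanalysis matters so much, and the multiplicative erodibility factor is why the choice of erodibility map directly rescales emissions around the hotspots.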
Simulation for learning and teaching procedural skills: the state of the science.
Nestel, Debra; Groom, Jeffrey; Eikeland-Husebø, Sissel; O'Donnell, John M
2011-08-01
Simulation is increasingly used to support learning of procedural skills. Our panel was tasked with summarizing the "best evidence." We addressed the following question: To what extent does simulation support learning and teaching in procedural skills? We conducted a literature search from 2000 to 2010 using Medline, CINAHL, ERIC, and PSYCHINFO databases. Inclusion criteria were established and then data extracted from abstracts according to several categories. Although secondary sources of literature were sourced from key informants and participants at the "Research Consensus Summit: State of the Science," they were not included in the data extraction process but were used to inform discussion. Eighty-one of 1,575 abstracts met inclusion criteria. The uses of simulation for learning and teaching procedural skills were diverse. The most commonly reported simulator type was manikins (n = 17), followed by simulated patients (n = 14), anatomic simulators (eg, part-task) (n = 12), and others. For research design, most abstracts (n = 52) were at Level IV of the National Health and Medical Research Council classification (ie, case series, posttest, or pretest/posttest, with no control group, narrative reviews, and editorials). The most frequent Best Evidence Medical Education ranking was for conclusions probable (n = 37). Using the modified Kirkpatrick scale for impact of educational intervention, the most frequent classification was for modification of knowledge and/or skills (Level 2b) (n = 52). Abstracts assessed skills (n = 47), knowledge (n = 32), and attitude (n = 15) with the majority demonstrating improvements after simulation-based interventions. Studies focused on immediate gains and skills assessments were usually conducted in simulation. The current state of the science finds that simulation usually leads to improved knowledge and skills. Learners and instructors express high levels of satisfaction with the method. 
While most studies focus on short-term gains attained in the simulation setting, a small number support the transfer of simulation learning to clinical practice. Further study is needed to optimize the alignment of learner, instructor, simulator, setting, and simulation for learning and teaching procedural skills. Instructional design and educational theory, contextualization, transferability, accessibility, and scalability must all be considered in simulation-based education programs. More consistently, robust research designs are required to strengthen the evidence.
NASA Astrophysics Data System (ADS)
Paramasivan, K.; Das, Sandip; Marimuthu, Sundar; Misra, Dipten
2018-06-01
The aim of this experimental study is to identify and characterize the response, in terms of bending angle, to the process parameters in micro-bending of AISI 304 sheet using a low-power Nd:YVO4 laser source. Numerical simulation is also carried out through a coupled thermo-mechanical formulation with the finite element method using COMSOL MULTIPHYSICS. The numerical simulation indicates that bending is caused by the temperature gradient mechanism in the present laser micro-bending investigation. The experimental results indicate that the bending angle increases with laser power and number of irradiations, and decreases with increasing scanning speed. Moreover, the average bending angle increases with the number of laser passes, while the edge effect, defined in terms of relative variation of bending angle (RBAV), decreases monotonically with the number of laser scans. The substrate is damaged over a width of about 80 μm due to the high temperatures experienced during laser forming at a low scanning speed.
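The reported trends are consistent with Vollertsen's analytical estimate for the temperature gradient mechanism, bend angle ≈ 3·a_th·A·P/(ρ·cp·v·h²), which grows with power and falls with scanning speed. This is a textbook order-of-magnitude relation (correct up to an O(1) factor), not the paper's COMSOL model; the property values roughly approximate AISI 304 and the absorptivity is an assumption.

```python
# Vollertsen-type estimate of laser bending angle per pass (temperature
# gradient mechanism): a_th thermal expansion, A absorptivity, P power,
# v scan speed, h sheet thickness. Values are illustrative.

def bend_angle_rad(P, v, h, A=0.3, a_th=1.7e-5, rho=7900.0, cp=500.0):
    """Estimated bending angle per pass, in radians."""
    return 3.0 * a_th * A * P / (rho * cp * v * h * h)

# 2 W at 50 mm/s on a 100 um sheet: a fraction of a degree per pass
print(round(bend_angle_rad(2.0, 0.05, 1e-4), 4))
```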
Momentum and Heat Transfer Models for Detonation in Nitromethane with Metal Particles
NASA Astrophysics Data System (ADS)
Ripley, Robert; Zhang, Fan; Lien, Fue-Sang
2009-06-01
Models for momentum and heat exchange have been derived from the results of previous 3D mesoscale simulations of detonation in packed aluminum particles saturated with nitromethane, where the shock interaction timescale was resolved. In these models, particle acceleration and heating within the shock and detonation zone have been expressed in terms of velocity and temperature transmission factors, which are a function of metal to explosive density ratio, metal volume fraction and ratio of particle size to detonation zone thickness. These models are incorporated as source terms in the governing equations for continuum dense two-phase flow, and macroscopic simulation is then applied to detonation of nitromethane/aluminum in lightly-cased cylinders. Heterogeneous detonation features such as velocity deficit, enhanced pressure, and critical diameter effects are reproduced. Various spherical particle diameters from 3-30 μm are utilized, where most of the particles react in the expanding detonation products. Results for detonation velocity, pressure history, failure and U-shaped critical diameter behavior are compared to existing experiments.
Momentum and Heat Transfer Models for Detonation in Nitromethane with Metal Particles
NASA Astrophysics Data System (ADS)
Ripley, R. C.; Zhang, F.; Lien, F.-S.
2009-12-01
Models for momentum and heat exchange have been derived from the results of previous 3D mesoscale simulations of detonation in packed aluminum particles saturated with nitromethane, where the shock interaction timescale was resolved. In these models, particle acceleration and heating within the shock and detonation zone are expressed in terms of velocity and temperature transmission factors, which are a function of the metal to explosive density ratio, solid volume fraction and ratio of particle size to detonation zone thickness. These models are incorporated as source terms in the governing equations for continuum dense two-phase flow, and then applied to macroscopic simulation of detonation of nitromethane/aluminum in lightly-cased cylinders. Heterogeneous detonation features such as velocity deficit, enhanced pressure, and critical diameter effects are demonstrated. Various spherical particle diameters from 3-350 μm are utilized where most of the particles react in the expanding detonation products. Results for detonation velocity, pressure history, failure and U-shaped critical diameter behavior are compared to existing experiments.
Development and Use of an Open-Source, User-Friendly Package to Simulate Voltammetry Experiments
ERIC Educational Resources Information Center
Wang, Shuo; Wang, Jing; Gao, Yanjing
2017-01-01
An open-source electrochemistry simulation package has been developed that simulates the electrode processes of four reaction mechanisms and two typical electroanalysis techniques: cyclic voltammetry and chronoamperometry. Unlike other open-source simulation software, this package balances the features with ease of learning and implementation and…
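The kind of electrode-process simulation such a package performs can be sketched with the standard explicit finite-difference scheme for a cyclic voltammogram of a reversible one-electron couple (dimensionless textbook formulation; grid sizes are illustrative and this is not the package's own code).

```python
import math

def cv_peak_current(n_x=200, n_t=2000, d_m=0.45):
    """Dimensionless peak current from an explicit FD cyclic voltammogram."""
    c = [1.0] * n_x                       # concentration of oxidized species
    peak = 0.0
    half = n_t // 2
    for j in range(n_t):
        # triangular potential sweep: theta from +10 to -10 and back
        if j < half:
            theta = 10.0 - 20.0 * j / half
        else:
            theta = -10.0 + 20.0 * (j - half) / half
        c[0] = 1.0 / (1.0 + math.exp(-theta))    # Nernstian surface condition
        new = c[:]
        for i in range(1, n_x - 1):
            new[i] = c[i] + d_m * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        c = new                                  # outer node stays at bulk
        peak = max(peak, c[1] - c[0])            # surface flux ~ current
    return peak

print(round(cv_peak_current(), 3))
```

The current rises as the surface concentration is driven down and then falls as the diffusion layer depletes, which is what produces the characteristic voltammetric peak.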
NASA Astrophysics Data System (ADS)
Heimann, F. U. M.; Rickenmann, D.; Turowski, J. M.; Kirchner, J. W.
2015-01-01
Especially in mountainous environments, the prediction of sediment dynamics is important for managing natural hazards, assessing in-stream habitats and understanding geomorphic evolution. We present the new modelling tool sedFlow for simulating fractional bedload transport dynamics in mountain streams. sedFlow is a one-dimensional model that aims to realistically reproduce the total transport volumes and overall morphodynamic changes resulting from sediment transport events such as major floods. The model is intended for temporal scales from the individual event (several hours to a few days) up to longer-term evolution of stream channels (several years). The envisaged spatial scale covers complete catchments at a spatial discretisation of several tens of metres to a few hundreds of metres. sedFlow can deal with the effects of streambeds that slope uphill in a downstream direction and uses recently proposed and tested approaches for quantifying macro-roughness effects in steep channels. sedFlow offers different options for bedload transport equations, flow-resistance relationships and other elements, which can be selected to fit the current application in a particular catchment. Local grain-size distributions are dynamically adjusted according to the transport dynamics of each grain-size fraction. sedFlow features fast calculations and straightforward pre- and postprocessing of simulation data. The high simulation speed allows for simulations of several years, which can be used, e.g., to assess the long-term impact of river engineering works or climate change effects. In combination with the straightforward pre- and postprocessing, the fast calculations facilitate efficient workflows for the simulation of individual flood events, because the modeller receives immediate results as direct feedback to the selected parameter inputs.
The model is provided together with its complete source code free of charge under the terms of the GNU General Public License (GPL) (www.wsl.ch/sedFlow). Examples of the application of sedFlow are given in a companion article by Heimann et al. (2015).
NASA Astrophysics Data System (ADS)
Lucchi, M.; Lorenzini, M.; Valdiserri, P.
2017-01-01
This work presents a numerical simulation of the annual performance of two different systems: a traditional one composed of a gas boiler-chiller pair, and one consisting of a ground source heat pump (GSHP), both coupled to two thermal storage tanks. The systems serve a block of flats located in northern Italy and are assessed over a typical weather year, covering both the heating and cooling seasons. The air handling unit (AHU) coupled with the GSHP exhibits excellent characteristics in terms of temperature control and has high performance parameters (EER and COP), which make its operating costs about 30% lower than those estimated for the traditional plant.
Design and simulation of ion optics for ion sources for production of singly charged ions
NASA Astrophysics Data System (ADS)
Zelenak, A.; Bogomolov, S. L.
2004-05-01
During the last two years, different types of singly charged ion sources were developed for new FLNR (JINR) projects such as the Dubna radioactive ion beams (Phase I and Phase II), the production of the tritium ion beam and the MASHA mass separator. Ion optics simulations were performed for the 2.45 GHz electron cyclotron resonance source, the rf source and the plasma ion source. In this article the design and simulation results for the optics of the new ion sources are presented. The simulation results are compared with measurements obtained during the experiments.
Fischell, Erin M; Schmidt, Henrik
2015-12-01
One of the long-term goals of autonomous underwater vehicle (AUV) minehunting is to have multiple inexpensive AUVs in a harbor autonomously classify hazards. Existing acoustic methods for target classification using AUV-based sensing, such as sidescan and synthetic aperture sonar, require an expensive payload on each outfitted vehicle and post-processing and/or image interpretation. A vehicle payload and machine learning classification methodology using the bistatic angle dependence of target scattering amplitudes between a fixed acoustic source and target has been developed for onboard, fully autonomous classification with lower cost per vehicle. To achieve the high-quality, densely sampled three-dimensional (3D) bistatic scattering data required by this research, vehicle sampling behaviors and an acoustic payload for precision timed data acquisition with a 16-element nose array were demonstrated. 3D bistatic scattered field data were collected by an AUV around spherical and cylindrical targets insonified by a 7-9 kHz fixed source. The collected data were compared to simulated scattering models. Classification and confidence estimation were shown for the sphere versus cylinder case on the resulting real and simulated bistatic amplitude data. The final models were used for classification of simulated targets in real time in the LAMSS MOOS-IvP simulation package [M. Benjamin, H. Schmidt, P. Newman, and J. Leonard, J. Field Rob. 27, 834-875 (2010)].
Rothman, Jason S.; Silver, R. Angus
2018-01-01
Acquisition, analysis and simulation of electrophysiological properties of the nervous system require multiple software packages. This makes it difficult to conserve experimental metadata and track the analysis performed. It also complicates certain experimental approaches such as online analysis. To address this, we developed NeuroMatic, an open-source software toolkit that performs data acquisition (episodic, continuous and triggered recordings), data analysis (spike rasters, spontaneous event detection, curve fitting, stationarity) and simulations (stochastic synaptic transmission, synaptic short-term plasticity, integrate-and-fire and Hodgkin-Huxley-like single-compartment models). The merging of a wide range of tools into a single package facilitates a more integrated style of research, from the development of online analysis functions during data acquisition, to the simulation of synaptic conductance trains during dynamic-clamp experiments. Moreover, NeuroMatic has the advantage of working within Igor Pro, a platform-independent environment that includes an extensive library of built-in functions, a history window for reviewing the user's workflow and the ability to produce publication-quality graphics. Since its original release, NeuroMatic has been used in a wide range of scientific studies and its user base has grown considerably. NeuroMatic version 3.0 can be found at http://www.neuromatic.thinkrandom.com and https://github.com/SilverLabUCL/NeuroMatic. PMID:29670519
Electroweak baryogenesis in the exceptional supersymmetric standard model
Chao, Wei
2015-08-28
Here, we study electroweak baryogenesis in the E6-inspired exceptional supersymmetric standard model (E6SSM). The relaxation coefficients driven by singlinos and the new gaugino, as well as the transport equation of the Higgs supermultiplet number density in the E6SSM, are calculated. Our numerical simulation shows that the CP-violating source terms from the singlinos and from the new gaugino can each, on its own, give rise to the correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.
Development of a Chemically Reacting Flow Solver on the Graphic Processing Units
2011-05-10
been implemented on the GPU by Schive et al. (2010). The outcome of their work is the GAMER code for astrophysical simulation. Thibault and … Euler equations at each cell. For simplification, consider the Euler equations in one dimension with no source terms; the discretized form of the … is known to be more diffusive than the other fluxes due to the large bound of the numerical signal velocities: b+, b-. 3.4 Time Marching Methods
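The snippet above mentions the discretized 1D Euler equations with no source terms and a flux whose extra diffusion comes from the bounds b+, b- on the numerical signal velocities; that description matches a standard HLL-type flux. A minimal sketch, not the report's code (function and variable names are ours):

```python
import numpy as np

def euler_flux(U, gamma=1.4):
    """Physical flux F(U) for the 1D Euler equations, U = [rho, rho*u, E]."""
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

def hll_flux(UL, UR, gamma=1.4):
    """HLL numerical flux with simple signal-velocity bounds b-, b+."""
    def wave_speeds(U):
        rho, mom, E = U
        u = mom / rho
        p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
        a = np.sqrt(gamma * p / rho)  # sound speed
        return u - a, u + a

    sLm, sLp = wave_speeds(UL)
    sRm, sRp = wave_speeds(UR)
    bm = min(sLm, sRm, 0.0)  # b-
    bp = max(sLp, sRp, 0.0)  # b+
    FL, FR = euler_flux(UL, gamma), euler_flux(UR, gamma)
    # A large spread between b+ and b- is what makes this flux diffusive,
    # as the text notes.
    return (bp * FL - bm * FR + bp * bm * (UR - UL)) / (bp - bm)
```

For identical left and right states the numerical flux reduces to the physical flux, the usual consistency check for such schemes.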
Large Eddy Simulations of Supercritical Mixing Layers for Air Force Applications
2010-05-01
Taskinoglu and J. Bellan. [Species-property table: m (g mol-1), Tc (K), Pc (MPa): N2 28.013, 126.3, 3.399; C7H16 100.205 …] … equation is that its source term, called the irreversible entropy production, which is by definition the dissipation [3], contains the full extent … with similar or even larger gradient magnitudes under fully turbulent conditions (the experimental data was for Re = O(10^4)-O(10^5)). Thus
High Resolution WENO Simulation of 3D Detonation Waves
2012-02-27
pocket behind the detonation front was not observed in their results because the rotating transverse detonation completely consumed the unburned gas. Dou … three-dimensional detonations. We add source terms (functions of x, y, z and t) to the PDE system so that the following functions are exact solutions to … detonation rotates counter-clockwise, opposite to that in [48]. It can be seen that the triple lines and transverse waves collide with the walls, and strong
Probabilistic Model for Laser Damage to the Human Retina
2012-03-01
the beam. Power density may be measured in radiant exposure, J/cm2, or by irradiance, W/cm2. In the experimental database used in this study and … to quantify a binary response, either lethal or non-lethal, within a population such as insects or rats. In directed energy research, probit … value of the normalized Arrhenius damage integral. In a one-dimensional simulation, the source term is determined as a spatially averaged irradiance (W
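The normalized Arrhenius damage integral mentioned in the snippet accumulates a temperature-dependent reaction rate over the exposure. A minimal sketch; the constants A and Ea/R below are assumptions of a typical order of magnitude for thermal damage models, not values from this report:

```python
import numpy as np

A = 3.1e99           # frequency factor (1/s), assumed for illustration
EA_OVER_R = 75000.0  # activation energy over gas constant (K), assumed

def damage_integral(temps_kelvin, dt):
    """Omega = integral over the exposure of A * exp(-Ea / (R * T(t))) dt.

    temps_kelvin : absolute tissue temperatures at uniform time steps (K)
    dt           : time step (s)
    Omega >= 1 is the usual criterion for predicting thermal damage.
    """
    temps = np.asarray(temps_kelvin, dtype=float)
    rates = A * np.exp(-EA_OVER_R / temps)
    return float(np.sum(rates) * dt)
```

Because the rate is exponential in temperature, a few degrees of extra heating dominate the accumulated damage, which is why the source term (the absorbed irradiance) matters so much in such simulations.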
NOTE: Implementation of angular response function modeling in SPECT simulations with GATE
NASA Astrophysics Data System (ADS)
Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.
2010-05-01
Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
Extended lattice Boltzmann scheme for droplet combustion.
Ashna, Mostafa; Rahimian, Mohammad Hassan; Fakhari, Abbas
2017-05-01
The available lattice Boltzmann (LB) models for combustion or phase change are focused on either single-phase flow combustion or two-phase flow with evaporation assuming a constant density for both liquid and gas phases. To pave the way towards simulation of spray combustion, we propose a two-phase LB method for modeling combustion of liquid fuel droplets. We develop an LB scheme to model phase change and combustion by taking into account the density variation in the gas phase and accounting for the chemical reaction based on the Cahn-Hilliard free-energy approach. Evaporation of liquid fuel is modeled by adding a source term, which is due to the divergence of the velocity field being nontrivial, in the continuity equation. The low-Mach-number approximation in the governing Navier-Stokes and energy equations is used to incorporate source terms due to heat release from chemical reactions, density variation, and nonluminous radiative heat loss. Additionally, the conservation equation for chemical species is formulated by including a source term due to chemical reaction. To validate the model, we consider the combustion of n-heptane and n-butanol droplets in stagnant air using overall single-step reactions. The diameter history and flame standoff ratio obtained from the proposed LB method are found to be in good agreement with available numerical and experimental data. The present LB scheme is believed to be a promising approach for modeling spray combustion.
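Droplet diameter histories of the kind validated above are classically compared against the d2-law, D(t)^2 = D0^2 - K t. A minimal sketch; the burning-rate constant below is illustrative, not a value from the paper:

```python
import numpy as np

def d2_law_diameter(D0, K, t):
    """Classical d^2-law diameter history for an evaporating/burning droplet.

    D0 : initial diameter (m)
    K  : burning-rate constant (m^2/s), fuel- and condition-dependent
    Returns the diameter at time(s) t; zero once the droplet is consumed.
    """
    d2 = np.maximum(D0**2 - K * np.asarray(t, dtype=float), 0.0)
    return np.sqrt(d2)

def lifetime(D0, K):
    """Droplet lifetime predicted by the d^2 law."""
    return D0**2 / K
```

For a 100-micron droplet with an assumed K of about 0.8 mm^2/s, the predicted lifetime is on the order of ten milliseconds, which is the kind of history the flame standoff ratio is measured against.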
Arctic Ocean Freshwater: How Robust are Model Simulations
NASA Technical Reports Server (NTRS)
Jahn, A.; Aksenov, Y.; deCuevas, B. A.; deSteur, L.; Haekkinen, S.; Hansen, E.; Herbaut, C.; Houssais, M.-N.; Karcher, M.; Kauker, F.;
2012-01-01
The Arctic freshwater (FW) has been the focus of many modeling studies, due to the potential impact of Arctic FW on the deep water formation in the North Atlantic. A comparison of the hindcasts from ten ocean-sea ice models shows that the simulation of the Arctic FW budget is quite different in the investigated models. While they agree on the general sink and source terms of the Arctic FW budget, the long-term means as well as the variability of the FW export vary among models. The best model-to-model agreement is found for the interannual and seasonal variability of the solid FW export and the solid FW storage, which also agree well with observations. For the interannual and seasonal variability of the liquid FW export, the agreement among models is better for the Canadian Arctic Archipelago (CAA) than for Fram Strait. The reason for this is that models are more consistent in simulating volume flux anomalies than salinity anomalies and volume-flux anomalies dominate the liquid FW export variability in the CAA but not in Fram Strait. The seasonal cycle of the liquid FW export generally shows a better agreement among models than the interannual variability, and compared to observations the models capture the seasonality of the liquid FW export rather well. In order to improve future simulations of the Arctic FW budget, the simulation of the salinity field needs to be improved, so that model results on the variability of the liquid FW export and storage become more robust.
The systems biology simulation core algorithm
2013-01-01
Background With the increasing availability of high dimensional time course data for metabolites, genes, and fluxes, the mathematical description of dynamical systems has become an essential aspect of research in systems biology. Models are often encoded in formats such as SBML, whose structure is very complex and difficult to evaluate due to many special cases. Results This article describes an efficient algorithm to solve SBML models that are interpreted in terms of ordinary differential equations. We begin our consideration with a formal representation of the mathematical form of the models and explain all parts of the algorithm in detail, including several preprocessing steps. We provide a flexible reference implementation as part of the Systems Biology Simulation Core Library, a community-driven project providing a large collection of numerical solvers and a sophisticated interface hierarchy for the definition of custom differential equation systems. To demonstrate the capabilities of the new algorithm, it has been tested with the entire SBML Test Suite and all models of BioModels Database. Conclusions The formal description of the mathematics behind the SBML format facilitates the implementation of the algorithm within specifically tailored programs. The reference implementation can be used as a simulation backend for Java™-based programs. Source code, binaries, and documentation can be freely obtained under the terms of the LGPL version 3 from http://simulation-core.sourceforge.net. Feature requests, bug reports, contributions, or any further discussion can be directed to the mailing list simulation-core-development@lists.sourceforge.net. PMID:23826941
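As a sketch of what "interpreting an SBML model in terms of ordinary differential equations" amounts to, consider a toy mass-action chain A -> B -> C handed to a numerical solver. The library itself is Java-based; the Python below, with illustrative rate constants, only stands in for that workflow:

```python
from scipy.integrate import solve_ivp

# Toy reaction network A -k1-> B -k2-> C under mass-action kinetics,
# standing in for an SBML model interpreted as an ODE system.
# Rate constants and initial amounts are illustrative.
k1, k2 = 1.0, 0.5

def rhs(t, y):
    A, B, C = y
    return [-k1 * A,          # A consumed by the first reaction
            k1 * A - k2 * B,  # B produced, then consumed
            k2 * B]           # C accumulates

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
final = sol.y[:, -1]  # species amounts at t = 10
```

Total mass is conserved by construction, which is the kind of invariant a test suite such as the SBML Test Suite checks a solver against.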
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field-of-view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement to evaluate the imaging performance of such systems. The point-spread function (PSF) is an important means of evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and its results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom of polydimethylsiloxane resin, doped with polystyrene microspheres of different sizes, is designed. The PSF of the CRM is measured with the different microsphere sizes and the results are compared with the simulation results. The results provide a guide for measuring the PSF of a CRM.
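The effect under study can be sketched numerically: if the lateral PSF is approximated as Gaussian, imaging a bead of finite size records the PSF convolved with the bead's projected profile, so the apparent full width at half maximum grows with bead diameter. All widths below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked profile."""
    half = y.max() / 2.0
    above = x[y >= half]
    return above[-1] - above[0]

x = np.linspace(-2.0, 2.0, 4001)   # lateral position (micrometres)
dx = x[1] - x[0]
sigma_psf = 0.15                    # assumed true PSF width (micrometres)
psf = np.exp(-x**2 / (2.0 * sigma_psf**2))

widths = {}
for bead_diameter in (0.05, 0.2, 0.5):
    r = bead_diameter / 2.0
    # projected (chord-length) intensity profile of a uniform sphere of radius r
    proj = np.where(np.abs(x) <= r, np.sqrt(np.maximum(r**2 - x**2, 0.0)), 0.0)
    measured = np.convolve(psf, proj, mode="same") * dx
    widths[bead_diameter] = fwhm(x, measured)
```

A bead much smaller than the PSF barely perturbs the measured width, while larger beads broaden it noticeably, which is why the point source size must be chosen with care.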
21 CFR 352.71 - Light source (solar simulator).
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 5 2013-04-01 2013-04-01 false Light source (solar simulator). 352.71 Section 352... Procedures § 352.71 Light source (solar simulator). A solar simulator used for determining the SPF of a... of its total energy output contributed by nonsolar wavelengths shorter than 290 nanometers; and it...
21 CFR 352.71 - Light source (solar simulator).
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 5 2014-04-01 2014-04-01 false Light source (solar simulator). 352.71 Section 352.71 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... Procedures § 352.71 Light source (solar simulator). A solar simulator used for determining the SPF of a...
21 CFR 352.71 - Light source (solar simulator).
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 5 2012-04-01 2012-04-01 false Light source (solar simulator). 352.71 Section 352.71 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED... Procedures § 352.71 Light source (solar simulator). A solar simulator used for determining the SPF of a...
Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried D.; Einaudi, Franco (Technical Monitor)
2001-01-01
Numerous studies suggest that local feedback of surface evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote geographic sources of surface evaporation for precipitation, based on the implementation of three-dimensional constituent tracers of regional water vapor sources (termed water vapor tracers, WVT) in a general circulation model. The major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In the WVT approach, each tracer is associated with an evaporative source region for a prognostic three-dimensional variable that represents a partial amount of the total atmospheric water vapor. The physical processes that act on a WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be predicted within the model simulation, and can be validated against the model's prognostic water vapor. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional sources, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly. 
In most North American continental regions, the local source of precipitation is correlated with total precipitation. There is a general positive correlation between local evaporation and local precipitation, but it can be weaker because large evaporation can occur when precipitation is inhibited. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
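The proportionality rule described above, where each physical process acts on a tracer in proportion to that tracer's share of the total water vapor while each tracer gains only the evaporation from its own source region, can be sketched as follows. This is a schematic of the bookkeeping, not the GCM's code:

```python
import numpy as np

def step_tracers(q_total, q_tracers, dq_physics, local_evap, dt):
    """One schematic time step of water-vapor-tracer (WVT) bookkeeping.

    q_total    : total prognostic water vapor (kg/kg), scalar at one grid point
    q_tracers  : tracer mixing ratios, one per evaporative source region
    dq_physics : total tendency from sinks such as condensation (kg/kg/s, <= 0)
    local_evap : evaporative source tendency attributed to each region (kg/kg/s)
    Sinks act on each tracer in proportion to its share of the total vapor;
    each tracer receives only the evaporation from its own source region.
    """
    share = np.where(q_total > 0, q_tracers / q_total, 0.0)
    q_new = q_tracers + dt * (share * dq_physics + local_evap)
    return np.maximum(q_new, 0.0)
```

With this construction the tracers sum to the total water vapor tendency, which is exactly the consistency check the authors use to validate the WVTs against the model's prognostic water vapor.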
Estimating Sources and Fluxes of Dissolved and Particulate Organic Matter in UK Rivers
NASA Astrophysics Data System (ADS)
Adams, Jessica; Tipping, Edward; Quinton, John; Old, Gareth
2014-05-01
Over the past two centuries, pools and fluxes of carbon, nitrogen and phosphorus in UK ecosystems have been altered by the intensification of agriculture, land use change and atmospheric pollution, leading to acidification and eutrophication of surface waters. In addition, climate change is now predicted to substantially impact these systems. The CEH Long Term Large Scale (LTLS) project therefore aims to simulate the pools and fluxes of carbon, nitrogen and phosphorus, and their stoichiometry, during the cycling process. Through the N14C model, the release of C, N and P through drainage water and erosion processes will be simulated using historical climate data and tested against contemporary data. For the present data, water from four UK catchments (Ribble, Wiltshire Avon, Conwy, Dee) was collected at the tidal limit of each river, including a combination of high- and low-flow samples predicted using 5-day forecasts and local weather station data. These samples were filtered, centrifuged and sent to the NERC radiocarbon facility for analysis by accelerator mass spectrometry (AMS) to obtain both PO14C and DO14C data. Radiocarbon enables a unique and dynamic way of estimating long-term turnover rates of organic matter, and has proven to be an invaluable tool for measuring upland terrestrial and aquatic systems. It has, however, been scarcely used in larger, lowland river systems. Since the riverine organic matter captured is likely to have originated from terrestrial and riparian sources, the radiocarbon data will be a rigorous test of the model's ability to simulate the coupling of erosion and leaching processes, and the stoichiometric relationships between C, N and P.
Unsteady, Cooled Turbine Simulation Using a PC-Linux Analysis System
NASA Technical Reports Server (NTRS)
List, Michael G.; Turner, Mark G.; Chen, Jen-Ping; Remotigue, Michael G.; Veres, Joseph P.
2004-01-01
The first stage of the high-pressure turbine (HPT) of the GE90 engine was simulated with a three-dimensional unsteady Navier-Stokes solver, MSU Turbo, which uses source terms to simulate the cooling flows. In addition to the solver, its pre-processor, GUMBO, and a post-processing and visualization tool, Turbomachinery Visual3 (TV3), were run in a Linux environment to carry out the simulation and analysis. The solver was run both with and without cooling. The introduction of cooling flow on the blade surfaces, case and hub, and its effects on rotor-vane interaction as well as on the blades themselves, were the principal motivations for this study. The studies of the cooling flow show the large amount of unsteadiness in the turbine and the corresponding hot streak migration phenomenon. This research on the GE90 turbomachinery has also led to a procedure for running unsteady, cooled turbine analyses on commodity PCs running the Linux operating system.
Multi-dimensional Core-Collapse Supernova Simulations with Neutrino Transport
NASA Astrophysics Data System (ADS)
Pan, Kuo-Chuan; Liebendörfer, Matthias; Hempel, Matthias; Thielemann, Friedrich-Karl
We present multi-dimensional core-collapse supernova simulations using the Isotropic Diffusion Source Approximation (IDSA) for the neutrino transport and a modified potential for general relativity in two different supernova codes: FLASH and ELEPHANT. Due to the complexity of the core-collapse supernova explosion mechanism, simulations require not only high-performance computers and the exploitation of GPUs, but also sophisticated approximations to capture the essential microphysics. We demonstrate that the IDSA is an elegant and efficient neutrino radiation transfer scheme, which is portable to multiple hydrodynamics codes and fast enough to investigate long-term evolutions in two and three dimensions. Simulations with a 40 solar mass progenitor are presented in both FLASH (1D and 2D) and ELEPHANT (3D) as an extreme test condition. It is found that the black hole formation time is delayed in multiple dimensions and we argue that the strong standing accretion shock instability before black hole formation will lead to strong gravitational waves.
Pryor, Alan; Ophus, Colin; Miao, Jianwei
2017-10-25
Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. In this paper, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000× for PRISM and 15× for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic, using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic.
Pryor, Alan; Ophus, Colin; Miao, Jianwei
2017-01-01
Simulation of atomic-resolution image formation in scanning transmission electron microscopy can require significant computation times using traditional methods. A recently developed method, termed plane-wave reciprocal-space interpolated scattering matrix (PRISM), demonstrates potential for significant acceleration of such simulations with negligible loss of accuracy. Here, we present a software package called Prismatic for parallelized simulation of image formation in scanning transmission electron microscopy (STEM) using both the PRISM and multislice methods. By distributing the workload between multiple CUDA-enabled GPUs and multicore processors, accelerations as high as 1000 × for PRISM and 15 × for multislice are achieved relative to traditional multislice implementations using a single 4-GPU machine. We demonstrate a potentially important application of Prismatic , using it to compute images for atomic electron tomography at sufficient speeds to include in the reconstruction pipeline. Prismatic is freely available both as an open-source CUDA/C++ package with a graphical user interface and as a Python package, PyPrismatic .
SmartSIM - a virtual reality simulator for laparoscopy training using a generic physics engine.
Khan, Zohaib Amjad; Kamal, Nabeel; Hameed, Asad; Mahmood, Amama; Zainab, Rida; Sadia, Bushra; Mansoor, Shamyl Bin; Hasan, Osman
2017-09-01
Virtual reality (VR) training simulators have started playing a vital role in enhancing surgical skills, such as hand-eye coordination in laparoscopy, and in practicing surgical scenarios that cannot easily be created using physical models. We describe a new VR simulator for basic training in laparoscopy, SmartSIM, which has been developed using a generic open-source physics engine called the simulation open framework architecture (SOFA). This paper describes SmartSIM from a systems perspective, including design details of both hardware and software components, while highlighting the critical design decisions. Some of the distinguishing features of SmartSIM include: (i) an easy-to-fabricate custom-built hardware interface; (ii) use of a generic physics engine to facilitate wider accessibility of our work and flexibility in terms of using various graphical modelling algorithms and their implementations; and (iii) an intelligent and smart evaluation mechanism that facilitates unsupervised and independent learning. Copyright © 2016 John Wiley & Sons, Ltd.
Shim, Kyusung; Do, Nhu Tri; An, Beongku
2017-01-01
In this paper, we study the physical layer security (PLS) of opportunistic scheduling for uplink scenarios of multiuser multirelay cooperative networks. To this end, we propose a low-complexity source relay selection scheme with comparable secrecy performance, called the proposed source relay selection (PSRS) scheme. Specifically, the PSRS scheme first selects the least vulnerable source and then selects the relay that maximizes the system secrecy capacity for the given selected source. Additionally, the maximal ratio combining (MRC) and selection combining (SC) techniques are considered at the eavesdropper. Investigating the system performance in terms of secrecy outage probability (SOP), closed-form expressions for the SOP are derived. The developed analysis is corroborated through Monte Carlo simulation. Numerical results show that the PSRS scheme significantly improves the secrecy of the system compared to the random source relay selection scheme, but does not outperform the optimal joint source relay selection (OJSRS) scheme. However, the PSRS scheme drastically reduces the required amount of channel state information (CSI) estimation compared to the OJSRS scheme, especially in dense cooperative networks. PMID:28212286
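The two-stage selection described above can be sketched with the standard secrecy-capacity definition Cs = [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+. This is a schematic under stated assumptions (decode-and-forward relaying with the end-to-end SNR bounded by the weaker hop), not the paper's exact system model:

```python
import numpy as np

def secrecy_capacity(snr_main, snr_eve):
    """Cs = [log2(1 + snr_main) - log2(1 + snr_eve)]^+ (standard definition)."""
    return max(0.0, np.log2(1.0 + snr_main) - np.log2(1.0 + snr_eve))

def psrs_select(snr_se, snr_sr, snr_rd, snr_re):
    """Schematic of the two-stage PSRS selection described in the abstract.

    snr_se : source -> eavesdropper SNRs, shape (S,)
    snr_sr : source -> relay SNRs, shape (S, R)
    snr_rd : relay -> destination SNRs, shape (R,)
    snr_re : relay -> eavesdropper SNRs, shape (R,)
    Stage 1: pick the least vulnerable source (weakest eavesdropper link).
    Stage 2: for that source, pick the relay maximizing the secrecy capacity.
    """
    s = int(np.argmin(snr_se))  # least vulnerable source
    best_r, best_cs = 0, -1.0
    for r in range(snr_rd.shape[0]):
        # decode-and-forward: end-to-end SNR limited by the weaker hop (assumption)
        snr_main = min(snr_sr[s, r], snr_rd[r])
        cs = secrecy_capacity(snr_main, max(snr_se[s], snr_re[r]))
        if cs > best_cs:
            best_r, best_cs = r, cs
    return s, best_r, best_cs
```

Fixing the source first is what keeps the complexity and the required CSI low relative to a joint search over all source-relay pairs.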
Phase I of the Near Term Hybrid Passenger Vehicle Development Program. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-10-01
The results of Phase I of the Near-Term Hybrid Vehicle Program are summarized. This phase of the program was a study leading to the preliminary design of a 5-passenger hybrid vehicle utilizing two energy sources (electricity and gasoline/diesel fuel) to minimize petroleum usage on a fleet basis. This report presents the following: an overall summary of the Phase I activity; summaries of the individual tasks; a summary of the hybrid vehicle design; a summary of the alternative design options; a summary of the computer simulations; a summary of the economic analysis; a summary of the maintenance and reliability considerations; a summary of the design for crash safety; and a bibliography.
NASA Astrophysics Data System (ADS)
Chen, X.; Millet, D. B.; Singh, H. B.; Wisthaler, A.
2017-12-01
We present an integrated analysis of the atmospheric VOC budget over North America using a high-resolution GEOS-Chem simulation and observations from a large suite of recent aircraft campaigns. Here, the standard model simulation is expanded to include a more comprehensive VOC treatment encompassing the best current understanding of emissions and chemistry. Based on this updated framework, we find in the model that biogenic emissions dominate VOC carbon sources over North America (accounting for 71% of total primary emissions), and this is especially the case from a reactivity perspective (with biogenic VOCs accounting for 90% of reactivity-weighted emissions). Physical processes and chemical degradation make comparable contributions to the removal of VOC carbon over North America. We further apply this simulation to explore the impacts of different primary VOC sources on atmospheric chemistry in terms of OH reactivity and key atmospheric chemicals including NOx, HCHO, glyoxal, and ozone. The airborne observations show that the majority of detected VOC carbon is carried by oxygenated VOCs throughout the North American troposphere, and this tendency is well captured by the model. Model-measurement comparisons along the campaign flight tracks show that the total observed VOC abundance is generally well predicted by the model within the boundary layer (with some regionally specific biases) but severely underestimated in the upper troposphere. The observations imply significant missing sources in the model for upper tropospheric methanol, acetone, peroxyacetic acid, and glyoxal, and for organic acids in the lower troposphere. Elemental ratios derived from airborne high-resolution mass spectrometry show only modest change in the ensemble VOC carbon oxidation state with aging (in NOx:NOy space), and the model successfully captures this behavior.
Drivers of inorganic carbon dynamics in first-year sea ice: A model study
NASA Astrophysics Data System (ADS)
Moreau, Sébastien; Vancoppenolle, Martin; Delille, Bruno; Tison, Jean-Louis; Zhou, Jiayun; Kotovitch, Marie; Thomas, David N.; Geilfus, Nicolas-Xavier; Goosse, Hugues
2015-01-01
Sea ice is an active source or a sink for carbon dioxide (CO2), although to what extent is not clear. Here, we analyze CO2 dynamics within sea ice using a one-dimensional halothermodynamic sea ice model including gas physics and carbon biogeochemistry. The ice-ocean fluxes, and vertical transport, of total dissolved inorganic carbon (DIC) and total alkalinity (TA) are represented using fluid transport equations. Carbonate chemistry, the consumption and release of CO2 by primary production and respiration, the precipitation and dissolution of ikaite (CaCO3·6H2O) and ice-air CO2 fluxes are also included. The model is evaluated using observations from a 6 month field study at Point Barrow, Alaska, and an ice-tank experiment. At Barrow, results show that the DIC budget is mainly driven by physical processes, whereas brine-air CO2 fluxes, ikaite formation, and net primary production are secondary factors. In terms of ice-atmosphere CO2 exchanges, sea ice is a net CO2 source and sink in winter and summer, respectively. The formulation of the ice-atmosphere CO2 flux impacts the simulated near-surface CO2 partial pressure (pCO2), but not the DIC budget. Because the simulated ice-atmosphere CO2 fluxes are limited by DIC stocks, and therefore <2 mmol m-2 d-1, we argue that the observed much larger CO2 fluxes from eddy covariance retrievals cannot be explained by a sea ice direct source and must involve other processes or other sources of CO2. Finally, the simulations suggest that near-surface TA/DIC ratios of ~2, sometimes used as an indicator of calcification, would rather suggest outgassing.
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Smith, R. A.; Alexander, R. B.; Schwarz, G. E.
2004-12-01
Organic carbon (OC) is a critical water quality characteristic in riverine systems that is an important component of the aquatic carbon cycle and energy balance. Examples of processes controlled by OC interactions are complexation of trace metals; enhancement of the solubility of hydrophobic organic contaminants; formation of trihalomethanes in drinking water; and absorption of visible and UV radiation. Organic carbon also can have indirect effects on water quality by influencing internal processes of aquatic ecosystems (e.g. photosynthesis and autotrophic and heterotrophic activity). The importance of organic matter dynamics on water quality has been recognized, but challenges remain in quantitatively addressing OC processes over broad spatial scales in a hydrological context. In this study, we apply spatially referenced watershed models (SPARROW) to statistically estimate long-term mean-annual rates of dissolved and total organic carbon export in streams and reservoirs across the conterminous United States. We make use of a GIS framework for the analysis, describing sources, transport, and transformations of organic matter from spatial databases providing characterizations of climate, land use, primary productivity, topography, soils, and geology. This approach is useful because it illustrates spatial patterns of organic carbon fluxes in streamflow, highlighting hot spots (e.g., organic-rich environments in the southeastern coastal plain). Further, our simulations provide estimates of the relative contributions to streams from allochthonous and autochthonous sources. We quantify surface water fluxes of OC with estimates of uncertainty in relation to the overall US carbon budget; our simulations highlight that aquatic sources and sinks of OC may be a more significant component of regional carbon cycling than was previously thought.
Further, we are using our simulations to explore the potential role of climate and other changes in the terrestrial environment on OC fluxes in aquatic systems.
NASA Astrophysics Data System (ADS)
Gururaja Rao, C.; Nagabhushana Rao, V.; Krishna Das, C.
2008-04-01
Prominent results of a simulation study on conjugate convection with surface radiation from an open cavity with a traversable flush-mounted discrete heat source in the left wall are presented in this paper. The open cavity is considered to be of fixed height but with varying spacing between the legs. The position of the heat source is varied along the left leg of the cavity. The governing equations for temperature distribution along the cavity are obtained by making an energy balance between heat generated, conducted, convected and radiated. Radiation terms are handled using the radiosity-irradiation formulation, while the view factors therein are evaluated using the crossed-string method of Hottel. The resulting non-linear partial differential equations are converted into algebraic form using a finite difference formulation and are subsequently solved by the Gauss-Seidel iterative technique. An optimum grid system comprising 111 grid points along the legs of the cavity, with 30 grid points in the heat source and 31 grid points across the cavity, has been used. The effects of various parameters, such as surface emissivity, convection heat transfer coefficient, aspect ratio and thermal conductivity on the important results, including local temperature distribution along the cavity, peak temperature in the left and right legs of the cavity and relative contributions of convection and radiation to heat dissipation in the cavity, are studied in great detail.
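The finite-difference/Gauss-Seidel machinery can be illustrated on a much smaller problem than the paper's (omitting the radiosity-irradiation radiation terms entirely, and with invented material values): a one-dimensional conduction balance with a volumetric heat source and a linearized convection sink, swept node by node until the updates stall.

```python
import numpy as np

def gauss_seidel_fin(n=51, length=0.1, k=15.0, h=2500.0, q=1e5,
                     t_inf=300.0, t_end=350.0, tol=1e-10):
    """Solve k*T'' + q - h*(T - t_inf) = 0 on [0, length] with fixed-end
    temperatures by Gauss-Seidel sweeps over interior finite-difference nodes."""
    dx = length / (n - 1)
    T = np.full(n, t_inf)
    T[0] = T[-1] = t_end                 # Dirichlet boundary conditions
    while True:
        max_change = 0.0
        for i in range(1, n - 1):
            # finite-difference energy balance at node i, solved for T[i]
            new = (k * (T[i - 1] + T[i + 1]) / dx**2 + q + h * t_inf) \
                  / (2.0 * k / dx**2 + h)
            max_change = max(max_change, abs(new - T[i]))
            T[i] = new
        if max_change < tol:
            return T
```

Because the discrete system is diagonally dominant, the sweeps converge unconditionally; the paper's coupled convection-radiation balance adds non-linear terms but follows the same update pattern.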
NASA Astrophysics Data System (ADS)
Nan, Tongchao; Li, Kaixuan; Wu, Jichun; Yin, Lihe
2018-04-01
Sustainability has been one of the key criteria of effective water exploitation. Groundwater exploitation and water-table decline at the Haolebaoji water source site in the Ordos basin in NW China have drawn public attention due to concerns about potential threats to ecosystems and grazing land in the area. To better investigate the impact of the production wells at Haolebaoji on the water table, an adapted algorithm called the random walk on grid method (WOG) is applied to simulate the hydraulic head in the unconfined and confined aquifers. This is the first attempt to apply WOG to a real groundwater problem. The method can evaluate not only the head values but also the contributions made by each source/sink term, allowing one to analyze the impact of source/sink terms just as if an analytical solution were available. The head values evaluated by WOG match those derived from the software Groundwater Modeling System (GMS), suggesting that WOG is effective and applicable to practical problems in heterogeneous aquifers and that the resultant information is useful for groundwater management.
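The random-walk idea itself fits in a few lines. The sketch below is not the authors' WOG algorithm (which handles heterogeneity and decomposes the head into per-source/sink contributions); it is the textbook walk-on-grid estimator for a homogeneous Dirichlet problem, where the head at a node equals the expected boundary head at a walk's exit point:

```python
import numpy as np

rng = np.random.default_rng(1)

def head_by_walk_on_grid(ix, iy, n=20, walks=4000):
    """Estimate the hydraulic head at interior node (ix, iy) of an n-by-n
    grid with fixed-head (Dirichlet) boundaries: each random walk steps to a
    uniformly chosen neighbour until it exits, then scores the boundary head
    at the exit node. The average over walks estimates the head."""
    def boundary_head(x, y):
        return x / (n - 1.0)       # prescribed head: linear ramp from 0 to 1
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    total = 0.0
    for _ in range(walks):
        x, y = ix, iy
        while 0 < x < n - 1 and 0 < y < n - 1:
            dx, dy = steps[rng.integers(4)]
            x, y = x + dx, y + dy
        total += boundary_head(x, y)
    return total / walks
```

For this linear boundary head the exact solution is h = x/(n-1), so the estimate at node (10, 10) should land near 10/19 ≈ 0.526, with Monte Carlo scatter shrinking as the number of walks grows.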
Nonlinear synthesis of infrasound propagation through an inhomogeneous, absorbing atmosphere.
de Groot-Hedlin, C D
2012-08-01
An accurate and efficient method to predict infrasound amplitudes from large explosions in the atmosphere is required for diverse source types, including bolides, volcanic eruptions, and nuclear and chemical explosions. A finite-difference, time-domain approach is developed to solve a set of nonlinear fluid dynamic equations for total pressure, temperature, and density fields rather than acoustic perturbations. Three key features for the purpose of synthesizing nonlinear infrasound propagation in realistic media are that it includes gravitational terms, it allows for acoustic absorption, including molecular vibration losses at frequencies well below the molecular vibration frequencies, and the environmental models are constrained to have axial symmetry, allowing a three-dimensional simulation to be reduced to two dimensions. Numerical experiments are performed to assess the algorithm's accuracy and the effect of source amplitudes and atmospheric variability on infrasound waveforms and shock formation. Results show that infrasound waveforms steepen and their associated spectra are shifted to higher frequencies for nonlinear sources, leading to enhanced infrasound attenuation. Results also indicate that nonlinear infrasound amplitudes depend strongly on atmospheric temperature and pressure variations. The solution for total field variables and insertion of gravitational terms also allows for the computation of other disturbances generated by explosions, including gravity waves.
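A full nonlinear, axisymmetric solver with gravity and vibrational absorption is beyond a short sketch, but the finite-difference time-domain skeleton can be shown on the linear 1-D acoustic system (no gravity, no absorption, rigid ends; all values illustrative rather than taken from the paper):

```python
import numpy as np

def fdtd_pulse(nx=2000, c=340.0, rho=1.2, dx=10.0, steps=600):
    """Minimal 1-D staggered-grid FDTD for *linear* acoustics -- a highly
    simplified cousin of the nonlinear total-field solver described above.
    dt = dx/c sits exactly at the CFL limit."""
    dt = dx / c
    x = np.arange(nx)
    p = np.exp(-((x - 1000.0) ** 2) / (2.0 * 20.0 ** 2))  # Gaussian pressure pulse
    u = np.zeros(nx + 1)                                  # face-centered velocity
    for _ in range(steps):
        u[1:-1] -= dt / (rho * dx) * (p[1:] - p[:-1])     # momentum update
        p -= dt * rho * c ** 2 / dx * (u[1:] - u[:-1])    # pressure update
    return p
```

With the velocity initially zero, the pulse splits into two half-amplitude pulses travelling at ±c, so after 600 steps the peaks sit near cells 400 and 1600; the nonlinear solver in the paper additionally steepens such pulses into shocks.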
Simulating variable source problems via post processing of individual particle tallies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
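At its core, the post-processing step is importance reweighting of logged per-particle tallies. A minimal sketch (with an invented exponential "transport" model standing in for the real Monte Carlo physics, and a flat sampling spectrum) might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stage 1 (expensive, done once): "transport" particles sampled from a broad,
# flat source spectrum, logging each particle's source energy and tally score.
# The exponential factor is a toy stand-in for real particle physics.
n = 200_000
e_src = rng.uniform(0.0, 10.0, n)                        # source energies (a.u.)
score = np.exp(-0.3 * e_src) * rng.exponential(1.0, n)   # toy per-particle tally

def retally(pdf_new, pdf_sampled=lambda e: 0.1):
    """Stage 2 (seconds): re-estimate the tally for a *different* source
    spectrum by reweighting the logged particles -- no new transport needed."""
    w = pdf_new(e_src) / pdf_sampled(e_src)
    return float(np.mean(w * score))
```

For example, `retally(lambda e: np.where(e < 5.0, 0.2, 0.0))` re-scores the same particle file as if the source were uniform on [0, 5), which is exactly how many candidate BNCT source spectra could be screened from one transport run.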
Numerical simulation and experimental verification of extended source interferometer
NASA Astrophysics Data System (ADS)
Hou, Yinlong; Li, Lin; Wang, Shanshan; Wang, Xiao; Zang, Haijun; Zhu, Qiudong
2013-12-01
Extended source interferometers, compared with classical point source interferometers, can suppress coherent noise from the environment and the system, decrease dust-scattering effects and reduce high-frequency error of the reference surface. Numerical simulation and experimental verification of an extended source interferometer are discussed in this paper. To guide the experiment, the extended source interferometer is modeled in the optical design software Zemax. Matlab code adjusts the field parameters of the optical system automatically and collects a series of interferometric data conveniently, with Dynamic Data Exchange (DDE) used to connect Zemax and Matlab. The visibility of the interference fringes is then calculated by summing the collected interferometric data. Alongside the simulation, an experimental platform for the extended source interferometer was established, consisting of an extended source, an interference cavity and an image collection system. The reduction of high-frequency reference-surface error and environmental coherent noise is verified, as is the relation between spatial coherence and the size, shape and intensity distribution of the extended source, through analysis of the fringe visibility. The simulation results agree with those of the real extended source interferometer, showing that the model reproduces the actual optical interference well. The simulation platform can therefore be used to guide experiments on interferometers based on various extended sources.
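The link between source extent and fringe visibility can be sketched numerically: summing intensity fringes from many mutually incoherent points across a uniform 1-D source reproduces the familiar sinc-shaped visibility falloff (the geometry here is schematic, not the paper's Zemax model):

```python
import numpy as np

def visibility(source_width, fringe_spacing=1.0):
    """Fringe visibility for a uniform, spatially incoherent 1-D source:
    each source point contributes a laterally shifted cosine fringe, and the
    *intensities* (not fields) of the points add."""
    x = np.linspace(-5.0, 5.0, 4096)                     # detector coordinate
    shifts = np.linspace(-source_width / 2, source_width / 2, 2001)
    total = np.zeros_like(x)
    for s in shifts:
        total += 1.0 + np.cos(2.0 * np.pi * (x - s) / fringe_spacing)
    total /= shifts.size
    return (total.max() - total.min()) / (total.max() + total.min())
```

The computed visibility follows |sinc(W/d)| for source width W and fringe spacing d, vanishing when the source width equals one fringe spacing — the same coherence-versus-source-size trade the experiment probes.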
Development of Simulated Disturbing Source for Isolation Switch
NASA Astrophysics Data System (ADS)
Cheng, Lin; Liu, Xiang; Deng, Xiaoping; Pan, Zhezhe; Zhou, Hang; Zhu, Yong
2018-01-01
To reproduce the harsh electromagnetic environment of an actual substation for electromagnetic compatibility (EMC) testing of electronic instrument transformers, a simulated disturbance source system for an isolation switch was developed, building on an existing EMC test system that uses the isolation switch itself as the disturbance source, and promoting standardization of the original test. In this system, a circuit breaker controls the opening and closing of a gap arc to simulate the operation of an isolating switch, and the simulated disturbance source system is designed accordingly. Comparison with test results from an actual isolating switch shows that the system meets the test requirements and that the simulated disturbance source offers good stability and high reliability.
Studies and simulations of the DigiCipher system
NASA Technical Reports Server (NTRS)
Sayood, K.; Chen, Y. C.; Kipp, G.
1993-01-01
During this period the development of simulators for the various high definition television (HDTV) systems proposed to the FCC was continued. The FCC has indicated that it wants the various proposers to collaborate on a single system. Based on all available information, this system will look very much like the advanced digital television (ADTV) system, with major contributions only from the DigiCipher system. The results of our simulations of the DigiCipher system are described. This simulator was tested using test sequences from the MPEG committee, and the results are extrapolated to HDTV video sequences. Once again, some caveats are in order. The sequences used for testing the simulator and generating the results are those used for testing the MPEG algorithm. These sequences are of much lower resolution than HDTV sequences would be, and therefore the extrapolations are not totally accurate; one would expect significantly higher compression, in terms of bits per pixel, with sequences of higher resolution. However, the simulator itself is a valid one, and should HDTV sequences become available, they could be used directly with the simulator. A brief overview of the DigiCipher system is given, and some coding results obtained using the simulator are examined. These results are compared to those obtained using the ADTV system, evaluated in the context of the CCSDS specifications, and used to make some suggestions as to how the DigiCipher system could be implemented in the NASA network. Simulations such as the ones reported can be biased depending on the particular source sequence used. In order to get more complete information about the system, one needs a reasonable set of models which mirror the various kinds of sources encountered during video coding. A set of models which can be used to effectively model the various possible scenarios is provided.
As this is somewhat tangential to the other work reported, the results are included as an appendix.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current research in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
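For orientation, the baseline FOCUSS-style re-weighted iteration that CMOSS modifies can be sketched on a toy underdetermined problem (CMOSS additionally folds neighbor values into each weight, which is deliberately not done here; the problem sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def focuss(A, b, iters=60, eps=1e-12):
    """Baseline FOCUSS: iteratively re-weighted minimum-norm solutions.
    Each pass solves a weighted minimum-norm problem with W = diag(|x|),
    which progressively concentrates energy onto a few entries."""
    x = np.linalg.pinv(A) @ b              # start from the minimum-norm solution
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)       # eps keeps W invertible
        x = W @ np.linalg.pinv(A @ W) @ b
    return x

# Toy underdetermined "source imaging" problem: 4 sensors, 12 candidate sources.
A = rng.standard_normal((4, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = focuss(A, b)
```

The iteration drives most entries of `x_hat` toward zero while keeping the data fit exact, which is the sparsifying behavior that the neighbor-aware CMOSS weight is designed to steer toward the correct locations.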
NASA Technical Reports Server (NTRS)
Leboissertier, Anthony; Okong'O, Nora; Bellan, Josette
2005-01-01
Large-eddy simulation (LES) is conducted of a three-dimensional temporal mixing layer whose lower stream is initially laden with liquid drops which may evaporate during the simulation. The gas-phase equations are written in an Eulerian frame for two perfect gas species (carrier gas and vapour emanating from the drops), while the liquid-phase equations are written in a Lagrangian frame. The effect of drop evaporation on the gas phase is considered through mass, species, momentum and energy source terms. The drop evolution is modelled using physical drops, or using computational drops to represent the physical drops. Simulations are performed using various LES models previously assessed on a database obtained from direct numerical simulations (DNS). These LES models are for: (i) the subgrid-scale (SGS) fluxes and (ii) the filtered source terms (FSTs) based on computational drops. The LES, which are compared to filtered-and-coarsened (FC) DNS results at the coarser LES grid, are conducted with 64 times fewer grid points than the DNS, and up to 64 times fewer computational than physical drops. It is found that both constant-coefficient and dynamic Smagorinsky SGS-flux models, though numerically stable, are overly dissipative and damp generated small-resolved-scale (SRS) turbulent structures. Although the global growth and mixing predictions of LES using Smagorinsky models are in good agreement with the FC-DNS, the spatial distributions of the drops differ significantly. In contrast, the constant-coefficient scale-similarity model and the dynamic gradient model perform well in predicting most flow features, with the latter model having the advantage of not requiring a priori calibration of the model coefficient. 
The ability of the dynamic models to determine the model coefficient during LES is found to be essential since the constant-coefficient gradient model, although more accurate than the Smagorinsky model, is not consistently numerically stable despite using DNS-calibrated coefficients. With accurate SGS-flux models, namely scale-similarity and dynamic gradient, the FST model allows up to a 32-fold reduction in computational drops compared to the number of physical drops, without degradation of accuracy; a 64-fold reduction leads to a slight decrease in accuracy.
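The constant-coefficient Smagorinsky closure discussed above has a compact form, nu_t = (Cs Δ)² |S|. A 2-D sketch using central differences (the Cs value and grid are illustrative; the simulations above are 3-D and filtered):

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.17):
    """Constant-coefficient Smagorinsky SGS viscosity on a 2-D grid:
    nu_t = (cs * dx)**2 * |S|, with |S| = sqrt(2 S_ij S_ij) built from
    central-difference velocity gradients (arrays indexed [y, x])."""
    dudx, dudy = np.gradient(u, dx, axis=1), np.gradient(u, dx, axis=0)
    dvdx, dvdy = np.gradient(v, dx, axis=1), np.gradient(v, dx, axis=0)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag
```

For a pure shear flow u = a·y, v = 0 this reduces to nu_t = (cs·dx)²·a everywhere, which makes the model's purely dissipative character (and hence the over-dissipation reported above) easy to see.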
Gravitational waveforms for neutron star binaries from binary black hole simulations
NASA Astrophysics Data System (ADS)
Barkett, Kevin; Scheel, Mark; Haas, Roland; Ott, Christian; Bernuzzi, Sebastiano; Brown, Duncan; Szilagyi, Bela; Kaplan, Jeffrey; Lippuner, Jonas; Muhlberger, Curran; Foucart, Francois; Duez, Matthew
2016-03-01
Gravitational waves from binary neutron star (BNS) and black-hole/neutron star (BHNS) inspirals are primary sources for detection by the Advanced Laser Interferometer Gravitational-Wave Observatory. The tidal forces acting on the neutron stars induce changes in the phase evolution of the gravitational waveform, and these changes can be used to constrain the nuclear equation of state. Current methods of generating BNS and BHNS waveforms rely on either computationally challenging full 3D hydrodynamical simulations or approximate analytic solutions. We introduce a new method for computing inspiral waveforms for BNS/BHNS systems by adding the post-Newtonian (PN) tidal effects to full numerical simulations of binary black holes (BBHs), effectively replacing the non-tidal terms in the PN expansion with BBH results. Comparing a waveform generated with this method against a full hydrodynamical simulation of a BNS inspiral yields a phase difference of < 1 radian over ~ 15 orbits. The numerical phase accuracy required of BNS simulations to measure the accuracy of the method we present here is estimated as a function of the tidal deformability parameter λ.
Monte Carlo simulation of moderator and reflector in coal analyzer based on a D-T neutron generator.
Shan, Qing; Chu, Shengnan; Jia, Wenbao
2015-11-01
Coal is one of the most popular fuels in the world. The use of coal not only produces carbon dioxide, but also contributes to environmental pollution by heavy metals. In prompt gamma-ray neutron activation analysis (PGNAA)-based coal analyzers, the characteristic gamma rays of C and O are mainly induced by fast neutrons, whereas thermal neutrons can be used to induce the characteristic gamma rays of H, Si, and heavy metals. Therefore, an appropriate mix of thermal and fast neutrons is beneficial in improving the measurement accuracy of heavy metals while ensuring that the measurement accuracy of the main elements meets the requirements of the industry. Once the required yield of the deuterium-tritium (d-T) neutron generator is determined, appropriate thermal and fast neutron populations can be obtained by optimizing the neutron source term. In this article, the Monte Carlo N-Particle (MCNP) transport code and the Evaluated Nuclear Data File (ENDF) database are used to optimize the neutron source term in a PGNAA-based coal analyzer, including the material and shape of the moderator and neutron reflector. The optimization has two targets: (1) the ratio of thermal to fast neutrons is 1:1 and (2) the total neutron flux from the optimized neutron source in the sample increases by at least 100% compared with the initial one. The simulation results show that the total neutron flux in the sample increases by 102%, 102%, 85%, 72%, and 62% with Pb, Bi, Nb, W, and Be reflectors, respectively. Maximum optimization of the targets is achieved when the moderator is a 3-cm-thick lead layer coupled with a 3-cm-thick high-density polyethylene (HDPE) layer, and the neutron reflector is a 27-cm-thick hemispherical lead layer. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics
NASA Astrophysics Data System (ADS)
McDermott, Randall; Weinschenk, Craig
2013-11-01
A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.
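The delta-function quadrature closure can be made concrete with a minimal two-point sketch. This is a generic moment-matching construction, not FDS's actual weights or constraints: the subgrid PDF of a scalar z is taken as two equal-weight delta functions placed to match the cell's mean and variance, and the mean source term is the weighted sum of the rate law at those points.

```python
def mean_source(omega, z_mean, z_var):
    """Two-delta quadrature closure for a cell-mean source term: model the
    subgrid PDF of a scalar z as 0.5*delta(z - z1) + 0.5*delta(z - z2), with
    z1, z2 chosen to match the cell's mean and variance, then evaluate the
    rate law omega at the two quadrature points."""
    dz = z_var ** 0.5
    z1, z2 = z_mean - dz, z_mean + dz   # reproduces both prescribed moments
    return 0.5 * omega(z1) + 0.5 * omega(z2)
```

By construction the closure is exact for rate laws up to quadratic in z; for the strongly nonlinear chemistry of combustion, the quadrature-point placement is what distinguishes such a model from simply evaluating omega at the cell mean.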
Development of Northeast Asia Nuclear Power Plant Accident Simulator.
Kim, Juyub; Kim, Juyoul; Po, Li-Chi Cliff
2017-06-15
A conclusion from the lessons learned after the March 2011 Fukushima Daiichi accident was that Korea needs a tool to estimate the consequences of a major accident occurring at a nuclear power plant located in a neighboring country. This paper describes a suite of computer-based codes to be used by Korea's nuclear emergency response staff for training and, potentially, operational support in Korea's national emergency preparedness and response program. The system of codes, the Northeast Asia Nuclear Accident Simulator (NANAS), consists of three modules: source-term estimation, atmospheric dispersion prediction and dose assessment. To quickly assess potential doses to the public in Korea, NANAS includes specific reactor data from the nuclear power plants in China, Japan and Taiwan. The completed simulator is demonstrated using data for a hypothetical release. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Evaluating Discovery Services Architectures in the Context of the Internet of Things
NASA Astrophysics Data System (ADS)
Polytarchos, Elias; Eliakis, Stelios; Bochtis, Dimitris; Pramatari, Katerina
As the "Internet of Things" is expected to grow rapidly in the coming years, developing and deploying efficient and scalable Discovery Services in this context is very important for its success. Thus, the ability to evaluate and compare the performance of different Discovery Services architectures is vital if we want to claim that a given design is better at meeting the requirements of a specific application. The purpose of this chapter is to provide a paradigm for evaluating different Discovery Services for the Internet of Things in terms of efficiency, scalability and performance through the use of simulations. The methodology presented applies Discovery Services to a supply chain, modeling the Service Lookup Service Discovery Service in OMNeT++, an open-source network simulation suite. We then delve into the simulation design and the details of our findings.
NASA Astrophysics Data System (ADS)
Yang, Chen; Liu, Ying
2017-08-01
A two-dimensional depth-integrated numerical model is refined in this paper to simulate the hydrodynamics, graded sediment transport process and the fate of faecal bacteria in estuarine and coastal waters. The sediment mixture is divided into several fractions according to the grain size. A bed evolution model is adopted to simulate the processes of the bed elevation change and sediment grain size sorting. The faecal bacteria transport equation includes enhanced source and sink terms to represent bacterial kinetic transformation and disappearance or reappearance due to sediment deposition or re-suspension. A novel partition ratio and dynamic decay rates of faecal bacteria are adopted in the numerical model. The model has been applied to the turbid water environment in the Bristol Channel and Severn estuary, UK. The predictions by the present model are compared with field data and those by non-fractionated model.
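The flavor of the bacteria-sediment source/sink coupling can be illustrated with a deliberately stripped-down, single-cell sketch (invented rate constants; the paper's model is depth-integrated, advective, and uses fraction-specific dynamics):

```python
def bacteria_step(c_water, c_bed, dt, k_day, deposition=True, alpha=1e-4):
    """One explicit time step of a toy water-column / bed-sediment faecal
    bacteria budget: first-order kinetic decay in the water column plus an
    exchange term whose direction flips between deposition (water -> bed)
    and re-suspension (bed -> water), echoing the sink/source switching in
    the transport equation above."""
    transfer = alpha * c_water if deposition else -alpha * c_bed  # [conc/s]
    c_water += dt * (-(k_day / 86400.0) * c_water - transfer)
    c_bed += dt * transfer
    return c_water, c_bed
```

With decay switched off, the step merely shuffles bacteria between the water column and the bed, conserving the total; with decay on, the total declines — the same bookkeeping the full model performs per grid cell and sediment fraction.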
Maintaining Quality and Confidence in Open-Source, Evolving Software: Lessons Learned with PFLOTRAN
NASA Astrophysics Data System (ADS)
Frederick, J. M.; Hammond, G. E.
2017-12-01
Software evolution in an open-source framework poses a major challenge to a geoscientific simulator, but when properly managed, the pay-off can be enormous for both the developers and the community at large. Developers must juggle implementing new scientific process models, adopting increasingly efficient numerical methods and programming paradigms, and changing funding sources (or a total lack of funding), while also ensuring that legacy code remains functional and reported bugs are fixed in a timely manner. With robust software engineering and a plan for long-term maintenance, a simulator can evolve over time, incorporating and leveraging many advances in the computational and domain sciences. In this positive light, what practices in software engineering and code maintenance can be employed within open-source development to maximize the positive aspects of software evolution and community contributions while minimizing the negative side effects? This presentation discusses steps taken in the development of PFLOTRAN (www.pflotran.org), an open source, massively parallel subsurface simulator for multiphase, multicomponent, and multiscale reactive flow and transport processes in porous media. As PFLOTRAN's user base and development team continue to grow, it has become increasingly important to implement strategies which ensure sustainable software development while maintaining software quality and community confidence. In this presentation, we will share our experiences and "lessons learned" within the context of our open-source development framework and community engagement efforts. Topics discussed will include how we've leveraged both standard software engineering principles, such as coding standards, version control, and automated testing, as well as the unique advantages of object-oriented design in process model coupling, to ensure software quality and confidence.
We will also be prepared to discuss the major challenges faced by most open-source software teams, such as on-boarding new developers or one-time contributions, dealing with competitors or lookie-loos, and other downsides of complete transparency, as well as our approach to community engagement, including a user group email list, hosting short courses and workshops for new users, and maintaining a website. SAND2017-8174A
Efficient techniques for wave-based sound propagation in interactive applications
NASA Astrophysics Data System (ADS)
Mehra, Ravish
Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of graphics processors, a significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in the virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
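The linear-combination representation of directivity can be illustrated with real spherical harmonics up to degree one; the basis ordering and helper names below are assumptions, and a real implementation would use much higher degree:

```python
import numpy as np

def real_sh_l1(direction):
    """Real spherical harmonics up to degree l = 1 for a unit direction."""
    x, y, z = direction / np.linalg.norm(direction)
    c0 = 0.5 * np.sqrt(1.0 / np.pi)        # Y_0^0 (monopole term)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))      # degree-1 normalization
    return np.array([c0, c1 * y, c1 * z, c1 * x])   # (Y00, Y1-1, Y10, Y11)

def directivity(coeffs, direction):
    """Directivity as a linear combination of elementary SH sources."""
    return float(coeffs @ real_sh_l1(direction))
```

A time-varying or rotating source then only needs its coefficient vector (or the evaluation direction) updated per frame, which is what makes the representation cheap at runtime.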
NASA Astrophysics Data System (ADS)
Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc
2013-04-01
The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous and not as well distributed within Japan as the dose rate measurements. Gamma dose rate measurements, by contrast, were numerous, well distributed within Japan and of high temporal frequency, efficiently documenting the evolution of the contamination. However, dose rate data are not as easy to use as air sampling measurements and until now they have not been used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They allow one neither to determine the isotopic composition nor to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need for a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions for the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed.
The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the unit 3 explosion, for which no measurements were available. Comparisons between simulations of atmospheric dispersion and deposition based on the retrieved source term show good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved to be perfectly suited to crisis management and should help improve our response in case of a nuclear accident.
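The core of such a retrieval is a linear source-term inversion with non-negativity: observations are modelled as a source-receptor matrix applied to unknown release rates. The sketch below recovers non-negative release rates from synthetic data with projected gradient descent; the matrix H is a hypothetical stand-in for dispersion-model output, not the paper's actual operator or algorithm:

```python
import numpy as np

# Hypothetical source-receptor matrix H: H[i, j] is the dose-rate contribution
# at station i per unit release in time window j (in practice it comes from
# an atmospheric dispersion model).
rng = np.random.default_rng(0)
H = rng.uniform(0.0, 1.0, size=(40, 6))
q_true = np.array([0.0, 5.0, 1.0, 0.0, 3.0, 0.0])   # "true" release rates
y = H @ q_true                                       # synthetic observations

# Projected gradient for min ||H q - y||^2 subject to q >= 0
# (non-negative least squares).
L = np.linalg.norm(H.T @ H, 2)          # Lipschitz constant of the gradient
q = np.zeros(6)
for _ in range(5000):
    q = np.maximum(0.0, q - (1.0 / L) * (H.T @ (H @ q - y)))
```

With noise-free synthetic data and more observations than unknowns, the iteration recovers the true release profile, zeros included.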
The Nonlinear Evolution of Massive Stellar Core Collapses That ``Fizzle''
NASA Astrophysics Data System (ADS)
Imamura, James N.; Pickett, Brian K.; Durisen, Richard H.
2003-04-01
Core collapse in a massive rotating star may pause before nuclear density is reached, if the core contains total angular momentum J ≳ 10^49 g cm^2 s^-1. In such aborted or ``fizzled'' collapses, temporary equilibrium objects form that, although rapidly rotating, are secularly and dynamically stable because of the high electron fraction per baryon Y_e > 0.3 and the high entropy per baryon S_b/k ~ 1-2 of the core material at neutrino trapping. These fizzled collapses are called ``fizzlers.'' In the absence of prolonged infall from the surrounding star, the evolution of fizzlers is driven by deleptonization, which causes them to contract and spin up until they either become stable neutron stars or reach the dynamic instability point for barlike modes. The barlike instability case is of current interest because the bars would be sources of gravitational wave (GW) radiation. In this paper, we use linear and nonlinear techniques, including three-dimensional hydrodynamic simulations, to study the behavior of fizzlers that have deleptonized to the point of reaching dynamic bar instability. The simulations show that the GW emission produced by bar-unstable fizzlers has rms strain amplitude r_15 h = 10^-23 to 10^-22 for an observer on the rotation axis, with wave frequency of roughly 60-600 Hz. Here h is the strain and r_15 = (r/15 Mpc) is the distance to the fizzler in units of 15 Mpc. If the bars that form by dynamic instability can maintain GW emission at this level for 100 periods or more, they may be detectable by the Laser Interferometer Gravitational-Wave Observatory at the distance of the Virgo Cluster. They would be detectable as burst sources, defined as sources that persist for ~10 cycles or less, if they occurred in the Local Group of galaxies. The long-term behavior of the bars is the crucial issue for the detection of fizzler events.
The bars present at the end of our simulations are dynamically stable but will evolve on longer timescales because of a variety of effects, such as shock heating, infall, deleptonization, and cooling, as well as gravitational radiation and Newtonian gravitational coupling to surrounding material. Long-term simulations including these effects will be necessary to determine the ultimate fate and GW production of fizzlers with certainty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James; Kuruganti, Teja
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
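For reference, the standard fine-grid FDTD update that the paper's coarse-grid approach relaxes can be sketched in one dimension; at the "magic" time step c·dt = dx the second-order scheme propagates a travelling pulse exactly (grid and units here are arbitrary, not the paper's setup):

```python
import numpy as np

# Second-order FDTD update for the 1D wave equation u_tt = c^2 u_xx.
c, dx = 1.0, 1.0
dt = dx / c                      # Courant number exactly 1 ("magic" time step)
nx, nt = 400, 300
x = np.arange(nx, dtype=float)

gauss = lambda xi: np.exp(-((xi - 50.0) ** 2) / 20.0)
u = gauss(x)                     # right-travelling pulse u(x, t) = G(x - c*t)
u_prev = gauss(x + c * dt)       # the same field one time step earlier

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    u_next = np.zeros(nx)        # Dirichlet (u = 0) boundaries
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
```

After 300 steps the pulse has moved exactly 300 cells with no dispersion; on a grid coarser than the wavelength this update would fail for time-domain accuracy, which is the regime the paper's pathloss formulation targets instead.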
Deist, T M; Gorissen, B L
2016-02-07
High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
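A minimal sketch of the approach, assuming hypothetical dose-rate kernels and a toy coverage-minus-penalty objective; the paper's neighbor selection and constraint handling are more careful than the all-coordinate Gaussian move used here, but the structure (matrix products dominating the cost, geometric cooling) is the same:

```python
import numpy as np

rng = np.random.default_rng(1)
n_dwell, n_tumor, n_oar = 8, 60, 25
D_tumor = rng.uniform(0.5, 2.0, (n_tumor, n_dwell))  # hypothetical kernels
D_oar = rng.uniform(0.1, 0.8, (n_oar, n_dwell))
d_presc, d_max = 10.0, 6.0

def coverage(t):
    """Tumor coverage minus a penalty for violating the OAR dose limit."""
    dose_t = D_tumor @ t                 # matrix multiplications dominate
    dose_o = D_oar @ t                   # the per-iteration cost
    cov = np.mean(dose_t >= d_presc)
    penalty = np.sum(np.maximum(0.0, dose_o - d_max))
    return cov - 0.1 * penalty

t = np.full(n_dwell, 5.0)
best, best_val = t.copy(), coverage(t)
T = 1.0
for step in range(3000):
    cand = np.maximum(0.0, t + rng.normal(0.0, 0.5, n_dwell))  # neighbor state
    delta = coverage(cand) - coverage(t)
    if delta >= 0 or rng.random() < np.exp(delta / T):
        t = cand                         # Metropolis acceptance
    if coverage(t) > best_val:
        best, best_val = t.copy(), coverage(t)
    T *= 0.999                           # geometric cooling schedule
```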
Electroweak baryogenesis in the exceptional supersymmetric standard model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Wei, E-mail: chao@physics.umass.edu
2015-08-01
We study electroweak baryogenesis in the E{sub 6} inspired exceptional supersymmetric standard model (E{sub 6}SSM). The relaxation coefficients driven by singlinos and the new gaugino as well as the transport equation of the Higgs supermultiplet number density in the E{sub 6}SSM are calculated. Our numerical simulation shows that both CP-violating source terms from singlinos and the new gaugino can solely give rise to a correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.
Curvature-induced domain wall pinning
NASA Astrophysics Data System (ADS)
Yershov, Kostiantyn V.; Kravchuk, Volodymyr P.; Sheka, Denis D.; Gaididei, Yuri
2015-09-01
It is shown that a local bend of a nanowire acts as a source of pinning potential for a transversal head-to-head (tail-to-tail) domain wall. The eigenfrequency of free domain wall oscillations in the pinning potential and the effective friction are determined as functions of the curvature and the domain wall width. The pinning potential originates from the effective curvature-induced Dzyaloshinsky-like term in the exchange energy. The theoretical results are verified by means of micromagnetic simulations for the case of a parabolic wire bend.
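The eigenfrequency of small oscillations in any pinning potential follows from its curvature at the minimum via the generic harmonic relation ω = sqrt(U''(0)/m_eff); the snippet below evaluates this relation numerically (the quadratic test potential and effective mass are illustrative, not the curvature-induced potential derived in the paper):

```python
import numpy as np

def eigenfrequency(U, m_eff, h=1e-4):
    """omega = sqrt(U''(0) / m_eff) via a central finite difference.

    Generic harmonic-oscillator relation for small oscillations about the
    minimum of a pinning potential U (assumed to sit at q = 0).
    """
    curvature = (U(h) - 2.0 * U(0.0) + U(-h)) / h**2   # approximates U''(0)
    return np.sqrt(curvature / m_eff)

k = 2.5                                   # illustrative potential stiffness
omega = eigenfrequency(lambda q: 0.5 * k * q**2, m_eff=1.0)
```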
Background and Source Term Identification in Active Neutron Interrogation Methods
2011-03-24
interactions occurred to observe gamma ray peaks and not unduly increase simulation time. Not knowing the uranium enrichment modeled by Gozani, pure U...neutron interactions can occur. The uranium targets, though, should have increased neutron fluencies as the energy levels become below 2 MeV. This is...Assessment Monitor Site (TEAMS) at Kirtland AFB, NM. Iron (Fe-56), lead (Pb-207), polyethylene (C2H4 –– > C-12 & H-1), and uranium (U-235 and U-238) were
Coherent attacking continuous-variable quantum key distribution with entanglement in the middle
NASA Astrophysics Data System (ADS)
Zhang, Zhaoyuan; Shi, Ronghua; Zeng, Guihua; Guo, Ying
2018-06-01
We suggest an approach to the coherent attack of continuous-variable quantum key distribution (CVQKD) with an untrusted entangled source in the middle. The coherent attack strategy can be performed on both links of the quantum system, enabling the eavesdropper to steal more information from the proposed scheme using the entanglement correlation. Numerical simulation results show the improved performance of the attacked CVQKD system in terms of the derived secret key rate, with the controllable parameters maximizing the stolen information.
Numerical Modeling of Poroelastic-Fluid Systems Using High-Resolution Finite Volume Methods
NASA Astrophysics Data System (ADS)
Lemoine, Grady
Poroelasticity theory models the mechanics of porous, fluid-saturated, deformable solids. It was originally developed by Maurice Biot to model geophysical problems, such as seismic waves in oil reservoirs, but has also been applied to modeling living bone and other porous media. Poroelastic media often interact with fluids, such as in ocean bottom acoustics or propagation of waves from soft tissue into bone. This thesis describes the development and testing of high-resolution finite volume numerical methods, and simulation codes implementing these methods, for modeling systems of poroelastic media and fluids in two and three dimensions. These methods operate on both rectilinear grids and logically rectangular mapped grids. To allow the use of these methods, Biot's equations of poroelasticity are formulated as a first-order hyperbolic system with a source term; this source term is incorporated using operator splitting. Some modifications are required to the classical high-resolution finite volume method. Obtaining correct solutions at interfaces between poroelastic media and fluids requires a novel transverse propagation scheme and the removal of the classical second-order correction term at the interface, and in three dimensions a new wave limiting algorithm is also needed to correctly limit shear waves. The accuracy and convergence rates of the methods of this thesis are examined for a variety of analytical solutions, including simple plane waves, reflection and transmission of waves at an interface between different media, and scattering of acoustic waves by a poroelastic cylinder. Solutions are also computed for a variety of test problems from the computational poroelasticity literature, as well as some original test problems designed to mimic possible applications for the simulation code.
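The source-term incorporation by operator splitting can be sketched on a scalar 1D model problem u_t + a·u_x = −k·u: a hyperbolic (upwind) step followed by an exact source step. This is a toy stand-in for Biot's first-order system, not the thesis's high-resolution wave-propagation method:

```python
import numpy as np

a, k = 1.0, 0.4                        # advection speed, source decay rate
nx, dx = 200, 0.05
dt = 0.8 * dx / a                      # CFL number 0.8
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-((x - 2.0) ** 2) / 0.1)    # initial pulse
mass0 = u.sum() * dx

def step(u):
    unew = u.copy()
    # 1) hyperbolic part: first-order upwind for u_t + a u_x = 0
    unew[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
    unew[0] = 0.0                      # zero inflow boundary
    # 2) source part: u_t = -k u integrated exactly over dt
    return unew * np.exp(-k * dt)

nt = 40
for _ in range(nt):
    u = step(u)
mass_ratio = u.sum() * dx / mass0
```

Because the upwind step is conservative (up to negligible boundary flux) and the source step is exact, the total mass decays by exactly exp(−k·nt·dt), a handy check that the splitting is implemented correctly.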
Towards medium-term (order of months) morphodynamic modelling of the Teign estuary, UK
NASA Astrophysics Data System (ADS)
Bernardes, Marcos E. C.; Davidson, Mark A.; Dyer, Keith R.; George, Ken J.
2006-07-01
The main objective of this paper is to address the principal mechanisms involved in the medium-term (order of months to years) morphodynamic evolution of estuaries through the application of process-based numerical modelling. The Teign estuary (Teignmouth, UK) is the selected site. The system is forced by the macrotidal semi-diurnal tide in the English Channel and is perturbed to a minor extent by high river discharge events (freshets). Although waves have a definite influence on the adjacent coastal area, Wells (Teignmouth Quay Development Environmental Statement: Changes to Physical Processes. Report R.984c:140. ABP Marine Environmental Research Ltd., Southampton, 2002b) suggested that swell waves do not enter the estuary. Hence, wave effects are neglected in this study, as only tides and the river discharge are taken into account. The sediment grain size is highly variable, but mainly sandy. Within the frame of the COAST3D project (
Effects of radiative heat transfer on the turbulence structure in inert and reacting mixing layers
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Friedrich, Rainer
2015-05-01
We use large-eddy simulation to study the interaction between turbulence and radiative heat transfer in low-speed inert and reacting plane temporal mixing layers. An explicit filtering scheme based on approximate deconvolution is applied to treat the closure problem arising from quadratic nonlinearities of the filtered transport equations. In the reacting case, the working fluid is a mixture of ideal gases where the low-speed stream consists of hydrogen and nitrogen and the high-speed stream consists of oxygen and nitrogen. Both streams are premixed in a way that the free-stream densities are the same and the stoichiometric mixture fraction is 0.3. The filtered heat release term is modelled using equilibrium chemistry. In the inert case, the low-speed stream consists of nitrogen at a temperature of 1000 K and the high-speed stream is pure water vapour of 2000 K, when radiation is turned off. Simulations assuming the gas mixtures as gray gases with artificially increased Planck mean absorption coefficients are performed in which the large-eddy simulation code and the radiation code PRISSMA are fully coupled. In both cases, radiative heat transfer is found to clearly affect fluctuations of thermodynamic variables, Reynolds stresses, and Reynolds stress budget terms like pressure-strain correlations. Source terms in the transport equation for the variance of temperature are used to explain the decrease of this variance in the reacting case and its increase in the inert case.
NASA Astrophysics Data System (ADS)
Ishijima, K.; Toyoda, S.; Sudo, K.; Yoshikawa, C.; Nanbu, S.; Aoki, S.; Nakazawa, T.; Yoshida, N.
2009-12-01
It is well known that isotopic information is useful for qualitatively understanding the cycles and constraining the sources of some atmospheric species, but so far no study has modelled N2O isotopomers throughout the atmosphere from the troposphere to the stratosphere with realistic surface N2O isotopomer emissions. We have started to develop a model to simulate spatiotemporal variations of the atmospheric N2O isotopomers in both the troposphere and the stratosphere, based on a chemistry-coupled atmospheric general circulation model, in order to obtain a more accurate quantitative understanding of the global N2O cycle. For surface emissions of the isotopomers, a combination of EDGAR-based anthropogenic and soil fluxes and monthly varying GEIA oceanic fluxes is used, with the isotopic values of the global total sources estimated from the long-term trend of atmospheric N2O isotopomers derived from firn-air analyses. Isotopic fractionation in chemical reactions is considered for photolysis and photo-oxidation of N2O in the stratosphere. The isotopic fractionation coefficients have been taken from laboratory studies, but we will also test coefficients determined by theoretical calculations. In terms of the global N2O isotopomer budgets, precise quantification of the sources is quite challenging, because even the spatiotemporal variabilities of N2O sources have never been adequately estimated. Therefore, we have first started validating the simulated isotopomer results in the stratosphere against the isotopomer profiles obtained by balloon observations. N2O concentration profiles are mostly well reproduced, partly because dynamical processes are realistically reproduced by nudging with reanalysis meteorological data. However, the concentration in the polar vortex tends to be overestimated, probably due to the relatively coarse wavelength resolution of the photolysis calculation. 
The same model features appear in the isotopomer results, which are generally underestimated relative to the balloon observations even where the concentration is well simulated. This tendency has been somewhat improved by incorporating another photolysis scheme with slightly higher wavelength resolution into the model. From another point of view, these facts indicate that N2O isotopomers can be used to validate stratospheric photochemical calculations in models, because the isotopomer ratios are highly sensitive to settings such as the wavelength resolution of the photochemical scheme. N2O isotopomer modelling therefore seems useful not only for validating the fractionation coefficients and the isotopic characterization of sources, but may also serve as an index of the precision of the stratospheric photolysis in the model.
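The effect of a fractionation coefficient during stratospheric destruction is often summarized with a Rayleigh model, in which the residual fraction f of N2O is enriched according to R/R0 = f^(α−1); the sketch below uses this standard textbook form with an illustrative ε, not the coefficients employed in the model:

```python
def rayleigh_delta(delta0, eps_permil, f):
    """Delta value (permil) of residual N2O after a fraction 1 - f is destroyed.

    Rayleigh fractionation: R/R0 = f**(alpha - 1), alpha = 1 + eps/1000.
    eps_permil < 0 means the destroyed fraction is isotopically light,
    so the remainder becomes enriched.
    """
    alpha = 1.0 + eps_permil / 1000.0
    r_ratio = f ** (alpha - 1.0)
    return (1000.0 + delta0) * r_ratio - 1000.0

# Illustrative: destruction with eps = -40 permil enriches the remainder.
d_half = rayleigh_delta(0.0, -40.0, 0.5)
```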
NASA Astrophysics Data System (ADS)
Arpino, F.; Cortellessa, G.; Dell'Isola, M.; Scungio, M.; Focanti, V.; Profili, M.; Rotondi, M.
2017-11-01
The increasing price of fossil derivatives, global warming and energy market instabilities have led to an increasing interest in renewable energy sources such as wind energy. Amongst the different typologies of wind generators, small scale Vertical Axis Wind Turbines (VAWT) present the greatest potential for off-grid power generation at low wind speeds. In the present work, Computational Fluid Dynamics (CFD) simulations were performed in order to investigate the performance of an innovative configuration of straight-blade Darrieus-style vertical axis micro wind turbine, specifically developed for small scale energy conversion at low wind speeds. The micro turbine under investigation is composed of three pairs of airfoils, each consisting of a main and an auxiliary blade with different chord lengths. The simulations were made using the open source finite volume based CFD toolbox OpenFOAM, considering different turbulence models and adopting a moving mesh approach for the turbine rotor. The simulated data are reported in terms of dimensionless power coefficients for dynamic performance analysis. The results from the simulations were compared to the data obtained from experiments on a scaled model of the same VAWT configuration, conducted in a closed circuit open chamber wind tunnel facility available at the Laboratory of Industrial Measurements (LaMI) of the University of Cassino and Lazio Meridionale (UNICLAM). From the proposed analysis, the most suitable model for simulating the performance of the micro turbine under investigation is the one-equation Spalart-Allmaras model, although some discrepancies between numerical and experimental data are observed for TSR values higher than 1.1 under the conditions analysed in the present work.
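The dimensionless power coefficient used to report dynamic performance, and the tip speed ratio (TSR) it is plotted against, follow from the standard wind-turbine definitions; all numbers below are illustrative, not the tested turbine's data:

```python
def power_coefficient(torque, omega, rho, area, v_wind):
    """Cp = P / (0.5 * rho * A * V^3), with shaft power P = torque * omega."""
    return torque * omega / (0.5 * rho * area * v_wind ** 3)

def tip_speed_ratio(omega, radius, v_wind):
    """TSR = omega * R / V, the abscissa of the usual Cp-TSR curve."""
    return omega * radius / v_wind

# Illustrative micro-VAWT numbers (SI units), not the turbine in the paper.
cp = power_coefficient(torque=1.2, omega=30.0, rho=1.225, area=0.5, v_wind=6.0)
tsr = tip_speed_ratio(omega=30.0, radius=0.25, v_wind=6.0)
```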
New VLBI2010 scheduling strategies and implications on the terrestrial reference frames
NASA Astrophysics Data System (ADS)
Sun, Jing; Böhm, Johannes; Nilsson, Tobias; Krásná, Hana; Böhm, Sigrid; Schuh, Harald
2014-05-01
In connection with the work for the next generation VLBI2010 Global Observing System (VGOS) of the International VLBI Service for Geodesy and Astrometry, a new scheduling package (Vie_Sched) has been developed at the Vienna University of Technology as a part of the Vienna VLBI Software. In addition to the classical station-based approach it is equipped with a new scheduling strategy based on the radio sources to be observed. We introduce different configurations of source-based scheduling options and investigate the implications on present and future VLBI2010 geodetic schedules. By comparison to existing VLBI schedules of the continuous campaign CONT11, we find that the source-based approach with two sources has a performance similar to the station-based approach in terms of number of observations, sky coverage, and geodetic parameters. For an artificial 16 station VLBI2010 network, the source-based approach with four sources provides an improved distribution of source observations on the celestial sphere. Monte Carlo simulations yield slightly better repeatabilities of station coordinates with the source-based approach with two sources or four sources than the classical strategy. The new VLBI scheduling software with its alternative scheduling strategy offers a promising option with respect to applications of the VGOS.
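A toy version of a source-based selection criterion, choosing the pair of candidate sources with the largest angular separation on the celestial sphere; Vie_Sched's actual criterion additionally weighs station visibility, slew times, and sky coverage:

```python
import numpy as np

def unit_vec(ra, dec):
    """Unit vector for a source at right ascension/declination (radians)."""
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def best_pair(sources):
    """Pair of sources with the largest angular separation (toy criterion)."""
    vecs = [unit_vec(ra, dec) for ra, dec in sources]
    best, best_sep = None, -1.0
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            sep = np.arccos(np.clip(vecs[i] @ vecs[j], -1.0, 1.0))
            if sep > best_sep:
                best, best_sep = (i, j), sep
    return best, best_sep
```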
Computational modeling of blast exposure associated with recoilless weapons combat training
NASA Astrophysics Data System (ADS)
Wiri, S.; Ritter, A. C.; Bailie, J. M.; Needham, C.; Duckworth, J. L.
2017-11-01
Military personnel are exposed to blast as part of routine combat training with shoulder-fired recoilless rifles. These weapons fire large-caliber ammunition capable of disabling structures and up-armored vehicles (e.g., tanks). Scientific, medical, and military leaders are beginning to recognize that the blast overpressure from these shoulder-fired weapons may result in acute and even long-term physiological effects on military personnel. However, the back blast generated from the Carl Gustav and Shoulder-launched Multipurpose Assault Weapon (SMAW) shoulder-fired weapons on the weapon operator has not been quantified. By quantifying and modeling the full-body blast exposure from these weapons, better injury correlations can be constructed. Blast exposure data from the Carl Gustav and SMAW were used to calibrate a propellant burn source term for computational simulations of blast exposure on operators of these shoulder-mounted weapon systems. A propellant burn model provided the source term for each weapon to capture blast effects. Blast data from personnel-mounted gauges during weapon firing were used to create initial, high-fidelity 3D computational fluid dynamics simulations using SHAMRC (Second-order Hydrodynamic Automatic Mesh Refinement Code). These models were then improved using data collected from static blast sensors positioned around the military personnel while the weapons were used in actual combat training. The final simulation models for both the Carl Gustav and SMAW were in good agreement with the data collected from the personnel-mounted and static pressure gauges. Using the final simulation results, contour maps were created for the peak overpressure and peak overpressure impulse experienced by military personnel firing the weapon, as well as by those assisting with the firing.
Reconstruction of the full-body blast loading enables a more accurate assessment of potential mechanisms of injury due to air blast, even for subjects not wearing blast gauges themselves. By accurately understanding the blast exposure and its variation across an individual, more meaningful correlations with physiologic response, including potential TBI-spectrum physiology associated with sub-concussive blast exposure, can be established. As blast injury thresholds become better defined, results from these reconstructions can provide important insights into approaches for reducing the risk of injury to personnel operating shoulder-launched weapons.
NASA Astrophysics Data System (ADS)
van Walsum, P. E. V.; Supit, I.
2012-06-01
Hydrologic climate change modelling is hampered by climate-dependent model parameterizations. To reduce this dependency, we extended the regional hydrologic modelling framework SIMGRO to host a two-way coupling between the soil moisture model MetaSWAP and the crop growth simulation model WOFOST, accounting for ecohydrologic feedbacks in terms of the radiation fraction that reaches the soil, crop coefficient, interception fraction of rainfall, interception storage capacity, and root zone depth. Except for root zone depth, these feedbacks depend on the leaf area index (LAI). The influence of regional groundwater on crop growth is included via a coupling to MODFLOW. Two versions of the MetaSWAP-WOFOST coupling were set up: one with exogenous vegetation parameters, the "static" model, and one with endogenous crop growth simulation, the "dynamic" model. Parameterization of the static and dynamic models ensured that for the current climate the simulated long-term averages of actual evapotranspiration are the same for both models. Simulations were made for two climate scenarios and two crops: grass and potato. In the dynamic model, higher temperatures in a warm year under the current climate resulted in accelerated crop development and, in the case of potato, a shorter growing season, thus partly avoiding the late summer heat. The static model has a higher potential transpiration; depending on the available soil moisture, this translates into a higher actual transpiration. This difference between the static and dynamic models is enlarged by climate change in combination with higher CO2 concentrations. Including the dynamic crop simulation gives systematically larger predicted climate-change effects on recharge for potato (and other annual arable crops). Crop yields from soils with poor water retention capacities strongly depend on capillary rise if moisture supply from other sources is limited.
Thus, including a crop simulation model in an integrated hydrologic simulation provides a valuable addition for hydrologic modelling as well as for crop modelling.
NASA Astrophysics Data System (ADS)
Yihdego, Yohannes; Al-Weshah, Radwan A.
2017-11-01
Transport groundwater modelling was undertaken to assess potential remediation scenarios and provide optimal remediation options for consideration. The purpose of the study was to allow 50 years of predictive remediation simulation time. The results depict the likely total petroleum hydrocarbon (TPH) migration pattern in the area under the worst-case scenario. The remediation scenario simulations indicate that a do-nothing approach will likely not achieve the target water quality within 50 years. Similarly, a complete source-removal approach will also likely not achieve the target water quality within 50 years. Partial source removal could be expected to remove a significant portion of the contaminant mass, but would increase the rate of contaminant recharge in the short to medium term. The pump-treat-reinject simulation indicates that this option appears feasible and could reduce the area of the 0.01 mg/L TPH contour for both Raudhatain and Umm Al-Aish by 35 and 30%, respectively, within 50 years. The rate of improvement and the completion date would depend on a range of factors, such as bore field arrangements, pumping rates, reinjection water quality and additional volumes being introduced, and would require further optimisation and field pilot trials.
Simulating ensembles of source water quality using a K-nearest neighbor resampling approach.
Towler, Erin; Rajagopalan, Balaji; Seidel, Chad; Summers, R Scott
2009-03-01
Climatological, geological, and water management factors can cause significant variability in surface water quality. As drinking water quality standards become more stringent, the ability to quantify the variability of source water quality becomes more important for decision-making and planning in water treatment for regulatory compliance. However, the paucity of long-term water quality data makes it challenging to apply traditional simulation techniques. To overcome this limitation, we have developed and applied a robust nonparametric K-nearest neighbor (K-nn) bootstrap approach utilizing the United States Environmental Protection Agency's Information Collection Rule (ICR) data. In this technique, an appropriate "feature vector" is first formed from the best available explanatory variables. The nearest neighbors to the feature vector are identified from the ICR data and are resampled using a weight function. Repeating this process yields water quality ensembles, and consequently the distribution and quantification of the variability. The main strengths of the approach are its flexibility, its simplicity, and its ability to use a large amount of spatial data with limited temporal extent to provide water quality ensembles for any given location. We demonstrate this approach by applying it to simulate monthly ensembles of total organic carbon for two utilities in the U.S. with very different watersheds, and of alkalinity and bromide at two other U.S. utilities.
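The resampling core of such a K-nn bootstrap can be sketched as follows; the records, feature variables, and the rank-based weight kernel here are illustrative stand-ins, not the ICR data or the study's actual feature vector:

```python
import math
import random

random.seed(42)

# Hypothetical ICR-style records: (feature vector, observed TOC in mg/L).
# The feature vector here is (month, flow_index); both are invented.
records = [
    ((1, 0.8), 3.1), ((1, 1.2), 3.6), ((2, 0.9), 2.9),
    ((2, 1.1), 3.3), ((3, 1.0), 2.7), ((3, 1.4), 3.9),
    ((4, 0.7), 2.4), ((4, 1.3), 3.5),
]

def knn_resample(query, records, k=4, n_draws=1000):
    """Draw an ensemble by resampling the k nearest neighbors of `query`,
    weighted so that closer neighbors are picked more often."""
    nearest = sorted(
        (math.dist(query, feat), val) for feat, val in records
    )[:k]
    # Rank-based weights: w_r = (1/r) / sum_{j=1..k} (1/j), r = 1..k.
    norm = sum(1.0 / j for j in range(1, k + 1))
    weights = [(1.0 / r) / norm for r in range(1, k + 1)]
    values = [val for _, val in nearest]
    return random.choices(values, weights=weights, k=n_draws)

ensemble = knn_resample((2, 1.0), records)
print(min(ensemble), max(ensemble))
```

Because only observed values are resampled, the ensemble never extrapolates beyond the data, which is one reason the approach stays robust with short records.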
Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis
NASA Astrophysics Data System (ADS)
Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.
2013-12-01
Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a significant source of human exposure to potentially carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies intended to better understand the process and guide field investigation. While multiple analytical and numerical models have been developed to simulate the vapor intrusion process, detailed validation of these models against well-controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and with soil gas flux and indoor air concentration measurements. In this work, we present an effort to validate a three-dimensional vapor intrusion model against a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of the indoor air concentration based on the most uncertain input parameters.
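The Monte Carlo step can be sketched by propagating sampled uncertain inputs through a stand-in attenuation-factor relation; the actual study uses a full 3-D numerical model, and the distributions and parameter values below are hypothetical:

```python
import random
import statistics

random.seed(0)

def indoor_air_concentration(c_source, attenuation):
    # Indoor concentration as source vapor concentration times an
    # empirical attenuation factor (a stand-in for the full 3-D model).
    return c_source * attenuation

def monte_carlo(n=10_000):
    samples = []
    for _ in range(n):
        # Illustrative distributions for the most uncertain inputs:
        c_source = random.lognormvariate(mu=1.0, sigma=0.5)   # ug/m^3
        attenuation = 10 ** random.uniform(-5, -3)            # dimensionless
        samples.append(indoor_air_concentration(c_source, attenuation))
    return samples

samples = monte_carlo()
print(statistics.median(samples))
```

The resulting sample set is then summarized as an empirical probability distribution of indoor air concentration, from which exceedance probabilities can be read directly.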
Reagan, Matthew T; Moridis, George J; Keen, Noel D; Johnson, Jeffrey N
2015-01-01
Hydrocarbon production from unconventional resources and the use of reservoir stimulation techniques, such as hydraulic fracturing, has grown explosively over the last decade. However, concerns have arisen that reservoir stimulation creates significant environmental threats through the creation of permeable pathways connecting the stimulated reservoir with shallower freshwater aquifers, thus resulting in the contamination of potable groundwater by escaping hydrocarbons or other reservoir fluids. This study investigates, by numerical simulation, gas and water transport between a shallow tight-gas reservoir and a shallower overlying freshwater aquifer following hydraulic fracturing operations, if such a connecting pathway has been created. We focus on two general failure scenarios: (1) communication between the reservoir and aquifer via a connecting fracture or fault and (2) communication via a deteriorated, preexisting nearby well. We conclude that the key factors driving short-term transport of gas include high permeability for the connecting pathway and the overall volume of the connecting feature. Production from the reservoir is likely to mitigate release through reduction of available free gas and lowering of reservoir pressure, and not producing may increase the potential for release. We also find that hydrostatic tight-gas reservoirs are unlikely to act as a continuing source of migrating gas, as gas contained within the newly formed hydraulic fracture is the primary source for potential contamination. Such incidents of gas escape are likely to be limited in duration and scope for hydrostatic reservoirs. Reliable field and laboratory data must be acquired to constrain the factors and determine the likelihood of these outcomes. 
Key Points: Short-term leakage from fractured reservoirs requires high-permeability pathways. Production strategy affects the likelihood and magnitude of gas release. Gas release is likely short-term, without additional driving forces. PMID:26726274
Re-formulation and Validation of Cloud Microphysics Schemes
NASA Astrophysics Data System (ADS)
Wang, J.; Georgakakos, K. P.
2007-12-01
The research focuses on improving quantitative precipitation forecasts by removing significant uncertainties in current cloud microphysics schemes embedded in models such as WRF and MM5 and cloud-resolving models such as GCE. Reformulation of several production terms in these microphysics schemes was found necessary. When estimating four graupel production terms involved in the accretion between rain, snow and graupel, current microphysics schemes assume that all raindrops and snow particles fall at their mass-weighted mean terminal velocities, so that analytic solutions can be found for these production terms. Initial analysis and tests showed that these approximate analytic solutions give significant and systematic overestimates of these terms and thus become one of the major error sources of graupel overproduction and the associated extreme radar reflectivity in simulations. These results are corroborated by several reports. For example, the analytic solution overestimates the graupel production by collisions between raindrops and snow by up to 230%. The structure of "pure" snow (not rimed) and "pure" graupel (completely rimed) in current microphysics schemes excludes intermediate forms between the two and thus becomes a significant cause of graupel overproduction in hydrometeor simulations. In addition, the generation of the same density graupel by both the freezing of supercooled water and the riming of snow may cause underestimation of graupel production by freezing. A parameterization scheme for the riming degree of snow is proposed, and a dynamic fallspeed-diameter relationship and density-diameter relationship of rimed snow are then assigned to graupel based on the diagnosed riming degree. To test whether these new treatments improve quantitative precipitation forecasts, Hurricane Katrina and a severe winter snowfall event in the Sierra Nevada Range are selected as case studies.
A series of control simulations and sensitivity tests was conducted for these two cases. Two statistical methods are used to compare the radar reflectivity simulated by the model with that detected by ground-based and airborne radar at different height levels. It was found that the changes made to the current microphysical schemes improve QPF and microphysics simulation significantly.
Integrating diverse forage sources reduces feed gaps on mixed crop-livestock farms.
Bell, L W; Moore, A D; Thomas, D T
2017-12-04
Highly variable climates induce large variability in the supply of forage for livestock, and so farmers must manage their livestock systems to reduce the risk of feed gaps (i.e. periods when livestock feed demand exceeds forage supply). However, mixed crop-livestock farmers can utilise a range of feed sources on their farms to help mitigate these risks. This paper reports on the development and application of a simple whole-farm feed-energy balance calculator that is used to evaluate the frequency and magnitude of feed gaps. The calculator matches long-term simulations of variation in forage and metabolisable energy supply from diverse sources against energy demand for different livestock enterprises. Scenarios of increasing the diversity of forage sources in livestock systems are investigated for six locations selected to span Australia's crop-livestock zone. We found that systems relying on only one feed source were prone to a higher risk of feed gaps and hence would often have to reduce stocking rates or use supplementary feed to mitigate these risks. At all sites, adding more feed sources to the farm feedbase improved the continuity of supply of both fresh and carry-over forage, reducing the frequency and magnitude of feed deficits. However, there were diminishing returns from making the feedbase more complex, with combinations of two to three feed sources typically achieving the maximum benefits in terms of reducing the risk of feed gaps. Higher stocking rates could be maintained while limiting risk when combinations of other feed sources were introduced into the feedbase. For the same level of risk, a feedbase relying on a diversity of forage sources could support stocking rates 1.4 to 3 times higher than a single pasture source.
This suggests that there is significant capacity to mitigate the risk of feed gaps while increasing 'safe' stocking rates through better integration of feed sources on mixed crop-livestock farms across diverse regions and climates.
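The whole-farm feed-energy balance idea can be sketched as a monthly supply-versus-demand ledger; the forage numbers below are invented for illustration, not outputs of the long-term simulations used in the paper:

```python
# Monthly metabolisable energy supply (MJ ME/ha) from two hypothetical
# feed sources, against a fixed livestock energy demand. A feed gap
# occurs in any month where total supply falls short of demand.
pasture = [900, 700, 400, 200, 150, 100, 120, 200, 500, 800, 950, 900]
crop_graze = [0, 0, 0, 0, 600, 700, 650, 400, 0, 0, 0, 0]
demand = [450] * 12

def feed_gaps(supplies, demand):
    """Count deficit months and total energy deficit for a feedbase
    made up of one or more monthly supply series."""
    totals = [sum(month) for month in zip(*supplies)]
    gaps = [max(d - s, 0) for s, d in zip(totals, demand)]
    return {
        "months_in_deficit": sum(1 for g in gaps if g > 0),
        "total_deficit": sum(gaps),
    }

print(feed_gaps([pasture], demand))              # single feed source
print(feed_gaps([pasture, crop_graze], demand))  # diversified feedbase
```

In this toy example, adding the second source cuts deficit months from six to two, mirroring the pattern reported above that two to three complementary sources capture most of the benefit.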
Gravitational wave source counts at high redshift and in models with extra dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Bellido, Juan; Nesseris, Savvas; Trashorras, Manuel, E-mail: juan.garciabellido@uam.es, E-mail: savvas.nesseris@csic.es, E-mail: manuel.trashorras@csic.es
2016-07-01
Gravitational wave (GW) source counts have recently been shown to be able to test how gravitational radiation propagates with distance from the source. Here, we extend this formalism to cosmological scales, i.e. the high-redshift regime, and we discuss the complications of applying this methodology to high-redshift sources. We also allow for models with compactified extra dimensions, as in the Kaluza-Klein model. Furthermore, we consider the case of intermediate redshifts, i.e. 0 < z ≲ 1, where we show it is possible to find an analytical approximation for the source counts dN/d(S/N). This can be done in terms of cosmological parameters, such as the matter density Ω_m,0 of the cosmological constant model or the cosmographic parameters for a general dark energy model. Our analysis is as general as possible, but it depends on two important factors: a source model for the black hole binary mergers and the GW source to galaxy bias. This methodology also allows us to obtain the higher-order corrections of the source counts in terms of the signal-to-noise ratio S/N. We then forecast the sensitivity of future observations in constraining GW physics, but also the underlying cosmology, by simulating sources distributed over a finite range of signal-to-noise ratios, with between 10 and 500 sources as expected from future detectors. We find that with 500 events it will be possible to constrain the present matter density parameter Ω_m,0 to within a few percent, with the precision growing quickly with the number of events. In the case of extra dimensions we find that, depending on the degeneracies of the model, with 500 events it may be possible to place stringent limits on the existence of extra dimensions if those degeneracies can be broken.
Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics
Petrov, Yury
2012-01-01
EEG/MEG source localization based on a “distributed solution” is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms in which a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF), and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
Barczi, Jean-François; Rey, Hervé; Griffon, Sébastien; Jourdan, Christophe
2018-04-18
Many studies exist in the literature dealing with mathematical representations of root systems, categorized, for example, as pure structure descriptions, partial differential equations or functional-structural plant models. However, in these studies, root architecture modelling has seldom been carried out at the organ level with the inclusion of environmental influences that can be integrated into a whole-plant characterization. We have conducted a multidisciplinary study on root systems including field observations, architectural analysis, and formal and mathematical modelling. This integrative and coherent approach leads to a generic model (DigR) and its software simulator. Architectural analysis applied to root systems supports root type classification and the design of an architectural unit for each species. Roots belonging to a particular type share dynamic and morphological characteristics which consist of topological and geometric features. The DigR simulator is integrated into the Xplo environment, with a user interface to input parameter values and make output ready for dynamic 3-D visualization, statistical analysis and saving to standard formats. DigR runs as a quasi-parallel computing algorithm and may be used either as a standalone tool or integrated into other simulation platforms. The software is open-source and free to download at http://amapstudio.cirad.fr/soft/xplo/download. DigR is based on three key points: (1) a root-system architectural analysis, (2) root type classification and modelling and (3) a restricted set of 23 root type parameters with flexible values indexed in terms of root position. The genericity and botanical accuracy of the model is demonstrated for growth, branching, mortality and reiteration processes, and for different root architectures. Plugin examples demonstrate the model's versatility at simulating plastic responses to environmental constraints.
Outputs of the model include diverse root system structures such as tap-root, fasciculate, tuberous, nodulated and clustered root systems. DigR is based on plant architecture analysis, which leads to a specific root type classification and organization that are directly linked to field measurements. The open-source simulator of the model has been included within a user-friendly environment. DigR's accuracy and versatility are demonstrated for growth simulations of complex root systems for both annual and perennial plants.
A Simulated Environment Experiment on Annoyance Due to Combined Road Traffic and Industrial Noises
Marquis-Favre, Catherine; Morel, Julien
2015-01-01
Total annoyance due to combined noises is still difficult to predict adequately. This scientific gap is an obstacle to noise action planning, especially in urban areas where inhabitants are usually exposed to high noise levels from multiple sources. In this context, this work aims to highlight potential ways to enhance the prediction of total annoyance. The work is based on a simulated environment experiment in which participants performed activities in a living room while exposed to combined road traffic and industrial noises. The first objective of the experiment presented in this paper was to gain further understanding of the effects on annoyance of some acoustical factors, non-acoustical factors and potential interactions between the combined noise sources. The second was to assess total annoyance models constructed from the data collected during the experiment and tested using data gathered in situ. The results obtained in this work highlighted the superiority of perceptual models. In particular, perceptual models with an interaction term seemed to be the best predictors for the two combined noise sources under study, even with large differences in sound pressure level. Thus, these results reinforce the need to focus on perceptual models and to improve the prediction of partial annoyances. PMID:26197326
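A perceptual total-annoyance model with an interaction term, of the general form considered above, can be sketched as follows; the coefficients are placeholders, not the values fitted in the study:

```python
def total_annoyance(a_road, a_ind, coeffs=(0.55, 0.40, 0.05)):
    """Perceptual model: total annoyance as a weighted sum of the
    partial annoyances (road traffic, industrial) plus an interaction
    term. Coefficients are illustrative, not fitted values."""
    w_road, w_ind, w_int = coeffs
    return w_road * a_road + w_ind * a_ind + w_int * a_road * a_ind

# Partial annoyances rated on a 0-10 scale.
print(total_annoyance(6.0, 3.0))
```

The interaction term lets a strong second source amplify (or, with a negative coefficient, mask) the annoyance attributed to the first, which a purely additive model cannot express.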
Guo, Yin; Sun, LiQun; Yang, Zheng; Liu, Zilong
2016-02-20
During this study we constructed a generalized parametric modified four-objective multipass matrix system (MMS). We used an optical system comprising four asymmetrical spherical mirrors to improve the alignment process. The use of a paraxial equation for the design of the front transfer optics yielded the initial condition for modeling our MMS. We performed a ray tracing simulation to calculate the significant aberration of the system (astigmatism). Based on the calculated meridional and sagittal focus positions, the complementary focusing mirror was easily designed to provide an output beam free of astigmatism. We present an example of a 108-transit multipass system (5×7 matrix arrangement) with a relatively large numerical aperture source (a xenon light source). The whole system exhibits zero theoretical geometrical loss when simulated with Zemax software. The MMS construction strategy described in this study provides an anastigmatic output beam and a generalized approach to designing a controllable matrix spot pattern on the field mirrors. Asymmetrical reflective mirrors aid in aligning the whole system with high efficiency. With the generalized design strategy in terms of optics configuration and the asymmetrical fabrication method presented in this paper, other kinds of multipass matrix systems coupled with different sources and detector systems can also be achieved.
Ng, Ding-Quan; Lin, Yi-Pin
2016-01-01
In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the “lead-free” system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg·L−1, persisted for as long as five months in the system. “Lead-free” brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg·L−1, but caused “blue water” problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments. PMID:26927154
NASA Astrophysics Data System (ADS)
Disch, C.
2014-09-01
Mobile surveillance systems are used to find lost radioactive sources and possible nuclear threats in urban areas. The REWARD collaboration [1] aims to develop a complete radiation monitoring system that can be installed in mobile or stationary setups across a wide area. The scenarios include nuclear terrorism threats, lost radioactive sources, radioactive contamination and nuclear accidents. This paper shows the performance capabilities of the REWARD system in different scenarios. The results include both Monte Carlo simulations and neutron and gamma-ray detection performance in terms of efficiency and nuclide identification. The outcomes of several radiation mapping surveys with the entire REWARD system are also presented.
Effects of experimental nitrogen deposition on peatland carbon pools and fluxes: a modeling analysis
NASA Astrophysics Data System (ADS)
Wu, Y.; Blodau, C.; Moore, T. R.; Bubier, J. L.; Juutinen, S.; Larmola, T.
2014-07-01
Nitrogen (N) pollution of peatlands alters their carbon (C) balances, yet long-term effects and controls are poorly understood. We applied the model PEATBOG to analyze the impacts of long-term N fertilization on C cycling in an ombrotrophic bog. Simulations of summer gross ecosystem production (GEP), ecosystem respiration (ER) and net ecosystem exchange (NEE) were evaluated against 8 years of observations and extrapolated for 80 years to identify potential effects of N fertilization and factors influencing model behavior. The model successfully simulated moss decline and raised GEP, ER and NEE on fertilized plots. GEP was systematically overestimated in the model compared to the field data due to the high tolerance of Sphagnum to N deposition in the model. Model performance regarding the 8-year response of GEP and NEE to N was improved by introducing an N content threshold that shifts the response of photosynthetic capacity to N content in shrubs and graminoids from positive to negative at high N contents. Such changes also eliminated the competitive advantages of vascular species and led to resilience of mosses in the long term. Despite the large short-term changes in C fluxes, the simulated GEP, ER and NEE after 80 years depended on whether a graminoid- or shrub-dominated system evolved. When the peatland remained shrub-Sphagnum dominated, it shifted to a C source after only 10 years of fertilization at 6.4 g N m-2 yr-1, whereas this was not the case when it became graminoid-dominated. The modeling results thus highlight the importance of ecosystem adaptation and the reaction of plant functional types to N deposition when predicting the future C balance of N-polluted cool temperate bogs.
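The N content threshold described above amounts to a piecewise response function; a minimal sketch, with an illustrative threshold and slopes rather than PEATBOG's calibrated values:

```python
def photosynthetic_capacity(n_content, n_threshold=2.0,
                            up_slope=0.3, down_slope=0.5):
    """Relative photosynthetic capacity as a function of tissue N
    content (%): positive response below the threshold, negative
    above it. Threshold and slopes are illustrative only."""
    if n_content <= n_threshold:
        return 1.0 + up_slope * n_content
    peak = 1.0 + up_slope * n_threshold
    return peak - down_slope * (n_content - n_threshold)

# Below the threshold N enhances capacity; above it, capacity declines.
print(photosynthetic_capacity(1.0), photosynthetic_capacity(3.0))
```

Capping the benefit of N in this way removes the runaway advantage of vascular plants at high deposition rates, which is what allows the simulated mosses to persist.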
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
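The reweighting idea behind irMxNE can be illustrated on a toy sparse-recovery problem. The sketch below is a plain iteratively reweighted ridge scheme that mimics an l0.5-type penalty on single coefficients; it is not the block-coordinate-descent irMxNE solver, and the forward model is a random matrix rather than a real leadfield:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: 5 sensors, 6 candidate sources, 2 truly active.
A = rng.standard_normal((5, 6))
x_true = np.zeros(6)
x_true[[1, 4]] = [2.0, -1.5]
b = A @ x_true

def irls_sparse(A, b, lam=1e-3, n_iter=100, eps=1e-8):
    """Iteratively reweighted ridge steps approximating an l0.5-type
    penalty: each iterate solves a weighted l2 problem whose weights
    grow as a coefficient shrinks, driving small coefficients to zero
    while leaving large ones nearly unbiased."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # minimum-norm start
    for _ in range(n_iter):
        w = (np.abs(x) + eps) ** -1.5
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

x_hat = irls_sparse(A, b)
print(np.round(x_hat, 3))
```

Compared with a single l1 solve, the reweighting step is what reduces the amplitude bias on the surviving coefficients, the property the abstract highlights for irMxNE.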
OpenMC In Situ Source Convergence Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldrich, Garrett Allen; Dutta, Soumya; Woodring, Jonathan Lee
2016-05-07
We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, by a user setting too optimistic parameters, or too late, by setting too conservative a parameter.
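A minimal sketch of entropy-based convergence detection: the Shannon entropy of the source distribution over mesh cells is tracked per batch, and convergence is declared once it stops drifting. The moving-window range check stands in for the stochastic oscillator, and the cell counts and batch schedule are invented:

```python
import math
import random

random.seed(1)

def shannon_entropy(counts):
    """Shannon entropy (bits) of the source distribution over mesh cells."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def detect_convergence(entropy_series, window=5, tol=0.02):
    """Declare convergence once entropy varies by less than `tol`
    over the last `window` batches."""
    if len(entropy_series) < window:
        return False
    recent = entropy_series[-window:]
    return max(recent) - min(recent) < tol

# Source particles settling onto 4 mesh cells: the distribution drifts
# for the first 10 batches, then becomes stationary.
entropies, converged_at = [], None
for batch in range(30):
    weights = [1 + min(batch, 10), 1, 1, 1]
    cells = random.choices(range(4), weights=weights, k=50_000)
    counts = [cells.count(i) for i in range(4)]
    entropies.append(shannon_entropy(counts))
    if detect_convergence(entropies):
        converged_at = batch
        break
print(converged_at)
```

Tallying would begin only from `converged_at` onward, so batches reflecting the unconverged source distribution never contaminate the results.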
Rapid Monte Carlo Simulation of Gravitational Wave Galaxies
NASA Astrophysics Data System (ADS)
Breivik, Katelyn; Larson, Shane L.
2015-01-01
With the detection of gravitational waves on the horizon, astrophysical catalogs produced by gravitational wave observatories can be used to characterize the populations of sources and validate different galactic population models. Efforts to simulate gravitational wave catalogs and source populations generally focus on population synthesis models that require extensive time and computational power to produce a single simulated galaxy. Monte Carlo simulations of gravitational wave source populations can also be used to generate observation catalogs from the source population. Monte Carlo simulations have the advantages of flexibility and speed, enabling rapid galactic realizations as a function of galactic binary parameters with less time and fewer computational resources required. We present a Monte Carlo method for rapid galactic simulations of gravitational wave binary populations.
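A Monte Carlo galactic realization can be sketched by drawing binary parameters from assumed distributions; every range below is illustrative rather than taken from a calibrated population model:

```python
import math
import random

random.seed(7)

# Illustrative parameter ranges for Galactic compact binaries
# (not the values of any particular population synthesis model).
N_BINARIES = 100_000

def draw_binary():
    m1 = random.uniform(0.2, 1.2)          # primary mass, solar masses
    m2 = random.uniform(0.2, m1)           # secondary mass, m2 <= m1
    f_gw = 10 ** random.uniform(-4, -2)    # GW frequency, Hz (log-uniform)
    # Exponential disk in radius, uniform in azimuth (toy spatial model).
    r = random.expovariate(1 / 2.5)        # kpc, scale length 2.5 kpc
    theta = random.uniform(0, 2 * math.pi)
    return m1, m2, f_gw, r, theta

catalog = [draw_binary() for _ in range(N_BINARIES)]
loud = [b for b in catalog if b[2] > 1e-3]  # crude detectability cut
print(len(catalog), len(loud))
```

Because each realization is just independent draws, regenerating the whole galaxy under different assumed parameter distributions takes seconds, which is the speed advantage over full population synthesis noted above.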
Climate Variability Impacts on Watershed Nutrient Delivery and Reservoir Production
NASA Astrophysics Data System (ADS)
White, J. D.; Prochnow, S. J.; Zygo, L. M.; Byars, B. W.
2005-05-01
Reservoirs in agriculturally dominated watersheds tend to exhibit pulse-system behavior, especially if located in climates dominated by summer convective precipitation. The concentration and bulk mass of nutrient and sediment inputs into reservoir systems vary in the timing and magnitude of delivery from watershed sources under these climate conditions. Reservoir management often focuses on long-term average inputs without considering the short- and long-term impacts of variation in loading. In this study we modeled a watershed-reservoir system to assess how climate variability affects reservoir primary production through shifts in external loading and internal recycling of limiting nutrients. The Bosque watershed encompasses 423,824 ha in central Texas and delivers water to Lake Waco, a 2900 ha reservoir that is the primary water source for the city of Waco and surrounding areas. Utilizing the Soil Water Assessment Tool for the watershed and river simulations and the CE-Qual-2e model for the reservoir, hydrologic and nutrient dynamics were simulated for a 10-year period encompassing two ENSO cycles. The models were calibrated with point measurements of water quality attributes over a two-year period. Results indicated that watershed delivery of nutrients was affected by the presence and density of small flood-control structures in the watershed. However, considerable nitrogen and phosphorus loadings were derived from soils in the upper watershed, which have had long-term waste application from concentrated animal feeding operations. During El Niño years, nutrient and sediment loads increased by a factor of 3 relative to non-El Niño years. The simulated response within the reservoir to these nutrient and sediment loads included both direct and indirect effects. Productivity, evaluated from chlorophyll a and algal biomass, increased under El Niño conditions; however, species composition shifted, with an increase in cyanobacteria dominance.
In non-El Niño years, species composition was more evenly distributed. At the longer time scale, El Niño events with accompanying increases in nutrient loads were followed by years in which productivity declined below the levels predicted solely by nutrient ratios. This was due to subtle shifts in organic matter decomposition, whereby productive years are followed by increases in refractory material, which sequesters nutrients and reduces internal loading.
Simulation-based artifact correction (SBAC) for metrological computed tomography
NASA Astrophysics Data System (ADS)
Maier, Joscha; Leinweber, Carsten; Sawall, Stefan; Stoschus, Henning; Ballach, Frederic; Müller, Tobias; Hammer, Michael; Christoph, Ralf; Kachelrieß, Marc
2017-06-01
Computed tomography (CT) is a valuable tool for the metrological assessment of industrial components. However, the application of CT to the investigation of highly attenuating objects or multi-material components is often restricted by the presence of CT artifacts caused by beam hardening, x-ray scatter, off-focal radiation, partial volume effects or the cone-beam reconstruction itself. In order to overcome this limitation, this paper proposes an approach to calculate a correction term that compensates for the contribution of artifacts and thus enables an appropriate assessment of these components using CT. To this end, we make use of computer simulations of the CT measurement process. Based on an appropriate model of the object, e.g. an initial reconstruction or a CAD model, two simulations are carried out. One simulation considers all physical effects that cause artifacts, using dedicated analytic methods as well as Monte Carlo-based models. The other one represents an ideal CT measurement, i.e. a measurement in parallel beam geometry with a monochromatic, point-like x-ray source and no x-ray scattering. Thus, the difference between these simulations is an estimate of the present artifacts and can be used to correct the acquired projection data or the corresponding CT reconstruction, respectively. The performance of the proposed approach is evaluated using simulated as well as measured data of single and multi-material components. Our approach yields CT reconstructions that are nearly free of artifacts and thereby clearly outperforms commonly used artifact reduction algorithms in terms of image quality. A comparison against tactile reference measurements demonstrates the ability of the proposed approach to increase the accuracy of the metrological assessment significantly.
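The core of the correction scheme described above is a simple subtraction: (physics-complete simulation) minus (ideal simulation) estimates the artifact, and that estimate is removed from the measurement. A minimal sketch on a toy 1-D projection (all values fabricated; the real method operates on full projection data with dedicated analytic and Monte Carlo simulators):

```python
def sbac_correct(measured, sim_full, sim_ideal):
    """Simulation-based artifact correction in the projection domain: the
    artifact estimate is the difference between a physics-complete simulation
    (beam hardening, scatter, ...) and an ideal monochromatic, scatter-free
    simulation of the same object model; subtracting that estimate from the
    measurement removes the modelled artifact contribution."""
    return [m - (f - i) for m, f, i in zip(measured, sim_full, sim_ideal)]

# toy 1-D projection: the measurement carries the same additive bias that the
# physics-complete simulation reproduces, so the correction recovers the truth
truth = [0.0, 1.0, 2.0, 1.0, 0.0]
bias = [0.1, 0.3, 0.5, 0.3, 0.1]
measured = [t + b for t, b in zip(truth, bias)]
sim_full = [t + b for t, b in zip(truth, bias)]
corrected = sbac_correct(measured, sim_full, truth)
```

In practice the correction is only as good as the object model and the artifact simulation, which is why the paper iterates from an initial reconstruction or a CAD model.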
NASA Astrophysics Data System (ADS)
Kang, Daiwen
In this research, the sources, distributions, transport, ozone formation potential, and biogenic emissions of VOCs are investigated, focusing on three Southeast United States National Parks: Shenandoah National Park at the Big Meadows site (SHEN), Great Smoky Mountains National Park at Cove Mountain (GRSM) and Mammoth Cave National Park (MACA). A detailed modeling analysis is conducted using the Multiscale Air Quality SImulation Platform (MAQSIP), focusing on nonmethane hydrocarbons and ozone episodes characterized by high O3 surface concentrations; nine emissions perturbation scenarios are examined. In the observation-based analysis, source classification techniques based on correlation coefficients, chemical reactivity, and certain ratios were developed and applied to the data set. Anthropogenic VOCs from automobile exhaust dominate at Mammoth Cave National Park and at Cove Mountain, Great Smoky Mountains National Park, while at Big Meadows, Shenandoah National Park, the source composition is complex and changed from 1995 to 1996. The dependence of isoprene concentrations on ambient temperature is investigated, and similar regression relationships are obtained for all three monitoring locations. Propylene-equivalent concentrations are calculated to account for differences in the reaction rates of individual hydrocarbons with OH, and thereby to estimate their relative contributions to ozone formation. Isoprene fluxes were also estimated for all these rural areas. Model predictions (base scenario) tend to give daily maximum O3 concentrations 10 to 30% lower than observations.
Model-predicted concentrations of lumped paraffin compounds are of the same order of magnitude as the observed values, while the observed concentrations of other species (isoprene, ethene, surrogate olefin, surrogate toluene, and surrogate xylene) are usually an order of magnitude higher than the predictions. Model predictions are compared with the observed values at the three locations for the same time period. Detailed sensitivity and process analyses in terms of ozone and VOC budgets, covering several scenarios including the base scenario, and the relative importance of various VOC species are provided. (Abstract shortened by UMI.)
Smith, Richard L.; Repert, Deborah A.; Barber, Larry B.; LeBlanc, Denis R.
2013-01-01
The consequences of groundwater contamination can remain long after a contaminant source has been removed. Documentation of natural aquifer recoveries and empirical tools to predict recovery time frames and associated geochemical changes are generally lacking. This study characterized the long-term natural attenuation of a groundwater contaminant plume in a sand and gravel aquifer on Cape Cod, Massachusetts, after the removal of the treated-wastewater source. Although concentrations of dissolved organic carbon (DOC) and other soluble constituents have decreased substantially in the 15 years since the source was removed, the core of the plume remains anoxic and has sharp redox gradients and elevated concentrations of nitrate and ammonium. Aquifer sediment was collected near the former disposal site at several points in time and space along a 0.5-km-long transect extending downgradient from the disposal site, and analyses of the sediment were correlated with changes in plume composition. Total sediment carbon content was generally low (< 8 to 55.8 μmol (g dry wt)−1) but was positively correlated with oxygen consumption rates in laboratory incubations, which ranged from 11.6 to 44.7 nmol (g dry wt)−1 day−1. Total water-extractable organic carbon was < 10–50% of the total carbon content but was the most biodegradable portion of the carbon pool. Carbon/nitrogen (C/N) ratios in the extracts increased more than 10-fold with time, suggesting that organic carbon degradation and oxygen consumption could become N-limited as the sorbed C and dissolved inorganic nitrogen (DIN) pools produced by the degradation separate with time by differential transport. A 1-D model using total degradable organic carbon values was constructed to simulate oxygen consumption and transport and was calibrated using observed temporal changes in oxygen concentrations at selected wells. The simulated travel velocity of the oxygen gradient was 5–13% of the groundwater velocity.
This suggests that the total sorbed carbon pool is large relative to the rate of oxygen entrainment and will impact groundwater geochemistry for many decades. This has implications for the long-term oxidation of reduced constituents, such as ammonium, that are being transported downgradient away from the infiltration beds toward surface and coastal discharge zones.
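The reason the oxic front lags the groundwater can be illustrated with a toy 1-D advection model in which inflowing oxygen is consumed by a finite sediment carbon pool. This is not the study's calibrated model: the grid, velocity, inlet concentration and carbon inventory below are arbitrary illustrative numbers, and consumption is treated as instantaneous and 1:1 stoichiometric.

```python
def simulate_oxygen_front(nx=100, nt=400, dx=1.0, dt=0.05, v=10.0,
                          o2_in=0.3, carbon0=1.0):
    """Explicit upwind advection of dissolved oxygen into an aquifer whose
    sediment holds a finite pool of degradable organic carbon. The oxic
    front can only advance by burning through the local carbon, so it lags
    far behind the groundwater velocity."""
    o2 = [0.0] * nx
    carbon = [carbon0] * nx
    for _ in range(nt):
        # upwind advection step; the inlet cell is held at the inflow value
        o2 = [o2_in] + [o2[i] - v * dt / dx * (o2[i] - o2[i - 1])
                        for i in range(1, nx)]
        # instantaneous, stoichiometric consumption by the remaining carbon
        for i in range(nx):
            burn = min(o2[i], carbon[i])
            o2[i] -= burn
            carbon[i] -= burn
    # oxic front: first cell where O2 drops below half the inlet value
    front = next(i for i, c in enumerate(o2) if c < o2_in / 2)
    return front * dx

front = simulate_oxygen_front()
travel_ratio = front / (10.0 * 400 * 0.05)   # front distance / water distance
```

With these invented parameters the front moves at a small fraction of the groundwater velocity, qualitatively reproducing the behavior the calibrated model quantified at 5–13%.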
NASA Astrophysics Data System (ADS)
Sheng, Cheng; Bol, Roland; Vetterlein, Doris; Vanderborght, Jan; Schnepf, Andrea
2017-04-01
Different types of root exudates and their effects on soil and rhizosphere properties have received a lot of attention. Since their influence on rhizosphere properties and processes depends on their concentration in the soil, the assessment of the spatial-temporal exudate concentration distribution around roots is of key importance for understanding the functioning of the rhizosphere. Different root systems have different root architectures, and different types of root exudates diffuse in the rhizosphere with different diffusion coefficients; both factors shape the dynamics of the exudate concentration distribution in the rhizosphere. Hence, simulations of root exudation involving four plant root systems (Vicia faba, Lupinus albus, Triticum aestivum and Zea mays) and two root exudates (citrate and mucilage) were conducted. We consider a simplified root architecture in which each root is represented by a straight line. Assuming that root tips move at a constant velocity and that mucilage transport is linear, concentration distributions can be obtained from a convolution of the analytical solution of the transport equation in a stationary flow field for an instantaneous point-source injection with the spatial-temporal distribution of the source strength. By coupling the analytical equation with a root growth model that delivers the spatial-temporal source term, we simulated exudate concentration distributions for citrate and mucilage in MATLAB. From the simulation results, we inferred the following information about the rhizosphere: (a) the dynamics of root architecture development are the main driver of the exudate distribution in the root zone; (b) a steady rhizosphere with constant width is more likely to develop for individual roots when the diffusion coefficient is small.
The simulations suggest that rhizosphere development depends on both root and exudate properties: the dynamics of the root architecture give rise to distinct development patterns of the rhizosphere. These results improve our understanding of the impact of the spatial and temporal heterogeneity of exudate input on rhizosphere development for different root system types and substances. In future work, we will use the simulation tool to infer, from experimental data, the critical parameters that determine the spatial-temporal extent of the rhizosphere.
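The convolution idea described above (a moving root tip as a train of instantaneous point sources) can be sketched for a single straight root. This is a heavily simplified analogue of the MATLAB tool: one root, no sorption or decay, free-space diffusion, and all parameter values invented for illustration.

```python
import math

def exudate_concentration(x, y, z, t, D=1e-3, v=0.1, m_rate=1.0, dtau=0.05):
    """Concentration at (x, y, z) and time t around a single straight root
    whose tip moves along the x-axis at speed v while releasing exudate at
    rate m_rate. Discretised convolution (superposition) of instantaneous
    point-source solutions of the 3-D diffusion equation."""
    c = 0.0
    for k in range(int(t / dtau)):
        tau = (k + 0.5) * dtau            # release time (midpoint rule)
        age = t - tau                     # diffusion time of that release
        r2 = (x - v * tau) ** 2 + y ** 2 + z ** 2
        c += (m_rate * dtau / (4.0 * math.pi * D * age) ** 1.5
              * math.exp(-r2 / (4.0 * D * age)))
    return c

# concentration falls off radially away from the root axis
c_near = exudate_concentration(0.5, 0.02, 0.0, 10.0)
c_mid = exudate_concentration(0.5, 0.10, 0.0, 10.0)
c_far = exudate_concentration(0.5, 0.30, 0.0, 10.0)
```

A small diffusion coefficient keeps the trail of old releases narrow, which is the regime in which the paper finds a steady rhizosphere of constant width.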
Numerical simulations and parameterizations of volcanic plumes observed at Reunion Island
NASA Astrophysics Data System (ADS)
Gurwinder Sivia, Sandra; Gheusi, Francois; Mari, Celine; DiMuro, Andrea; Tulet, Pierre
2013-04-01
Volcanoes are natural composite hazards. Volcanic ejecta can have considerable impact on human health. Volcanic gases and ash can be especially harmful to people with lung diseases such as asthma. The volcanic gases that pose the greatest potential hazards are sulfur dioxide, carbon dioxide, and hydrogen fluoride. Locally, sulfur dioxide gas can lead to acid rain and air pollution downwind of a volcano. These gases can come from lava flows as well as eruptive plumes, and this acidic pollution can be transported by wind over large distances. To comply with regulatory rules, modeling tools are needed to accurately predict the contribution of volcanic emissions to air quality degradation. Unfortunately, the ability of existing models to simulate volcanic plume production and dispersion is currently limited by inaccurate volcanic emissions and uncertainties in plume-rise estimates. The present work is dedicated to the study of deep injections of volcanic emissions into the troposphere that develop as a consequence of intense but localized heat input near eruptive mouths. This work covers three aspects. First, a precise quantification of the heat sources in terms of surface area, geometry and intensity is performed for the Piton de la Fournaise volcano. Second, large eddy simulations (LES) are performed with the Meso-NH model to determine the dynamics and vertical development of volcanic plumes. The estimated energy fluxes and the geometry of the heat source are used at the bottom boundary to generate and sustain the plume, while passive tracers are used to represent volcanic gases and their injection into the atmosphere. The realism of the simulated plumes is validated on the basis of plume observations. The LES simulations finally serve as references for the development of column parameterizations for the coarser-resolution version of the model, which is the third aspect of the present work.
At spatial resolution coarser than ~1km, buoyant volcanic plumes are sub-grid processes. A new parameterization for the injection height is presented which is based on a modified version of the eddy-diffusivity/mass-flux scheme initially developed for the simulation of convective boundary layer.
Guided wave imaging of oblique reflecting interfaces in pipes using common-source synthetic focusing
NASA Astrophysics Data System (ADS)
Sun, Zeqing; Sun, Anyu; Ju, Bing-Feng
2018-04-01
Cross-mode-family mode conversion and secondary reflection of guided waves in pipes complicate the processing of guided wave signals and can cause false detections. In this paper, filters operating in the spectral domain of wavenumber, circumferential order and frequency are designed to suppress signal components from unwanted mode families and unwanted traveling directions. Common-source synthetic focusing is used to reconstruct defect images from the guided wave signals. Simulations of the reflections from linear oblique defects and a semicircular defect are implemented separately. Defect images reconstructed from the simulation results under different excitation conditions are comparatively studied in terms of axial resolution, reflection amplitude, detectable oblique angle and so on. Further, the proposed method is experimentally validated by detecting linear cracks with various oblique angles (10-40°). The proposed method relies on guided wave signals captured during 2-D scanning of a cylindrical area on the pipe. The redundancy of the signals is analyzed to reduce the time consumption of the scanning process and to enhance the practicability of the proposed method.
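The direction filtering mentioned above can be illustrated in a stripped-down 1-D analogue: in the (wavenumber, frequency) plane, forward- and backward-traveling waves occupy different sign quadrants, so zeroing one pair of quadrants suppresses one traveling direction. The paper's filters additionally act on circumferential order and mode family; here a naive DFT (for self-containment, educational scale only) and a quadrant mask on a synthetic two-wave field demonstrate only the directional part.

```python
import cmath
import math

def dft2(a):
    """Naive 2-D DFT of a real nx-by-nt array (educational scale only)."""
    nx, nt = len(a), len(a[0])
    return [[sum(a[x][t] * cmath.exp(-2j * math.pi * (kx * x / nx + w * t / nt))
                 for x in range(nx) for t in range(nt))
             for w in range(nt)]
            for kx in range(nx)]

def idft2(A):
    """Inverse of dft2 (returns the real part)."""
    nx, nt = len(A), len(A[0])
    return [[(sum(A[kx][w] * cmath.exp(2j * math.pi * (kx * x / nx + w * t / nt))
                  for kx in range(nx) for w in range(nt)) / (nx * nt)).real
             for t in range(nt)]
            for x in range(nx)]

nx = nt = 16
k0, w0 = 2, 3
# superposition of a forward-traveling wave, cos(k0*x - w0*t), and a weaker
# backward-traveling one, cos(k0*x + w0*t)
field = [[math.cos(2 * math.pi * (k0 * x / nx - w0 * t / nt))
          + 0.5 * math.cos(2 * math.pi * (k0 * x / nx + w0 * t / nt))
          for t in range(nt)]
         for x in range(nx)]

F = dft2(field)
for kx in range(nx):
    for w in range(nt):
        kx_s = kx if kx <= nx // 2 else kx - nx   # signed wavenumber
        w_s = w if w <= nt // 2 else w - nt       # signed frequency
        # in this convention, sign(k) == sign(omega) carries the backward wave
        if kx_s * w_s > 0:
            F[kx][w] = 0
filtered = idft2(F)

# the filtered field should match the forward-traveling wave alone
max_err = max(abs(filtered[x][t] - math.cos(2 * math.pi * (k0 * x / nx - w0 * t / nt)))
              for x in range(nx) for t in range(nt))
```

Real implementations would use FFTs and apply the same masking idea along the circumferential-order axis as well.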
Modeling Energy Efficiency As A Green Logistics Component In Vehicle Assembly Line
NASA Astrophysics Data System (ADS)
Oumer, Abduaziz; Mekbib Atnaw, Samson; Kie Cheng, Jack; Singh, Lakveer
2016-11-01
This paper uses System Dynamics (SD) simulation to investigate the concept of green logistics in terms of energy efficiency in the automotive industry. Car manufacturing is considered one of the most energy-intensive industries. An efficient decision-making model is proposed that captures the impacts of strategic decisions on energy consumption and environmental sustainability. The sources of energy considered in this research are electricity and fuel, the two main types of energy used in a typical vehicle assembly plant. The model depicts performance measurement for process-specific energy measures of the painting, welding, and assembling processes. SD is the chosen simulation method, and the main green logistics issues considered are carbon dioxide (CO2) emission and energy utilization. The model will assist decision makers in acquiring an in-depth understanding of the relationship between high-level planning and low-level operational activities and their effects on production, environmental impacts and associated costs. The results of the SD model signify the existence of positive trade-offs between green practices of energy efficiency and the reduction of CO2 emissions.
Numerical simulation of incoherent optical wave propagation in nonlinear fibers
NASA Astrophysics Data System (ADS)
Fernandez, Arnaud; Balac, Stéphane; Mugnier, Alain; Mahé, Fabrice; Texier-Picard, Rozenn; Chartier, Thierry; Pureur, David
2013-11-01
The present work concerns the study of pulsed laser systems containing a fiber amplifier for boosting optical output power. In this paper, the fiber amplification device is part of a MOPFA laser, a master oscillator coupled with a fiber amplifier, usually a cladding-pumped high-power amplifier often based on an ytterbium-doped fiber. An experimental study has established that the observed nonlinear effects (such as the Kerr effect, four-wave mixing and the Raman effect) can behave very differently depending on the characteristics of the optical source emitted by the master laser. However, it has not yet been possible to determine from the experimental data whether the photon statistics alone are responsible for the various nonlinear scenarios observed. Therefore, we have developed numerical simulation software for solving the generalized nonlinear Schrödinger equation with a stochastic source term, in order to validate the hypothesis that the coherence properties of the master laser are mainly responsible for the behavior of the observed nonlinear effects. Contribution to the Topical Issue "Numelec 2012", Edited by Adel Razek.
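The standard workhorse for this kind of simulation is the split-step Fourier method, which the sketch below reduces to the basic scalar NLSE: the dispersive part is applied as a phase in the frequency domain, the Kerr part as a phase in the time domain. This is not the authors' software; the parameters are illustrative, a few random-phase spectral lines stand in for the partially coherent master-oscillator field, and a naive DFT replaces the FFT so the example has no dependencies.

```python
import cmath
import math
import random

def dft(a, inverse=False):
    """Naive DFT (educational scale); the inverse includes the 1/N factor."""
    n = len(a)
    s = 1 if inverse else -1
    out = [sum(a[j] * cmath.exp(s * 2j * math.pi * j * k / n) for j in range(n))
           for k in range(n)]
    return [v / n for v in out] if inverse else out

def split_step(u0, beta2=-1.0, gamma=1.0, dz=0.01, steps=50, dT=0.25):
    """First-order split-step Fourier integration of the scalar NLSE
    du/dz = -i*(beta2/2)*d2u/dT2 + i*gamma*|u|^2*u. Both sub-steps are
    unimodular phase factors, so total power is conserved."""
    n = len(u0)
    omega = [2 * math.pi * (k if k <= n // 2 else k - n) / (n * dT)
             for k in range(n)]
    u = list(u0)
    for _ in range(steps):
        U = dft(u)
        U = [U[k] * cmath.exp(1j * beta2 / 2 * omega[k] ** 2 * dz) for k in range(n)]
        u = dft(U, inverse=True)
        u = [v * cmath.exp(1j * gamma * abs(v) ** 2 * dz) for v in u]
    return u

# crude stochastic source: a few spectral lines with random phases stand in
# for the partially coherent field of the master oscillator
random.seed(1)
n = 32
spec = [0j] * n
for k in (1, 2, 3, 30, 31):
    spec[k] = cmath.exp(2j * math.pi * random.random())
u0 = [v * n for v in dft(spec, inverse=True)]   # unit-amplitude lines in time
u1 = split_step(u0)
P0 = sum(abs(v) ** 2 for v in u0)
P1 = sum(abs(v) ** 2 for v in u1)
```

Averaging such propagations over many random-phase realizations is one way to probe how source coherence shapes the nonlinear dynamics.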
Development of Standardized Lunar Regolith Simulant Materials
NASA Technical Reports Server (NTRS)
Carpenter, P.; Sibille, L.; Meeker, G.; Wilson, S.
2006-01-01
Lunar exploration requires scientific and engineering studies using standardized testing procedures that ultimately support flight certification of technologies and hardware. It is necessary to anticipate the range of source materials and environmental constraints that are expected on the Moon and Mars, and to evaluate in-situ resource utilization (ISRU) coupled with testing and development. We describe here the development of standardized lunar regolith simulant (SLRS) materials that are traceable inter-laboratory standards for testing and technology development. These SLRS materials must simulate the lunar regolith in terms of physical, chemical, and mineralogical properties. A summary of these issues is contained in the 2005 Workshop on Lunar Regolith Simulant Materials [1]. Lunar mare basalt simulants MLS-1 and JSC-1 were developed in the late 1980s. MLS-1 approximates an Apollo 11 high-Ti basalt, and was produced by milling of a holocrystalline, coarse-grained intrusive gabbro (Fig. 1). JSC-1 approximates an Apollo 14 basalt with a relatively low-Ti content, and was obtained from a glassy volcanic ash (Fig. 2). Supplies of MLS-1 and JSC-1 have been exhausted and these materials are no longer available. No highland anorthosite simulant was previously developed. Upcoming lunar polar missions thus require the identification, assessment, and development of both mare and highland simulants. A lunar regolith simulant is manufactured from terrestrial components for the purpose of simulating the physical and chemical properties of the lunar regolith. Significant challenges exist in the identification of appropriate terrestrial source materials. Lunar materials formed under comparatively reducing conditions in the absence of water, and were modified by meteorite impact events. Terrestrial materials formed under more oxidizing conditions with significantly greater access to water, and were modified by a wide range of weathering processes.
The composition space of lunar materials can be modeled by mixing programs utilizing a low-Ti basalt, ilmenite, KREEP component, high-Ca anorthosite, and meteoritic components. This approach has been used for genetic studies of lunar samples via chemical and modal analysis. A reduced composition space may be appropriate for simulant development, but it is necessary to determine the controlling properties that affect the physical, chemical and mineralogical components of the simulant.
Luciano, Antonella; Torretta, Vincenzo; Mancini, Giuseppe; Eleuteri, Andrea; Raboni, Massimo; Viotti, Paolo
2017-03-01
Two scenarios in terms of odour impact assessment were studied during the upgrading of an existing waste treatment plant; CALPUFF was used for the simulation of odour dispersion. Olfactometric measurements, carried out over different periods and at different positions in the plant, were used for model calibration. Results from the simulations are reported in terms of statistics of odour concentrations and isopleth maps of the 98th percentile of the hourly peak concentrations, as required by European legislation and standards. The exceedances of perception thresholds and the measured emissions were used to address the plant upgrade options. An hourly evaluation of odours was performed to determine the most impacting period of the day. The numerical simulation was also applied inversely, starting from the odour threshold at the receptors, to define the required abatement efficiency at the odour source. Results from the proposed approach confirmed the suitability of odour dispersion modelling, not only in the authorization phase, but also as a tool for guiding technical and management actions during plant upgrades, so as to reduce impacts and improve public acceptance. The upgrade actions required to achieve the expected efficiency are also reported.
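The 98th-percentile statistic and the inverse step can be sketched for a single receptor. All numbers below are fabricated, the nearest-rank percentile convention is an assumption (regulations may prescribe a different interpolation), and the inverse step relies on the standard linearity of Gaussian-puff dispersion with respect to source strength.

```python
import math

def percentile(values, p):
    """Nearest-rank percentile, one common convention for the regulatory
    98th-percentile odour maps."""
    s = sorted(values)
    rank = max(1, math.ceil(p / 100.0 * len(s)))
    return s[rank - 1]

def required_abatement(hourly_conc, threshold=1.0, p=98):
    """Inverse use of a linear dispersion result: if receptor concentration
    scales linearly with source strength, the abatement efficiency needed at
    the source to meet `threshold` at this receptor is 1 - threshold/C_p
    (zero if the receptor is already compliant)."""
    c = percentile(hourly_conc, p)
    return max(0.0, 1.0 - threshold / c)

# toy receptor: 95 calm hours plus 5 odour peaks (units hypothetical)
hourly = [0.2] * 95 + [2.0, 3.0, 4.0, 5.0, 6.0]
c98 = percentile(hourly, 98)
eff = required_abatement(hourly)
```

Mapping `required_abatement` over every receptor and taking the maximum gives the abatement efficiency the source must achieve plant-wide.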
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balsa Terzic, Gabriele Bassi
In this paper we discuss representations of charged-particle densities in particle-in-cell (PIC) simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2d code of Bassi, designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The distribution is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform (TFCT); and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into Bassi's CSR code and benchmarked against the original version. We show that the new density estimation method provides superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including the microbunching instability.
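The bin-then-truncate idea behind the TFCT can be sketched in 1-D (the paper's methods are 2-D, and the TWT uses wavelets rather than cosines; the sample size, bin count and mode cutoff below are invented). Keeping only the lowest cosine modes discards the high-frequency bins where sampling (shot) noise dominates.

```python
import math
import random

def tfct_density(samples, nbins=64, nmodes=8, lo=0.0, hi=1.0):
    """Grid-based density estimate via a truncated cosine expansion: bin the
    particles, take discrete cosine (DCT-II) coefficients, keep only the
    lowest `nmodes` modes, and reconstruct with the inverse (DCT-III)."""
    hist = [0.0] * nbins
    for s in samples:
        i = min(nbins - 1, int((s - lo) / (hi - lo) * nbins))
        hist[i] += 1.0
    norm = len(samples) * (hi - lo) / nbins
    hist = [h / norm for h in hist]        # histogram -> density on the grid
    coef = [sum(hist[i] * math.cos(math.pi * k * (i + 0.5) / nbins)
                for i in range(nbins))
            for k in range(nmodes)]
    return [coef[0] / nbins
            + 2.0 / nbins * sum(coef[k] * math.cos(math.pi * k * (i + 0.5) / nbins)
                                for k in range(1, nmodes))
            for i in range(nbins)]

random.seed(0)
# particles drawn from a smooth peaked density (triangular on [0, 1])
samples = [random.triangular(0.0, 1.0, 0.5) for _ in range(20000)]
est = tfct_density(samples)
```

Thresholding wavelet coefficients instead of truncating cosine modes (the TWT) removes noise adaptively in space, which is where the paper's accuracy gain comes from.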
Hongbo Guo; Xiaowei He; Muhan Liu; Zeyu Zhang; Zhenhua Hu; Jie Tian
2017-06-01
Cerenkov luminescence tomography (CLT) provides a novel technique for 3-D noninvasive detection of radiopharmaceuticals in living subjects. However, because of the severe scattering of Cerenkov light, the reconstruction accuracy and stability of CLT are still unsatisfactory. In this paper, a modified weighted multispectral CLT (wmCLT) reconstruction strategy was developed which splits the Cerenkov radiation spectrum into several sub-spectral bands and weights the sub-spectral results to obtain the final result. To better evaluate the wmCLT reconstruction strategy in terms of accuracy, stability and practicability, several numerical simulation experiments and in vivo experiments were conducted, and the results were compared with the traditional multispectral CLT (mCLT) and hybrid-spectral CLT (hCLT) reconstruction strategies. The numerical simulation results indicated that the wmCLT strategy significantly improved the accuracy of Cerenkov source localization and intensity quantitation and exhibited good stability in suppressing noise. The comparison of results from the different in vivo experiments further indicated significant improvement of the wmCLT strategy in terms of the shape recovery of the bladder and the spatial resolution of imaging xenograft tumors. Overall, the strategy reported here will facilitate the development of nuclear and optical molecular tomography in theoretical studies.
NASA Astrophysics Data System (ADS)
Console, R.; Vannoli, P.; Carluccio, R.
2016-12-01
The application of a physics-based earthquake simulation algorithm to the central Apennines region, where the 24 August 2016 Amatrice earthquake occurred, allowed the compilation of a synthetic seismic catalog lasting 100 ky and containing more than 500,000 M ≥ 4.0 events, without the limitations that real catalogs suffer in terms of completeness, homogeneity and time duration. The algorithm on which this simulator is based is constrained by several physical elements, namely: (a) an average slip rate for every single fault in the investigated fault systems, (b) the process of rupture growth and termination, leading to a self-organized earthquake magnitude distribution, and (c) interaction between earthquake sources, including small magnitude events. Events nucleated in one fault are allowed to expand into neighboring faults, even those belonging to a different fault system, if they are separated by less than a given maximum distance. The seismogenic model to which we applied the simulator code was derived from the DISS 3.2.0 database (http://diss.rm.ingv.it/diss/), selecting all the fault systems recognized in the central Apennines region, for a total of 24 fault systems. The application of our simulation algorithm reproduces typical features of the seismicity in time, space and magnitude, which are comparable with those of real observations. These features include long-term periodicity and clustering of strong earthquakes, and a realistic earthquake magnitude distribution departing from the linear Gutenberg-Richter distribution in the moderate and higher magnitude range. The statistical distribution of earthquakes with M ≥ 6.0 on single faults exhibits a fairly clear pseudo-periodic behavior, with a coefficient of variation Cv of the order of 0.3-0.6. We found in our synthetic catalog a clear trend of long-term acceleration of seismic activity preceding M ≥ 6.0 earthquakes and quiescence following those earthquakes.
Lastly, as an example of a possible use of synthetic catalogs, an attenuation law was applied to all the events reported in the synthetic catalog to produce maps showing the exceedance probability of given values of peak ground acceleration (PGA) over the territory under investigation.
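The coefficient of variation used above to characterize pseudo-periodic recurrence is straightforward to compute from a catalog's inter-event times. The sketch below contrasts a jittered-periodic fault with a Poisson one; the recurrence parameters are invented, not taken from the simulator (Cv << 1 indicates pseudo-periodicity, Cv ≈ 1 a memoryless process, Cv > 1 clustering).

```python
import math
import random

def coefficient_of_variation(event_times):
    """Cv of the inter-event time distribution for a single fault's events."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    mean = sum(gaps) / len(gaps)
    var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
    return math.sqrt(var) / mean

random.seed(42)
# pseudo-periodic fault: ~1000-year mean recurrence with ~30% jitter
t, quasi = 0.0, []
for _ in range(500):
    t += max(1.0, random.gauss(1000.0, 300.0))
    quasi.append(t)
# Poisson fault with the same mean rate, for comparison
t, poisson = 0.0, []
for _ in range(500):
    t += random.expovariate(1.0 / 1000.0)
    poisson.append(t)

cv_quasi = coefficient_of_variation(quasi)
cv_poisson = coefficient_of_variation(poisson)
```

A synthetic-catalog Cv of 0.3-0.6 thus sits well inside the pseudo-periodic regime, far from the Cv ≈ 1 of a memoryless fault.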
Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.
Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David
2014-01-01
We study the use of nonparametric multicompare statistical tests of the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics to a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the small effect that the selection of a particular metaheuristic and variations in its operational parameters have on this optimization problem.
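A typical starting point for such multicompare analyses is the Friedman test, which ranks the algorithms on each problem instance. The sketch below computes the Friedman chi-square statistic from scratch (average ranks for ties, the usual tie-correction factor omitted for brevity); the error values are fabricated, not taken from the study.

```python
def friedman_statistic(scores):
    """Friedman chi-square statistic for n blocks (problem instances) by k
    treatments (metaheuristics). Lower scores rank better (e.g. localization
    error); ties receive the average rank of their group."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0        # average rank of the tie group
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# fabricated localization errors for (SA, GA, PSO, DE) on ten instances,
# with DE consistently worst; for k = 4 treatments df = 3, and the 5%
# chi-square critical value is about 7.815
errors = [[1.0, 2.0, 3.0, 9.0]] * 5 + [[2.0, 1.0, 3.0, 9.0]] * 5
chi2 = friedman_statistic(errors)
```

When the statistic exceeds the critical value, post-hoc pairwise comparisons (e.g. Nemenyi) identify which metaheuristics actually differ.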
NASA Astrophysics Data System (ADS)
Jolliff, Jason Keith; Smith, Travis A.; Ladner, Sherwin; Arnone, Robert A.
2014-03-01
The U.S. Naval Research Laboratory (NRL) is developing nowcast/forecast software systems designed to combine satellite ocean color data streams with physical circulation models in order to produce prognostic fields of ocean surface materials. The Deepwater Horizon oil spill in the Gulf of Mexico provided a test case for the Bio-Optical Forecasting (BioCast) system to rapidly combine the latest satellite imagery of the oil slick distribution with surface circulation fields to produce oil slick transport scenarios and forecasts. In one such sequence of experiments, MODIS satellite true color images were combined with high-resolution ocean circulation forecasts from the Coupled Ocean-Atmosphere Mesoscale Prediction System (COAMPS®) to produce 96-h oil transport simulations. These oil forecasts predicted a major oil slick landfall at Grand Isle, Louisiana, USA, that was subsequently observed. A key driver of the landfall scenario was the development of a coastal buoyancy current associated with Mississippi River Delta freshwater outflow. In another series of experiments, longer-term regional circulation model results were combined with oil slick source/sink scenarios to simulate the observed containment of surface oil within the Gulf of Mexico. Both sets of experiments underscore the importance of identifying and simulating potential hydrodynamic conduits of surface oil transport. The addition of explicit sources and sinks of surface oil concentrations provides a framework for increasingly complex oil spill modeling efforts that extend beyond horizontal trajectory analysis.
NASA Astrophysics Data System (ADS)
Kwok, Roger Hiu Fung
Air pollution in Hong Kong (HK) causes problems in visibility and public health, which have worsened over the past few years. Of the particulate matter (PM) inhalable into the respiratory system, about 30% is contributed by sulfate (SO4), 40% by organic carbon (OC), and 10% by elemental carbon (EC). The mesoscale numerical modeling system CMAQ is used to simulate the air quality in January (winter), April (spring), July (summer) and October (autumn) 2004, driven by meteorology simulated by MM5 and emission sources in China, including Hong Kong. Observational and measurement data from the Hong Kong Environmental Protection Department Air Quality network are compared with the model results. With respect to pollutant concentration levels, model-observation agreement is reasonably good, especially for the PM species sulfate, OC and EC, and the gaseous species SO2, NOx and ozone. In terms of PM composition, the model agrees with the measurements in the fractions of sulfate, OC and EC. Higher PM levels in autumn and winter are associated with northeasterly winds due to continental outflow. To further investigate the emission sources contributing to HK, a source apportionment method called the Tagged Species Source Apportionment (TSSA) algorithm is applied to study contributions to the levels of SO4, SO2 and EC in HK. It is found that while sources beyond the Pearl River Delta (PRD) are observed throughout HK during January and October 2004, the contributing emission sectors differ among western HK, the downtown area, and the eastern countryside. Specifically, power plants and vehicles from HK and Shenzhen affect the western new towns, while power plants, vehicles and ships within HK determine downtown pollutant levels. The countryside is mainly influenced by sources beyond the PRD.
NASA Astrophysics Data System (ADS)
Volpe, M.; Selva, J.; Tonini, R.; Romano, F.; Lorito, S.; Brizuela, B.; Argyroudis, S.; Salzano, E.; Piatanesi, A.
2016-12-01
Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) is a methodology to assess the probability of exceeding different thresholds of tsunami hazard intensity, at a specific site or region in a given time period, due to seismic sources. A large number of high-resolution inundation simulations is typically required to account for the full variability of potential seismic sources and their slip distributions. Starting from regional SPTHA offshore results, the computational cost can be reduced by considering only a subset of `important' scenarios for the inundation calculations. Here we use a method based on an event tree for the treatment of the aleatory variability of the seismic source; a cluster analysis of the offshore results to define the important sources; and an ensemble modeling approach for the treatment of epistemic uncertainty. We consider two target sites in the Mediterranean (Milazzo, Italy, and Thessaloniki, Greece) where coastal (non-nuclear) critical infrastructures (CIs) are located. After performing a regional SPTHA covering the whole Mediterranean, for each target site a few hundred representative scenarios are filtered from all the potential seismic sources and the tsunami inundation is explicitly modeled, yielding a site-specific SPTHA with a complete characterization of the tsunami hazard in terms of flow depth and velocity time histories. Moreover, we also explore the variability of SPTHA at the target site accounting for coseismic deformation (i.e. uplift or subsidence) due to near-field sources located in very shallow water. The results are suitable for, and will be applied to, subsequent multi-hazard risk analysis for the CIs. These applications have been developed in the framework of the Italian Flagship Project RITMARE, the EC FP7 ASTARTE (Grant agreement 603839) and STREST (Grant agreement 603389) projects, and the INGV-DPC Agreement.
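The scenario-filtering idea above (cluster the offshore results, then keep one representative scenario per cluster for explicit inundation modeling) can be sketched with a toy k-means. The hazard vectors, cluster count, and distance metric below are illustrative assumptions, not the study's actual clustering:

```python
import random

random.seed(1)

def kmeans(points, k, iters=50):
    """Minimal k-means for clustering scenario hazard vectors (lists of floats)."""
    centers = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each scenario to its nearest center (squared distance)
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        centers = [
            [sum(c) / len(cl) for c in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Each scenario: offshore tsunami amplitudes at a few coastal points (synthetic)
scenarios = [[random.gauss(mu, 0.2) for _ in range(4)]
             for mu in (0.5, 1.0, 2.0) for _ in range(50)]
centers, clusters = kmeans(scenarios, k=3)

# One representative scenario per cluster: the member closest to its center
reps = [
    min(cl, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, c)))
    for c, cl in zip(centers, clusters) if cl
]
print(len(reps))
```

Only the representatives would then be run through the expensive high-resolution inundation model, each weighted by its cluster's total probability.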
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. The simulator, based on a discrete-time Markov chain, is implemented in Matlab. Capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, it can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero, and can similarly be extended to more complicated models by editing the source code. It is designed for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
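A minimal sketch of one discrete-time Markov chain step for such a SEQIJR model is shown below in Python (the paper's simulator is in Matlab; this is not its code). The transition structure and probability values are illustrative assumptions; as the abstract notes, zeroing probabilities reduces the model (here, for example, to a plain SEIR chain):

```python
import random

random.seed(42)

def binom(n, q):
    """Crude binomial draw (stdlib only)."""
    return sum(random.random() < q for _ in range(n))

def step(s, p):
    """One discrete-time step of a SEQIJR chain on compartment counts.

    s: dict of counts S, E, Q, I, J, R.
    p: transition probabilities; setting e.g. p['EQ'] = p['QJ'] = p['IJ'] = 0
    disables quarantine/isolation and reduces the model to SEIR.
    """
    n = sum(s.values()) or 1
    new_e = binom(s['S'], p['SE'] * (s['I'] + s['J']) / n)  # new exposures
    e_q = binom(s['E'], p['EQ'])                            # exposed -> quarantined
    e_i = binom(s['E'] - e_q, p['EI'])                      # exposed -> infected
    q_j = binom(s['Q'], p['QJ'])                            # quarantined -> isolated
    i_j = binom(s['I'], p['IJ'])                            # infected -> isolated
    i_r = binom(s['I'] - i_j, p['IR'])                      # infected -> recovered
    j_r = binom(s['J'], p['JR'])                            # isolated -> recovered
    return {'S': s['S'] - new_e,
            'E': s['E'] + new_e - e_q - e_i,
            'Q': s['Q'] + e_q - q_j,
            'I': s['I'] + e_i - i_j - i_r,
            'J': s['J'] + q_j + i_j - j_r,
            'R': s['R'] + i_r + j_r}

state = {'S': 990, 'E': 10, 'Q': 0, 'I': 0, 'J': 0, 'R': 0}
p = {'SE': 0.5, 'EQ': 0.1, 'EI': 0.2, 'QJ': 0.3, 'IJ': 0.1, 'IR': 0.2, 'JR': 0.2}
for _ in range(100):
    state = step(state, p)
print(state, sum(state.values()))
```

Because every transition count is drawn from the remaining occupants of its compartment, the total population is conserved exactly at each step.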
Simulation of a beam rotation system for a spallation source
NASA Astrophysics Data System (ADS)
Reiss, Tibor; Reggiani, Davide; Seidel, Mike; Talanov, Vadim; Wohlmuther, Michael
2015-04-01
With a nominal beam power of nearly 1 MW on target, the Swiss Spallation Neutron Source (SINQ) ranks among the world's most powerful spallation neutron sources. The proton beam transport to the SINQ target is carried out exclusively by means of linear magnetic elements. In the transport line to SINQ the beam is scattered in two meson production targets; as a consequence, at the SINQ target entrance the beam shape can be described by Gaussian distributions in the transverse x and y directions with tails cut short by collimators. This leads to a highly nonuniform power distribution inside the SINQ target, giving rise to thermal and mechanical stresses. In view of a future proton beam intensity upgrade, the possibility of homogenizing the beam distribution by means of a fast beam rotation system is currently under investigation. Important aspects to be studied are the impact of a rotating proton beam on the resulting neutron spectra and spatial flux distributions, and additional, previously absent, proton losses causing unwanted activation of accelerator components. Hence a new source description method was developed for the radiation transport code MCNPX. This new feature makes direct use of the results from the proton beam optics code TURTLE. Its advantage over existing MCNPX source options is that all phase-space information and correlations of each primary beam particle computed with TURTLE are preserved and transferred to MCNPX. Simulations of the different beam distributions, together with their consequences in terms of neutron production, are presented in this publication. Additionally, a detailed description of the coupling method between TURTLE and MCNPX is provided.
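The key point of the coupling (transferring whole TURTLE particles so that phase-space correlations survive, rather than sampling each coordinate from an independent marginal distribution) can be illustrated with a toy x-x' beam. The numbers and the correlation model below are invented for illustration:

```python
import random

random.seed(5)

# Synthetic TURTLE-like particle list: transverse position x (mm) and
# divergence xp (mrad) with a strong correlation (a converging beam)
particles = []
for _ in range(5000):
    x = random.gauss(0.0, 2.0)
    xp = -0.8 * x + random.gauss(0.0, 0.5)
    particles.append((x, xp))

def corr(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Direct transfer (the coupling described above): resample whole particles,
# keeping every correlation in the phase space
direct = [random.choice(particles) for _ in range(5000)]

# Marginal sampling (what independent source distributions would do):
# x and xp drawn separately, destroying the x-xp correlation
xs, xps = zip(*particles)
marginal = [(random.choice(xs), random.choice(xps)) for _ in range(5000)]

print(corr(particles), corr(direct), corr(marginal))
```

The direct transfer reproduces the strong negative x-x' correlation of the source list, while marginal sampling yields a near-zero correlation, which is exactly the information the new MCNPX source option is designed to preserve.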
Slat Cove Noise Modeling: A Posteriori Analysis of Unsteady RANS Simulations
NASA Technical Reports Server (NTRS)
Choudhari, Meelan; Khorrami, Mehdi R.; Lockard, David P.; Atkins, Harold L.; Lilley, Geoffrey M.
2002-01-01
A companion paper by Khorrami et al. demonstrates the feasibility of simulating the (nominally) self-sustained, large-scale unsteadiness within the leading-edge slat-cove region of multi-element airfoils using the unsteady Reynolds-Averaged Navier-Stokes (URANS) equations, provided that the turbulence production term in the underlying two-equation turbulence model is switched off within the cove region. In conjunction with a Ffowcs Williams-Hawkings solver, the URANS computations were shown to capture the dominant portion of the acoustic spectrum attributed to slat noise, as well as to reproduce the increased intensity of slat cove motions (and, correspondingly, of far-field noise) at the lower angles of attack. This paper examines that simulation database, augmented by additional simulations, with the objective of transitioning this apparent success to aeroacoustic predictions in an engineering context. As a first step toward this goal, the simulated flow and acoustic fields are compared with experiment and with a simplified analytical model. Rather intense near-field fluctuations in the simulated flow are found to be associated with unsteady separation along the slat bottom surface, relatively close to the slat cusp. The accuracy of the laminar-cove simulations in this near-wall region is raised as an open issue. The adjoint Green's function approach is also explored in an attempt to identify the most efficient noise source locations.
Numerical simulations of internal wave generation by convection in water.
Lecoanet, Daniel; Le Bars, Michael; Burns, Keaton J; Vasil, Geoffrey M; Brown, Benjamin P; Quataert, Eliot; Oishi, Jeffrey S
2015-06-01
Water's density maximum at 4°C makes it well suited to the study of internal gravity wave excitation by convection: an increasing temperature profile is unstable to convection below 4°C, but stably stratified above 4°C. We present numerical simulations of a waterlike fluid near its density maximum in a two-dimensional domain. We successfully model the damping of waves in the simulations using linear theory, provided we do not take the weak damping limit typically used in the literature. To isolate the physical mechanism exciting internal waves, we use the spectral code Dedalus to run several simplified model simulations of our more detailed simulation. We use data from the full simulation as source terms in two simplified models of internal-wave excitation by convection: bulk excitation by convective Reynolds stresses, and interface forcing via the mechanical oscillator effect. We find excellent agreement between the waves generated in the full simulation and in the simplified simulation implementing the bulk excitation mechanism. The interface forcing simulations overexcite high-frequency waves because they assume the excitation is by the "impulsive" penetration of plumes, which spreads energy to high frequencies. However, we find that the real excitation is instead by the "sweeping" motion of plumes parallel to the interface. Our results imply that the bulk excitation mechanism is a very accurate heuristic for internal-wave generation by convection.
NASA Astrophysics Data System (ADS)
Dunlap, L.; Li, C.; Dickerson, R. R.; Krotkov, N. A.
2015-12-01
Weather systems, particularly mid-latitude wave cyclones, have long been known to play an important role in the short-term variation of near-surface air pollution. Ground measurements and model simulations have demonstrated that stagnant air and minimal precipitation associated with high pressure systems are conducive to pollutant accumulation. With the passage of a cold front, built-up pollution is transported downwind of the emission sources or washed out by precipitation. This concept is important when studying long-term changes in the spatio-temporal pollution distribution, but has not been studied in detail from space. In this study, we focus on East Asia (especially industrialized eastern China), where numerous large power plants and other point sources, as well as area sources, emit large amounts of SO2, an important gaseous pollutant and a precursor of aerosols. Using data from the Aura Ozone Monitoring Instrument (OMI), we show that such a weather-driven distribution can indeed be discerned from satellite data by utilizing probability distribution functions (PDFs) of the SO2 column content. These PDFs are multimodal and give insight into the background pollution level at a given location and the contribution from local and upwind emission sources. From these PDFs it is possible to determine the frequency with which a given region has SO2 loading that exceeds the background amount. By comparing the OMI-observed long-term change in this frequency with meteorological data, we can gain insight into the effects of climate change (e.g., the weakening of the Asian monsoon) on regional air quality. Such insight allows for better interpretation of satellite measurements as well as better prediction of future pollution distribution as a changing climate gives way to changing weather patterns.
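The PDF-based diagnostic described above can be sketched as: build an empirical PDF of column amounts, take the dominant mode as the background level, and count the frequency of exceedances above it. The synthetic column values, bin count, and exceedance margin below are illustrative assumptions, not the study's parameters:

```python
import random

random.seed(7)

# Synthetic SO2 column amounts (DU): a background mode plus an episodic
# pollution mode, mimicking a multimodal distribution
columns = [max(0.0, random.gauss(0.3, 0.1)) for _ in range(800)] + \
          [max(0.0, random.gauss(1.5, 0.5)) for _ in range(200)]

# Empirical PDF via a histogram
bins = 40
lo, hi = min(columns), max(columns)
width = (hi - lo) / bins
counts = [0] * bins
for x in columns:
    counts[min(int((x - lo) / width), bins - 1)] += 1
pdf = [c / (len(columns) * width) for c in counts]

# Background level: center of the dominant (most populated) bin
background = lo + (counts.index(max(counts)) + 0.5) * width

# Frequency of columns exceeding the background by a fixed margin
margin = 0.5
exceed_freq = sum(1 for x in columns if x > background + margin) / len(columns)
print(background, exceed_freq)
```

Tracking `exceed_freq` over successive multi-year windows at a fixed location would give the kind of long-term frequency change the study compares against meteorological data.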
Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J
2013-04-21
Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of a spiral CT scan (scan range, initial angle, rotational direction, pitch, slice thickness, etc.). Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model specific to spiral CT scan simulations. The source model was hard-coded by modifying 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files of the DOSXYZnrc user code. To verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator, for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and compared with the MC simulations. The new phase-space source model was found to accurately simulate spiral CT scanning in a single simulation run. It also produced a dose distribution equivalent to that of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall to within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating spiral CT scan dose in the BEAMnrc/EGSnrc system.
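The table-movement idea (isocenter coordinates advancing with the accumulated beam angle) can be sketched as below. The linear relation and the single-row translation-per-rotation formula (pitch × slice thickness) are illustrative assumptions, not the actual DOSXYZnrc implementation:

```python
def spiral_isocenter_z(angle_deg, z_start, pitch, slice_thickness_mm,
                       start_angle_deg=0.0, direction=+1):
    """Isocenter z-position for a given gantry angle in a spiral scan.

    Assumes table translation per full rotation = pitch * slice_thickness
    (illustrative single-row geometry), with the couch advancing linearly
    with the accumulated gantry angle.
    """
    rotations = direction * (angle_deg - start_angle_deg) / 360.0
    return z_start + rotations * pitch * slice_thickness_mm

# Example: pitch 1.375, 10 mm collimation, two full rotations
zs = [spiral_isocenter_z(a, z_start=0.0, pitch=1.375, slice_thickness_mm=10.0)
      for a in (0.0, 360.0, 720.0)]
print(zs)  # table advances 13.75 mm per rotation
```

In the actual Mortran implementation this mapping would be evaluated per source particle, so a whole spiral acquisition runs as a single simulation.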
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty.
Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
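The two uncertainty methods named above (Monte Carlo simulation and first-order error analysis) can be sketched on a toy linear response. The response function, its coefficients, and the input distributions below are invented for illustration; they are not the water-quality model's actual relationships:

```python
import random
import statistics

random.seed(3)

# Toy water-quality response: simulated dissolved oxygen as a linear function
# of headwater DO, point-source DO, and a reaeration rate (illustrative only)
def sim_do(hw_do, ps_do, k_rea):
    return 0.6 * hw_do + 0.2 * ps_do + 5.0 * k_rea

# Input uncertainty: (mean, standard deviation) for each variable
inputs = {'hw_do': (8.0, 0.5), 'ps_do': (6.0, 0.8), 'k_rea': (0.3, 0.05)}

# Monte Carlo: sample the inputs and propagate to the output variance
samples = [sim_do(*(random.gauss(m, s) for m, s in inputs.values()))
           for _ in range(5000)]
mc_var = statistics.pvariance(samples)

# First-order error analysis: variance ~ sum of (dY/dXi * sigma_i)^2
coeffs = {'hw_do': 0.6, 'ps_do': 0.2, 'k_rea': 5.0}
foea_var = sum((coeffs[k] * s) ** 2 for k, (m, s) in inputs.items())
contrib = {k: (coeffs[k] * s) ** 2 / foea_var for k, (m, s) in inputs.items()}
print(mc_var, foea_var, contrib)
```

For a linear response the two variance estimates agree, and `contrib` apportions the output variability among the inputs, which is the kind of ranking the report uses to identify key sources of uncertainty.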
Basin-scale heterogeneity in Antarctic precipitation and its impact on surface mass variability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fyke, Jeremy; Lenaerts, Jan T. M.; Wang, Hailong
2017-11-15
Annually averaged precipitation in the form of snow, the dominant term of the Antarctic Ice Sheet surface mass balance, displays large spatial and temporal variability. Here we present an analysis of spatial patterns of regional Antarctic precipitation variability and their impact on integrated Antarctic surface mass balance variability simulated as part of a preindustrial 1800-year global, fully coupled Community Earth System Model simulation. Correlation and composite analyses based on this output allow for a robust exploration of Antarctic precipitation variability. We identify statistically significant relationships between precipitation patterns across Antarctica that are corroborated by climate reanalyses, regional modeling and ice core records. These patterns are driven by variability in large-scale atmospheric moisture transport, which itself is characterized by decadal- to centennial-scale oscillations around the long-term mean. We suggest that this heterogeneity in Antarctic precipitation variability has a dampening effect on overall Antarctic surface mass balance variability, with implications for regulation of Antarctic-sourced sea level variability, detection of an emergent anthropogenic signal in Antarctic mass trends and identification of Antarctic mass loss accelerations.
NASA One-Dimensional Combustor Simulation--User Manual for S1D_ML
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Paxson, Daniel E.
2014-01-01
The work presented in this paper promotes research leading to a closed-loop control system to actively suppress thermo-acoustic instabilities. To serve as a model for such a closed-loop control system, a one-dimensional combustor simulation has been written using MATLAB software tools. This MATLAB-based process is similar to a precursor one-dimensional combustor simulation that was formatted as FORTRAN 77 source code. The previous simulation process required modifying the FORTRAN 77 source code, recompiling, and relinking to create a new combustor simulation executable file. The MATLAB-based simulation does not require changes to the source code, recompiling, or linking. Furthermore, the MATLAB-based simulation can be run from script files within the MATLAB environment, or with a compiled copy of the executable file running in the Command Prompt window without requiring a licensed copy of MATLAB. This report presents a general simulation overview. Details regarding how to set up and initiate a simulation are also presented. Finally, the post-processing section describes the two types of files created while running the simulation, and it also includes simulation results for a default simulation included with the source code.
Masterson, John P.; Granato, Gregory E.
2013-01-01
The Rhode Island Water Resources Board is considering use of groundwater resources from the Big River Management Area in central Rhode Island because increasing water demands in Rhode Island may exceed the capacity of current sources. Previous water-resources investigations in this glacially derived, valley-fill aquifer system have focused primarily on the effects of potential groundwater-pumping scenarios on streamflow depletion; however, the effects of groundwater withdrawals on wetlands have not been assessed, and such assessments are a requirement of the State’s permitting process to develop a water supply in this area. A need for an assessment of the potential effects of pumping on wetlands in the Big River Management Area led to a cooperative agreement in 2008 between the Rhode Island Water Resources Board, the U.S. Geological Survey, and the University of Rhode Island. This partnership was formed with the goal of developing methods for characterizing wetland vegetation, soil type, and hydrologic conditions, and monitoring and modeling water levels for pre- and post-water-supply development to assess potential effects of groundwater withdrawals on wetlands. This report describes the hydrogeology of the area and the numerical simulations that were used to analyze the interaction between groundwater and surface water in response to simulated groundwater withdrawals. The results of this analysis suggest that, given the hydrogeologic conditions in the Big River Management Area, a standard 5-day aquifer test may not be sufficient to determine the effects of pumping on water levels in nearby wetlands. Model simulations showed water levels beneath Reynolds Swamp declined by about 0.1 foot after 5 days of continuous pumping, but continued to decline by an additional 4 to 6 feet as pumping times were increased from a 5-day simulation period to a simulation period representative of long-term average monthly conditions. 
This continued decline in water levels with increased pumping time is related to a shift in the primary source of water to the pumped wells: from aquifer storage during the early-time (5-day) simulation, to induced infiltration from the flooded portion of the Big River (the southernmost extent of the Flat River Reservoir) during the months of March through October, or to captured groundwater discharge to this portion of the Big River when the downstream Flat River Reservoir is drained for weed control during the months of November through February, as was the case for the long-term monthly conditions.
NASA Astrophysics Data System (ADS)
Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.
2015-01-01
Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Daiichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential to evaluate the environmental impacts and the resultant radiological doses to the public. In this paper, we estimate the detailed atmospheric releases during the accident using a reverse estimation method which calculates the release rates of radionuclides by comparing measurements of the air concentration of a radionuclide, or of its dose rate in the environment, with those calculated by atmospheric and oceanic transport, dispersion and deposition models. The atmospheric and oceanic models used are WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and SEA-GEARN-FDM (Finite difference oceanic dispersion model), both developed by the authors. A sophisticated deposition scheme, which deals with dry and fog-water depositions, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March, due to the wet venting and hydrogen explosion at Unit 1; midnight of 14 March, when the SRV (safety relief valve) was opened three times at Unit 2; the morning and night of 15 March; and the morning of 16 March. According to the simulation results, the highest radioactive contamination areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates.
The simulation by WSPEEDI-II using the new source term reproduced the local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rate obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (Modèle Lagrangien de Dispersion de Particules d'ordre zéro: MLDP0, Hybrid Single Particle Lagrangian Integrated Trajectory Model: HYSPLIT, and Met Office's Numerical Atmospheric-dispersion Modelling Environment: NAME) for regional and global calculations, and the calculated results showed good agreement with observed air concentration and surface deposition of 137Cs in eastern Japan.
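In its simplest linear form, this kind of reverse (source-term) estimation reduces to scaling a unit-release simulation by the ratio of observed to simulated concentration. The sketch below uses invented numbers and ignores the segment overlap, deposition, and multi-model complexities handled by WSPEEDI-II:

```python
def estimate_release_rate(observed, simulated_unit, unit_rate=1.0):
    """Release rate from one observation, assuming the environmental
    concentration scales linearly with the source term:
    R ~ unit_rate * C_obs / C_sim(unit release)."""
    return unit_rate * observed / simulated_unit

# Segment-by-segment estimates from (observed, unit-run simulated) pairs,
# one pair per release period; units and values are hypothetical
pairs = [(12.0, 3.0), (40.0, 5.0), (9.0, 4.5)]
rates = [estimate_release_rate(o, s, unit_rate=1.0e12) for o, s in pairs]  # Bq/h
print(rates)
```

Linearity holds because atmospheric transport and dispersion are linear in the source strength for a passive tracer, which is what makes the unit-release scaling valid period by period.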
NASA Astrophysics Data System (ADS)
Dufour, Gaëlle; Albergel, Armand; Balkanski, Yves; Beekmann, Matthias; Cai, Zhaonan; Fortems-Cheiney, Audrey; Cuesta, Juan; Derognat, Claude; Eremenko, Maxim; Foret, Gilles; Hauglustaine, Didier; Lachatre, Matthieu; Laurent, Benoit; Liu, Yi; Meng, Fan; Siour, Guillaume; Tao, Shu; Velay-Lasry, Fanny; Zhang, Qijie; Zhang, Yuli
2017-04-01
The rapid economic development and urbanization of China during the last decades resulted in rising pollutant emissions leading to amongst the largest pollutant concentrations in the world for the major pollutants (ozone, PM2.5, and PM10). Robust monitoring and forecasting systems associated with downstream services providing comprehensive risk indicators are highly needed to establish efficient pollution mitigation strategies. In addition, a precise evaluation of the present and future impacts of Chinese pollutant emissions is of importance to quantify: first, the consequences of pollutants export on atmospheric composition and air quality all over the globe; second, the additional radiative forcing induced by the emitted and produced short-lived climate forcers (ozone and aerosols); third, the long-term health consequences of pollution exposure. To achieve this, a detailed understanding of East Asian pollution is necessary. The French PolEASIA project aims at addressing these different issues by providing a better quantification of major pollutants sources and distributions as well as of their recent and future evolution. The main objectives, methodologies and tools of this starting 4-year project will be presented. An ambitious synergistic and multi-scale approach coupling innovative satellite observations, in situ measurements and chemical transport model simulations will be developed to characterize the spatial distribution, the interannual to daily variability and the trends of the major pollutants (ozone and aerosols) and their sources over East Asia, and to quantify the role of the different processes (emissions, transport, chemical transformation) driving the observed pollutant distributions. A particular attention will be paid to assess the natural and anthropogenic contributions to East Asian pollution. 
Progress made in the understanding of pollutant sources, especially in terms of modeling of pollution over East Asia and advanced numerical approaches such as inverse modeling, will serve the development of an efficient and marketable forecasting system for regional outdoor air pollution. The performance of this upgraded forecasting system will be evaluated and promoted to ensure good visibility of the French technology. In addition, the contribution of Chinese pollution to the regional and global atmospheric composition, as well as the resulting radiative forcing of short-lived species, will be determined using both satellite observations and model simulations. Health Impact Assessment (HIA) methods coupled with model simulations will be used to estimate the long-term impacts of exposure to pollutants (PM2.5 and ozone) on cardiovascular and respiratory mortality. First results obtained in this framework will be presented.
NASA Astrophysics Data System (ADS)
Piper, S. C.; Keeling, R. F.; Patra, P. K.; Welp, L. R.
2011-12-01
We present an analysis of the trends and interannual variations in the phase and amplitude of the seasonal cycle of atmospheric CO2 at Northern Hemisphere stations of the Scripps network from 1958 to 2010. The seasonal cycle here primarily reflects biospheric activity over large land regions and provides a strong constraint on NEE. The analysis includes observational records at Pt. Barrow (71°N), La Jolla (33°N), and Kumukahi (20°N), in addition to Mauna Loa (20°N), Station Papa (50°N), and Alert, Canada (82°N). We compare observations with forward atmospheric transport simulations which employ interannually varying reanalyzed winds with seasonally variable terrestrial biospheric, oceanic and fossil fuel sources. The observed increase in seasonal amplitude since 1958 has varied among stations and with time at each station, and the temporal changes often have not been coherent among stations. Since the 1960s, the amplitude has increased by less than 10% at Mauna Loa and by 45% at Barrow, Alaska. The record at Alert, which started in 1986, appears to match variations at Barrow, and recent measurements at Station Papa in the Alaskan Gyre suggest an increase intermediate between those at Mauna Loa and Point Barrow. The most striking increase has been at midlatitudes at La Jolla, about 60% since the late 1950s, in part resulting from changes in local meteorological conditions. For Barrow and Mauna Loa, the amplitude increased rapidly from 1970 to 1990, after which the increase slowed significantly at Barrow and reversed at Mauna Loa. The variations at Alert were similar to those at Barrow, suggesting that both records are representative of large-scale Arctic air masses. Kumukahi and Mauna Loa are located at the same latitude but different altitudes. For the common years of record, 1980-2000, the amplitude at both stations varied interannually but without a long-term trend.
After 2000, however, the amplitude at Mauna Loa increased dramatically until 2004 and then decreased until 2009, while the amplitude at Kumukahi increased slowly. These differences reflect different influences of source regions and transport at the two stations. Climate variations are an important driver of both the long-term trend and the shorter-term interannual variations in the seasonal amplitude. However, several studies covering short periods suggest that atmospheric transport also has an important influence. Model simulations with interannually varying winds for the entire Mauna Loa record, from 1958 to 2010, indicate that the long-term advance in the observed phase at Mauna Loa, by about 8 days in 50 years, is produced by atmospheric transport up until 1990, but not afterward. Observed variations in the seasonal amplitude, however, are poorly simulated, suggesting that variations in terrestrial sources, perhaps driven by temperature before 1990 and by drought afterwards, may be important, as suggested in previous studies. Findings for the remaining stations will be presented. As a whole, temporal and spatial variations in amplitude and phase reflect a complex interplay of climate-driven changes in sources and atmospheric transport.
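The quantity at the heart of the abstract above, the amplitude and phase of the CO2 seasonal cycle, can be estimated by projecting a monthly series onto its first annual harmonic. The sketch below uses a synthetic series with made-up numbers, not Scripps data; for a pure sinusoid sampled over whole years the projection recovers amplitude and phase exactly, while real records would first need detrending and gap handling.

```python
import math

def seasonal_amplitude_phase(monthly, period=12):
    """Estimate first-harmonic amplitude and phase (in months) of a
    monthly series via discrete Fourier projection at the annual
    frequency; exact for a sinusoid sampled over whole periods."""
    n = len(monthly)
    mean = sum(monthly) / n
    a = sum((x - mean) * math.cos(2 * math.pi * t / period)
            for t, x in enumerate(monthly)) * 2 / n
    b = sum((x - mean) * math.sin(2 * math.pi * t / period)
            for t, x in enumerate(monthly)) * 2 / n
    amplitude = math.hypot(a, b)
    phase_months = math.atan2(b, a) * period / (2 * math.pi)
    return amplitude, phase_months

# Synthetic 10-year record: 6 ppm seasonal swing peaking in month 4.
series = [400 + 6.0 * math.cos(2 * math.pi * (t - 4) / 12)
          for t in range(120)]
amp, phase = seasonal_amplitude_phase(series)
```

Tracking `amp` and `phase` year by year, rather than over the whole record at once, is one simple way to expose the amplitude trends and the roughly 8-day phase advance discussed above.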
Numerical modelling of multiphase liquid-vapor-gas flows with interfaces and cavitation
NASA Astrophysics Data System (ADS)
Pelanti, Marica
2017-11-01
We are interested in the simulation of multiphase flows where the dynamical appearance of vapor cavities and evaporation fronts in a liquid is coupled to the dynamics of a third non-condensable gaseous phase. We describe these flows by a single-velocity three-phase compressible flow model composed of the phasic mass and total energy equations, the volume fraction equations, and the mixture momentum equation. The model includes stiff mechanical and thermal relaxation source terms for all the phases, and chemical relaxation terms to describe mass transfer between the liquid and vapor phases of the species that may undergo transition. The flow equations are solved by a mixture-energy-consistent finite volume wave propagation scheme, combined with simple and robust procedures for the treatment of the stiff relaxation terms. An analytical study of the characteristic wave speeds of the hierarchy of relaxed models associated with the parent model system is also presented. We show several numerical experiments, including two-dimensional simulations of underwater explosive phenomena where highly pressurized gases trigger cavitation processes close to a rigid surface or to a free surface. This work was supported by the French Government Grant DGA N. 2012.60.0011.00.470.75.01, and partially by the Norwegian Grant RCN N. 234126/E30.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Brian M.; Larson, Vincent E.
Microphysical processes, such as the formation, growth, and evaporation of precipitation, interact with variability and covariances (e.g., fluxes) in moisture and heat content. For instance, evaporation of rain may produce cold pools, which in turn may trigger fresh convection and precipitation. These effects are usually omitted or else crudely parameterized at subgrid scales in weather and climate models. A more formal approach is pursued here, based on predictive, horizontally averaged equations for the variances, covariances, and fluxes of moisture and heat content. These higher-order moment equations contain microphysical source terms. The microphysics terms can be integrated analytically, given a suitably simple warm-rain microphysics scheme and an approximate assumption about the multivariate distribution of cloud-related and precipitation-related variables. Performing the integrations provides exact expressions within an idealized context. A large-eddy simulation (LES) of a shallow precipitating cumulus case is performed here, and it indicates that the microphysical effects on (co)variances and fluxes can be large. In some budgets and altitude ranges, they are dominant terms. The analytic expressions for the integrals are implemented in a single-column, higher-order closure model. Interactive single-column simulations agree qualitatively with the LES. The analytic integrations form a parameterization of microphysical effects in their own right, and they also serve as benchmark solutions that can be compared to non-analytic integration methods.
Evaluation of a Consistent LES/PDF Method Using a Series of Experimental Spray Flames
NASA Astrophysics Data System (ADS)
Heye, Colin; Raman, Venkat
2012-11-01
A consistent method for the evolution of the joint-scalar probability density function (PDF) transport equation is proposed for application to large eddy simulation (LES) of turbulent reacting flows containing evaporating spray droplets. PDF transport equations provide the benefit of including the chemical source term in closed form; however, additional terms describing LES subfilter mixing must be modeled. The recent availability of detailed experimental measurements provides model validation data for the wide range of evaporation rates and combustion regimes known to occur in spray flames. In this work, the experimental data will be used to investigate the impact of droplet mass loading and evaporation rates on the subfilter scalar PDF shape in comparison with conventional flamelet models. In addition, existing model term closures in the PDF transport equations are evaluated with a focus on their validity in the presence of regime changes.
Emergent Constraints for Cloud Feedbacks and Climate Sensitivity
Klein, Stephen A.; Hall, Alex
2015-10-26
Emergent constraints are physically explainable empirical relationships between characteristics of the current climate and long-term climate prediction that emerge in collections of climate model simulations. With the prospect of constraining long-term climate prediction, scientists have recently uncovered several emergent constraints related to long-term cloud feedbacks. We review these proposed emergent constraints, many of which involve the behavior of low-level clouds, and discuss criteria to assess their credibility. With further research, some of the cases we review may eventually become confirmed emergent constraints, provided they are accompanied by credible physical explanations. Because confirmed emergent constraints identify a source of model error that projects onto climate predictions, they deserve extra attention from those developing climate models and climate observations. While a systematic bias cannot be ruled out, it is noteworthy that the promising emergent constraints suggest larger cloud feedback and hence climate sensitivity.
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position, modelling hindered settling and bulk flows; a singular source term describing the feed mechanism; a degenerating term accounting for sediment compressibility; and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal, as well as to investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation, whereas calibration and validation are not pursued.
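As a minimal illustration of the hindered-settling part of such a model (the singular feed term, compression and dispersion are all omitted, and the much more careful flux treatment of the paper is replaced by plain first-order upwinding), the sketch below advances dC/dt + df(C)/dz = 0 for a batch settling column using the Vesilind flux f(C) = v0·C·exp(-r·C). The parameter values are illustrative, not calibrated.

```python
import math

# Illustrative Vesilind hindered-settling parameters (not calibrated)
v0, r_v = 2.0, 0.5
nz, depth, dt, steps = 50, 4.0, 0.005, 200
dz = depth / nz                       # dt*v0/dz = 0.125 satisfies CFL
C = [3.0 if i < nz // 2 else 0.5 for i in range(nz)]   # initial profile

def flux(c):
    """Vesilind hindered-settling flux f(C) = v0*C*exp(-r*C)."""
    return v0 * c * math.exp(-r_v * c)

for _ in range(steps):
    # Interface fluxes, upwinded from the cell above (settling is
    # downward); zero flux through the top and bottom of the column.
    F = [0.0] + [flux(C[i]) for i in range(nz - 1)] + [0.0]
    C = [C[i] - dt / dz * (F[i + 1] - F[i]) for i in range(nz)]

mass = sum(C) * dz     # closed column: total solids mass is conserved
```

Because the interface fluxes telescope and vanish at both boundaries, the update conserves mass to rounding error, which is the basic consistency property the paper's far more sophisticated scheme also guarantees while additionally handling the discontinuous and degenerate terms.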
A probabilistic analysis of cumulative carbon emissions and long-term planetary warming
Fyke, Jeremy Garmeson; Matthews, H. Damon
2015-11-16
Efforts to mitigate and adapt to long-term climate change could benefit greatly from probabilistic estimates of cumulative carbon emissions due to fossil fuel burning and the resulting CO2-induced planetary warming. Here we demonstrate the use of a reduced-form model to project these variables. We performed simulations using a large-ensemble framework with parametric uncertainty sampled to produce distributions of future cumulative emissions and consequent planetary warming. A hindcast ensemble of simulations captured 1980–2012 historical CO2 emissions trends, and an ensemble of future projection simulations generated a distribution of emission scenarios that qualitatively resembled the suite of Representative and Extended Concentration Pathways. The resulting cumulative carbon emission and temperature change distributions are characterized by 5–95th percentile ranges of 0.96–4.9 teratonnes C (Tt C) and 1.4 °C–8.5 °C, respectively, with 50th percentiles at 3.1 Tt C and 4.7 °C. Within the wide range of policy-related parameter combinations that produced these distributions, we found that low-emission simulations were characterized by both high carbon prices and low costs of non-fossil fuel energy sources, suggesting the importance of these two policy levers in particular for avoiding dangerous levels of climate warming. With this analysis we demonstrate a probabilistic approach to the challenge of identifying strategies for limiting cumulative carbon emissions and assessing likelihoods of surpassing dangerous temperature thresholds.
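The ensemble logic described above, sampling uncertain parameters and reading percentile ranges off the resulting distribution, can be sketched in a few lines. The distributions and numbers below are illustrative stand-ins, not the paper's parameters; the reduced-form relation is the familiar near-proportionality of warming to cumulative emissions.

```python
import random

random.seed(0)
N = 10_000
warming = []
for _ in range(N):
    # Illustrative distributions, not the paper's calibrated values:
    emissions = random.uniform(1.0, 5.0)   # cumulative emissions, Tt C
    tcre = random.gauss(1.6, 0.4)          # warming per Tt C
    warming.append(max(0.0, tcre * emissions))

# Empirical 5th / 50th / 95th percentiles of the warming distribution
warming.sort()
p05, p50, p95 = (warming[int(q * N)] for q in (0.05, 0.50, 0.95))
```

The paper's reported 5–95th percentile ranges are exactly this kind of summary, computed over a large ensemble of a far richer reduced-form model with many more sampled parameters.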
EEG source localization: Sensor density and head surface coverage.
Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don
2015-12-30
The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources, using common linear inverse weight techniques, at different depths. Several inverse methods are examined, using a commonly adopted head conductivity model. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with high spatial sampling of the head surface electrodes. The most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces.
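The "linear inverse weight techniques" examined above all share the same algebraic core: a weight matrix mapping sensor potentials back to source amplitudes. A toy minimum-norm estimate, s = Gᵀ(GGᵀ + λI)⁻¹m, is sketched below; the 2-sensor, 3-source lead field G is entirely made up for illustration, whereas real lead fields come from the conductivity and geometry modelling the study describes.

```python
# Toy minimum-norm inverse: 2 sensors, 3 candidate sources.
# Lead-field values are invented for illustration only.
G = [[1.0, 0.5, 0.2],
     [0.3, 0.8, 0.6]]
m = [0.9, 1.1]     # "measured" sensor potentials (illustrative)
lam = 0.0          # Tikhonov regularisation weight

# 2x2 normal matrix A = G G^T + lam*I, inverted in closed form
A = [[sum(G[i][k] * G[j][k] for k in range(3)) + (lam if i == j else 0.0)
      for j in range(2)] for i in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

# s_hat = G^T * Ainv * m  (minimum-norm source estimate)
w = [sum(Ainv[i][j] * m[j] for j in range(2)) for i in range(2)]
s_hat = [sum(G[i][k] * w[i] for i in range(2)) for k in range(3)]

# With lam = 0 the estimate reproduces the measurements exactly
m_pred = [sum(G[i][k] * s_hat[k] for k in range(3)) for i in range(2)]
```

The study's sampling question then becomes concrete: adding sensors adds rows to G, and rows covering the inferior head surface constrain deep sources that the superior rows alone leave poorly determined.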
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, J; Micka, J; Culberson, W
Purpose: To determine the in-air azimuthal anisotropy and in-water dose distribution for the 1 cm length of the CivaString {sup 103}Pd brachytherapy source through measurements and Monte Carlo (MC) simulations. American Association of Physicists in Medicine Task Group No. 43 (TG-43) dosimetry parameters were also determined for this source. Methods: The in-air azimuthal anisotropy of the source was measured with a NaI scintillation detector and simulated with the MCNP5 radiation transport code. Measured and simulated results were normalized to their respective mean values and compared. The TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function for this source were determined from LiF:Mg,Ti thermoluminescent dosimeter (TLD) measurements and MC simulations. The impact of {sup 103}Pd well-loading variability on the in-water dose distribution was investigated using MC simulations by comparing the dose distribution for a source model with four wells of equal strength to that for a source model with strengths increased by 1% for two of the four wells. Results: NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy showed that ≥95% of the normalized data were within 1.2% of the mean value. TLD measurements and MC simulations of the TG-43 dose-rate constant, line-source radial dose function, and 2D anisotropy function agreed to within the experimental TLD uncertainties (k=2). MC simulations showed that a 1% variability in {sup 103}Pd well-loading resulted in changes of <0.1%, <0.1%, and <0.3% in the TG-43 dose-rate constant, radial dose distribution, and polar dose distribution, respectively. Conclusion: The CivaString source has a high degree of azimuthal symmetry as indicated by the NaI scintillation detector measurements and MC simulations of the in-air azimuthal anisotropy. TG-43 dosimetry parameters for this source were determined from TLD measurements and MC simulations.
{sup 103}Pd well-loading variability results in minimal variations in the in-water dose distribution according to MC simulations. This work was partially supported by CivaTech Oncology, Inc. through an educational grant for Joshua Reed, John Micka, Wesley Culberson, and Larry DeWerd and through research support for Mark Rivard.
Density and white light brightness in looplike coronal mass ejections - Temporal evolution
NASA Technical Reports Server (NTRS)
Steinolfson, R. S.; Hundhausen, A. J.
1988-01-01
Three ambient coronal models suitable for studies of time-dependent phenomena were used to investigate the propagation of coronal mass ejections initiated in each atmosphere by an identical energy source. These models included those of a static corona with a dipole magnetic field, developed by Dryer et al. (1979); a steady polytropic corona with an equatorial coronal streamer, developed by Steinolfson et al. (1982); and Steinolfson's (1988) model of a heated corona with an equatorial coronal streamer. The results indicated that the first model does not adequately represent the general characteristics of observed looplike mass ejections, and the second model simulated only some of the observed features. Only the third model, which included a heating term and a streamer, was found to yield accurate simulation of the mass ejection observations.
Computation of nonlinear ultrasound fields using a linearized contrast source method.
Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A
2013-08-01
Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full-wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges poorly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
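The convergence issue driving the paper can be seen in miniature. The Neumann iteration u ← s + Ku for the integral equation (I − K)u = s converges only when the contrast operator K is a contraction (spectral radius below one). Below, K is a toy 2×2 matrix standing in for the acoustic contrast operator, and the numbers are invented for illustration.

```python
def neumann_solve(K, s, iters=200):
    """Neumann iteration u <- s + K u for (I - K) u = s."""
    u = list(s)
    for _ in range(iters):
        u = [s[i] + sum(K[i][j] * u[j] for j in range(2))
             for i in range(2)]
    return u

# Weak "contrast": spectral radius 0.3, so the iteration converges.
K_weak = [[0.2, 0.1],
          [0.0, 0.3]]
s = [1.0, 1.0]
u = neumann_solve(K_weak, s)

# Residual of (I - K) u = s should be near zero after convergence.
res = [u[i] - sum(K_weak[i][j] * u[j] for j in range(2)) - s[i]
       for i in range(2)]
```

For a K with spectral radius above one, the same iterates grow without bound, which is exactly why strong contrast sources push the authors toward linearization plus a Krylov solver such as Bi-CGSTAB instead of the plain Neumann series.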
Design and Performance of a Triple Source Air Mass Zero Solar Simulator
NASA Technical Reports Server (NTRS)
Jenkins, Phillip; Scheiman, David; Snyder, David
2005-01-01
Simulating the sun in a laboratory for the purpose of measuring solar cells has long been a challenge for engineers and scientists. Multi-junction cells demand higher fidelity of a solar simulator than do single-junction cells, due to the need for close spectral matching as well as AM0 intensity. A GaInP/GaAs/Ge solar cell, for example, requires spectral matching in three distinct spectral bands (figure 1). A commercial single-source high-pressure xenon arc solar simulator, such as the Spectrolab X-25 at NASA Glenn Research Center, can match the top two junctions of a GaInP/GaAs/Ge cell to within 1.3% mismatch, with the GaAs cell receiving slightly more current than required. The Ge bottom cell, however, is mismatched +8.8%. Multi-source simulators are designed to match the current for all junctions but typically have smaller illuminated areas, less uniformity and less beam collimation compared to an X-25 simulator. Our intent when designing a multi-source simulator was to preserve as many aspects of the X-25 as possible while adding multi-source capability.
Long-term monitoring on environmental disasters using multi-source remote sensing technique
NASA Astrophysics Data System (ADS)
Kuo, Y. C.; Chen, C. F.
2017-12-01
Environmental disasters are extreme events within the earth's system that cause deaths and injuries to humans, as well as damage and loss of valuable assets such as buildings, communication systems, farmland and forests. In disaster management, a large amount of multi-temporal spatial data is required. Multi-source remote sensing data with different spatial, spectral and temporal resolutions is widely applied to environmental disaster monitoring. With multi-source and multi-temporal high-resolution images, we conduct rapid, systematic and continuous observations of economic damage and environmental disasters on earth, based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems) and ground investigation. The advantages of UAS technology include great mobility, real-time availability and flexibility under a wider range of weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed waste and pine pests at different scales. Meanwhile, digital photogrammetry can be applied, using the camera's interior and exterior orientation parameters, to produce Digital Surface Model (DSM) data. The latest terrain environment information is simulated using DSM data and can serve as a reference for disaster recovery in the future.
A dual-porosity model for simulating solute transport in oil shale
Glover, K.C.
1987-01-01
A model is described for simulating three-dimensional groundwater flow and solute transport in oil shale and associated geohydrologic units. The model treats oil shale as a dual-porosity medium by simulating flow and transport within fractures using the finite-element method. Diffusion of solute between fractures and the essentially static water of the shale matrix is simulated by including an analytical solution that acts as a source-sink term in the differential equation of solute transport. While knowledge of fracture orientation and spacing is needed to use the model effectively, it is not necessary to map the locations of individual fractures. The computer program listed in the report incorporates many of the features of previous dual-porosity models while retaining a practical approach to solving field problems. As a result the theory of solute transport is not extended in any appreciable way. The emphasis is on bringing together various aspects of solute transport theory in a manner that is particularly suited to the unusual groundwater flow and solute transport characteristics of oil shale systems. (Author's abstract)
Da Silva, David; Qin, Liangchun; DeBuse, Carolyn; DeJong, Theodore M
2014-09-01
Developing a conceptual and functional framework for simulating annual long-term carbohydrate storage and mobilization in trees has been a weak point for virtually all tree models. This paper provides a novel approach for solving this problem using empirical field data and details of structural components of simulated trees to estimate the total carbohydrate stored over a dormant season and available for mobilization during spring budbreak. The seasonal patterns of mobilization and storage of non-structural carbohydrates in bark and wood of the scion and rootstock crowns of the trunks of peach (Prunus persica) trees were analysed subsequent to treatments designed to maximize differences in source-sink behaviour during the growing season. Mature peach trees received one of three treatments (defruited and no pruning, severe pruning to 1·0 m, and unthinned with no pruning) in late winter, just prior to budbreak. Selected trees of each treatment were harvested at four times (March, June, August and November) and slices of trunk and root crown tissue above and below the graft union were removed for carbohydrate analysis. Inner bark and xylem tissues from the first to fifth rings were separated and analysed for non-structural carbohydrates. Data from these experiments were then used to estimate the amount of non-structural carbohydrates available for mobilization and to parameterize a carbohydrate storage sub-model in the functional-structural L-PEACH model. The mass fraction of carbohydrates in all sample tissues decreased from March to June, but the decrease was greatest in the severely pruned and unthinned treatments. November carbohydrate mass fractions in all tissues recovered to values similar to those in the previous March, except in the older xylem rings of the severely pruned and unthinned treatment. Carbohydrate storage sink capacity in trunks was empirically estimated from the mean maximum measured trunk non-structural carbohydrate mass fractions. 
The carbohydrate storage source available for mobilization was estimated from these maximum mass fractions and the early summer minimum mass fractions remaining in these tissues in the severe treatments that maximized mobilization of stored carbohydrates. The L-PEACH sink-source carbohydrate distribution framework was then used along with simulated tree structure to successfully simulate annual carbohydrate storage sink and source behaviour over years. The sink-source concept of carbohydrate distribution within a tree was extended to include winter carbohydrate storage and spring mobilization by considering the storage sink and source as a function of the collective capacity of active xylem and phloem tissue of the tree, and its annual behaviour was effectively simulated using the L-PEACH functional-structural plant model.
Verification of target motion effects on SAR imagery using the Gotcha GMTI challenge dataset
NASA Astrophysics Data System (ADS)
Hack, Dan E.; Saville, Michael A.
2010-04-01
This paper investigates the relationship between a ground moving target's kinematic state and its SAR image. While effects such as cross-range offset, defocus, and smearing appear well understood, their derivations in the literature typically employ simplifications of the radar/target geometry and assume point scattering targets. This study adopts a geometrical model for understanding target motion effects in SAR imagery, termed the target migration path, and focuses on experimental verification of predicted motion effects using both simulated and empirical datasets based on the Gotcha GMTI challenge dataset. Specifically, moving target imagery is generated from three data sources: first, simulated phase history for a moving point target; second, simulated phase history for a moving vehicle derived from a simulated Mazda MPV X-band signature; and third, empirical phase history from the Gotcha GMTI challenge dataset. Both simulated target trajectories match the truth GPS target position history from the Gotcha GMTI challenge dataset, allowing direct comparison between all three imagery sets and the predicted target migration path. This paper concludes with a discussion of the parallels between the target migration path and the measurement model within a Kalman filtering framework, followed by conclusions.
AxonPacking: An Open-Source Software to Simulate Arrangements of Axons in White Matter
Mingasson, Tom; Duval, Tanguy; Stikov, Nikola; Cohen-Adad, Julien
2017-01-01
HIGHLIGHTS: AxonPacking: open-source software for simulating white matter microstructure. Validation on a theoretical disk packing problem. Reproducible and stable for various densities and diameter distributions. Can be used to study the interplay between myelin/fiber density and restricted fraction. Quantitative Magnetic Resonance Imaging (MRI) can provide parameters that describe white matter microstructure, such as the fiber volume fraction (FVF), the myelin volume fraction (MVF) or the axon volume fraction (AVF) via the fraction of restricted water (fr). While already being used for clinical application, the complex interplay between these parameters requires thorough validation via simulations. These simulations require a realistic, controlled and adaptable model of the white matter axons with the surrounding myelin sheath. While useful algorithms already exist to perform this task, none of them combines optimisation of axon packing, presence of a myelin sheath and availability as free and open-source software. Here, we introduce a novel disk packing algorithm that addresses these issues. The performance of the algorithm is tested in terms of reproducibility over 50 runs, resulting density, and stability over iterations. This tool was then used to derive multiple values of FVF and to study the impact of this parameter on fr and MVF in light of the known microstructure based on histology samples. The standard deviation of the axon density over runs was lower than 10^-3, and the expected hexagonal packing for monodisperse disks was obtained with a density close to the optimal density (obtained: 0.892; theoretical: 0.907). Using an FVF ranging within [0.58, 0.82] and a mean inter-axon gap ranging within [0.1, 1.1] μm, MVF ranged within [0.32, 0.44] and fr ranged within [0.39, 0.71], which is consistent with the histology.
The proposed algorithm is implemented in the open-source software AxonPacking (https://github.com/neuropoly/axonpacking) and can be useful for validating diffusion models as well as for enabling researchers to study the interplay between microstructure parameters when evaluating qMRI methods. PMID:28197091
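The core of any such disk packing algorithm is an overlap-resolution step: disks closer than one diameter are pushed apart along their centre line. The toy routine below mirrors only that repulsion idea; AxonPacking's actual algorithm, boundary handling and parameters differ, and the radii and positions here are made up.

```python
import math

def resolve_overlaps(centers, radius, iters=100):
    """Push overlapping equal-radius disks apart along their centre
    line, half the deficit each, until no pair is closer than 2*radius.
    A toy repulsion step, not the AxonPacking algorithm itself."""
    c = [list(p) for p in centers]
    for _ in range(iters):
        for i in range(len(c)):
            for j in range(i + 1, len(c)):
                dx = c[j][0] - c[i][0]
                dy = c[j][1] - c[i][1]
                d = math.hypot(dx, dy)
                overlap = 2 * radius - d
                if d > 0 and overlap > 0:
                    ux, uy = dx / d, dy / d
                    c[i][0] -= ux * overlap / 2
                    c[i][1] -= uy * overlap / 2
                    c[j][0] += ux * overlap / 2
                    c[j][1] += uy * overlap / 2
    return c

# Two overlapping disks separate to exactly one diameter (0.1).
out = resolve_overlaps([(0.50, 0.5), (0.55, 0.5)], radius=0.05)
gap = math.hypot(out[1][0] - out[0][0], out[1][1] - out[0][1])
```

Once a non-overlapping configuration is reached, the fiber density of the paper is simply N·πr² divided by the enclosing area, which is the quantity the authors track for reproducibility and compare with the hexagonal optimum of about 0.907.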
Analysis of the SPS Long Term Orbit Drifts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velotti, Francesco; Bracco, Chiara; Cornelis, Karel
2016-06-01
The Super Proton Synchrotron (SPS) is the last accelerator in the Large Hadron Collider (LHC) injector chain, and has to deliver the two high-intensity 450 GeV proton beams to the LHC. The transport from the SPS to the LHC is done through the two Transfer Lines (TL), TI2 and TI8, for Beam 1 (B1) and Beam 2 (B2) respectively. During the first LHC operation period, Run 1, a long-term drift of the SPS orbit was observed, causing changes in the LHC injection due to the resulting changes in the TL trajectories. This translated into a longer LHC turnaround because of the necessity to periodically correct the TL trajectories in order to preserve the beam quality at injection into the LHC. Different sources for the SPS orbit drifts have been investigated; each of them can account only partially for the total orbit drift observed. In this paper, the possible sources of such drift are described, together with the simulated and measured effects they cause. Possible solutions and countermeasures are also discussed.
Balanced Central Schemes for the Shallow Water Equations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We present a two-dimensional, well-balanced, central-upwind scheme for approximating solutions of the shallow water equations in the presence of a stationary bottom topography on triangular meshes. Our starting point is the recent central scheme of Kurganov and Petrova (KP) for approximating solutions of conservation laws on triangular meshes. In order to extend this scheme from systems of conservation laws to systems of balance laws one has to find an appropriate discretization of the source terms. We first show that for general triangulations there is no discretization of the source terms that corresponds to a well-balanced form of the KP scheme. We then derive a new variant of a central scheme that can be balanced on triangular meshes. We note in passing that it is straightforward to extend the KP scheme to general unstructured conformal meshes. This extension allows us to recover our previous well-balanced scheme on Cartesian grids. We conclude with several simulations, verifying the second-order accuracy of our scheme as well as its well-balanced properties.
Polarization and long-term variability of Sgr A* X-ray echo
NASA Astrophysics Data System (ADS)
Churazov, E.; Khabibullin, I.; Ponti, G.; Sunyaev, R.
2017-06-01
We use a model of the molecular gas distribution within ˜100 pc from the centre of the Milky Way (Kruijssen, Dale & Longmore) to simulate time evolution and polarization properties of the reflected X-ray emission, associated with the past outbursts from Sgr A*. While this model is too simple to describe the complexity of the true gas distribution, it illustrates the importance and power of long-term observations of the reflected emission. We show that the variable part of X-ray emission observed by Chandra and XMM-Newton from prominent molecular clouds is well described by a pure reflection model, providing strong support of the reflection scenario. While the identification of Sgr A* as a primary source for this reflected emission is already a very appealing hypothesis, a decisive test of this model can be provided by future X-ray polarimetric observations, which will allow placing constraints on the location of the primary source. In addition, X-ray polarimeters (like, e.g. XIPE) have sufficient sensitivity to constrain the line-of-sight positions of molecular complexes, removing major uncertainty in the model.
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying the surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provides an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm has been shown to be one of the most promising algorithms among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
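The core machinery described here, a sparsity-inducing ℓ1 penalty minimized with the alternating direction method of multipliers (ADMM), can be sketched in isolation. The snippet below applies ADMM to a plain Lasso problem; it is not the SISSY algorithm itself (which imposes structured sparsity through a spatial operator on the cortical mesh), and the problem sizes and penalties are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 40
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[[3, 11, 27]] = [2.0, -1.5, 1.0]      # a sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(n)

def lasso_admm(A, b, lam=0.5, rho=10.0, iters=500):
    """ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n, p = A.shape
    x, z, u = np.zeros(p), np.zeros(p), np.zeros(p)
    # factor the x-update system (A^T A + rho I) once, reuse every iteration
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(p))
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
        u = u + x - z
    return z

x_hat = lasso_admm(A, b)   # sparse estimate; support should match x_true
```

The soft-thresholding step is what produces exact zeros, i.e. a delineated support, which is the property the abstract exploits to estimate source extent.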
DOE Office of Scientific and Technical Information (OSTI.GOV)
Usang, M. D., E-mail: mark-dennis@nuclearmalaysia.gov.my; Hamzah, N. S.; Abi, M. J. B.
ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results for the total radioactivity of the fuel, the composition of activated materials, the composition of fission products, and the photon spectrum of the burned fuel. We investigate the differences in results between the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly so that the differences between the ORIGEN and WIMS yields are reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.
Cannon, Robert C; Gleeson, Padraig; Crook, Sharon; Ganapathy, Gautham; Marin, Boris; Piasini, Eugenio; Silver, R Angus
2014-01-01
Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, it is essential that such models can be reproduced and critically evaluated by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), that can define the structure and dynamics of a wide range of biological models in a fully machine readable format. We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of the base dimensions. We show how LEMS, together with the open source Java and Python based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties.
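The dimension-tracking idea behind LEMS's unit handling, where every quantity carries exponents of the base dimensions so that inconsistent expressions are rejected, can be sketched in a few lines. The `Quantity` class below is illustrative, not the LEMS or NeuroML API.

```python
from dataclasses import dataclass

# Exponents are over the SI base units (m, kg, s, A); e.g. a volt is
# kg*m^2*s^-3*A^-1, so its dimension vector is (2, 1, -3, -1).
@dataclass(frozen=True)
class Quantity:
    value: float
    dims: tuple

    def __add__(self, other):
        if self.dims != other.dims:          # adding volts to farads is an error
            raise ValueError("dimension mismatch")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

VOLT = (2, 1, -3, -1)
FARAD = (-2, -1, 4, 2)

v = Quantity(-65e-3, VOLT)      # a resting membrane potential
c = Quantity(1e-12, FARAD)      # a membrane capacitance
q = v * c                       # charge: dims come out as the coulomb, A*s
```

Because dimensions are tracked structurally, a model exchanged between simulators cannot silently mix, say, millivolts with amperes; the mismatch surfaces at composition time rather than as a wrong simulation result.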
Golabian, A; Hosseini, M A; Ahmadi, M; Soleimani, B; Rezvanifard, M
2018-01-01
Miniature neutron source reactors (MNSRs) are among the safest and most economical research reactors, with potential for use in neutron studies. This manuscript explores the feasibility of 177Lu production in the Isfahan MNSR reactor using the direct production route. In this study, to assess the specific activity of the produced radioisotope, a simulation was carried out with the MCNPX2.6 code. The simulation was validated by irradiating a lutetium disc (99.98% chemical purity) at a thermal neutron flux of 5 × 10^11 n cm^-2 s^-1 for an irradiation time of 4 min. After spectrometry of the irradiated sample, the experimental results of 177Lu production were compared with the simulation results. In addition, a factor extracted from the simulation was substituted into the related equations in order to calculate specific activity through a multi-stage approach and using different irradiation techniques. The results showed that the simulation technique designed in this study agrees with the experimental approach (with a difference of approximately 3%). It was also found that the maximum 177Lu production at the maximum flux and irradiation time allows access to 723.5 mCi/g after 27 cycles. Furthermore, the comparison of irradiation techniques showed that increasing the irradiation time is more effective for 177Lu production efficiency than increasing the number of irradiation cycles, because a longer irradiation time postpones saturation of the product. On the other hand, it was shown that the choice of an appropriate irradiation technique for 177Lu production can be economically important in terms of effective fuel consumption in the reactor. Copyright © 2017 Elsevier Ltd. All rights reserved.
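The single-step activation measured here follows the standard build-up law A(t) = Nσφ(1 − e^(−λt)). The back-of-the-envelope sketch below uses rounded literature values for the 176Lu abundance, cross-section, and 177Lu half-life (assumptions, not the paper's MCNPX inputs) and illustrates why longer continuous irradiation shows diminishing returns as the product approaches saturation.

```python
import math

# Rounded literature values (assumptions, not the paper's calibrated data):
SIGMA = 2090e-24          # 176Lu(n,gamma) thermal cross-section, cm^2 (~2090 b)
PHI = 5e11                # thermal neutron flux, n cm^-2 s^-1 (from the abstract)
T_HALF = 6.65 * 86400     # 177Lu half-life, s
LAM = math.log(2) / T_HALF
N176 = 0.0259 * 6.022e23 / 174.97   # 176Lu atoms per gram of natural Lu

def activity(t):
    """177Lu activity (Bq per gram of natural Lu) after t seconds in the flux."""
    return N176 * SIGMA * PHI * (1.0 - math.exp(-LAM * t))

a_1wk = activity(7 * 86400)
a_2wk = activity(14 * 86400)
# doubling the irradiation time yields less than twice the activity,
# because the product decays as it builds up toward saturation
saturation = N176 * SIGMA * PHI   # roughly 2.5 Ci/g with these rounded inputs
```

The same expression also shows why split irradiation cycles underperform continuous irradiation: decay during the cooling gaps discards part of each cycle's build-up.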
Ciobanu, O
2009-01-01
The objective of this study was to obtain three-dimensional (3D) images and to perform biomechanical simulations starting from DICOM images obtained by computed tomography (CT). Open source software was used to prepare digitized 2D images of tissue sections and to create 3D reconstructions from the segmented structures. Finally, the 3D images were used in open source software to perform biomechanical simulations. This study demonstrates the applicability and feasibility of currently available open source software for 3D reconstruction and biomechanical simulation. The use of open source software may improve the efficiency of investments in imaging technologies and in CAD/CAM technologies for implant and prosthesis fabrication, which otherwise require expensive specialized software.
NASA Astrophysics Data System (ADS)
Davoine, X.; Bocquet, M.
2007-03-01
The reconstruction of the Chernobyl accident source term has been previously carried out using core inventories, but also back-and-forth comparisons between model simulations and activity concentration or deposited activity measurements. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
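The L-curve tuning step can be illustrated on a toy linear inversion: sweep the regularization scale and trace the residual norm against the solution norm. The snippet below uses plain Tikhonov regularization on synthetic data; the paper's actual inversion uses a maximum-entropy-on-the-mean functional with positivity, so everything here is a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((50, 30))                 # toy source-receptor matrix
x_true = np.maximum(rng.standard_normal(30), 0)   # a positive source profile
y = G @ x_true + 0.1 * rng.standard_normal(50)    # noisy "measurements"

alphas = np.logspace(-3, 2, 20)                   # regularisation scales to sweep
res_norm, sol_norm = [], []
for a in alphas:
    # Tikhonov solution: x = argmin ||G x - y||^2 + a^2 ||x||^2
    x = np.linalg.solve(G.T @ G + a**2 * np.eye(30), G.T @ y)
    res_norm.append(np.linalg.norm(G @ x - y))
    sol_norm.append(np.linalg.norm(x))
# plotting log(res_norm) against log(sol_norm) traces the L-shaped curve;
# its corner balances data fit against solution size
```

As the regularization scale grows, the fit degrades monotonically while the solution norm shrinks; the corner of that trade-off curve is the "balanced" parameter choice the abstract refers to.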
Cockpit display of hazardous weather information
NASA Technical Reports Server (NTRS)
Hansman, R. John, Jr.; Wanke, Craig
1990-01-01
Information transfer and display issues associated with the dissemination of hazardous weather warnings are studied in the context of windshear alerts. Operational and developmental windshear detection systems are briefly reviewed. The July 11, 1988 microburst events observed as part of the Denver Terminal Doppler Weather Radar (TDWR) operational evaluation are analyzed in terms of information transfer and the effectiveness of the microburst alerts. Information transfer, message content and display issues associated with microburst alerts generated from ground based sources are evaluated by means of pilot opinion surveys and part task simulator studies.
Cockpit display of hazardous weather information
NASA Technical Reports Server (NTRS)
Hansman, R. John, Jr.; Wanke, Craig
1989-01-01
Information transfer and display issues associated with the dissemination of hazardous-weather warnings are studied in the context of wind-shear alerts. Operational and developmental wind-shear detection systems are briefly reviewed. The July 11, 1988 microburst events observed as part of the Denver TDWR operational evaluation are analyzed in terms of information transfer and the effectiveness of the microburst alerts. Information transfer, message content, and display issues associated with microburst alerts generated from ground-based sources (Doppler radars, LLWAS, and PIREPS) are evaluated by means of pilot opinion surveys and part-task simulator studies.
Behavioral, psychiatric, and sociological problems of long-duration space missions
NASA Technical Reports Server (NTRS)
Kanas, N. A.; Fedderson, W. E.
1971-01-01
A literature search was conducted in an effort to isolate the problems that might be expected on long-duration space missions. Primary sources of the search include short-term space flights, submarine tours, Antarctic expeditions, isolation-chamber tests, space-flight simulators, and hypodynamia studies. Various stressors are discussed including weightlessness and low sensory input; circadian rhythms (including sleep); confinement, isolation, and monotony; and purely psychiatric and sociological considerations. Important aspects of crew selection are also mentioned. An attempt is made to discuss these factors with regard to a prototype mission to Mars.
Sources, Transport, and Climate Impacts of Biomass Burning Aerosols
NASA Technical Reports Server (NTRS)
Chin, Mian
2010-01-01
In this presentation, I will first talk about the fundamentals of modeling biomass burning emissions of aerosols, then show the results of GOCART model simulations of biomass burning aerosols. I will compare the model results with observations from satellites and ground-based networks in terms of total aerosol optical depth, aerosol absorption optical depth, and vertical distributions. Finally, the long-range transport of biomass burning aerosols and their climate effects will be addressed. I will also discuss the uncertainties associated with modeling and observations of biomass burning aerosols.
Dictionary-Based Tensor Canonical Polyadic Decomposition
NASA Astrophysics Data System (ADS)
Cohen, Jeremy Emile; Gillis, Nicolas
2018-04-01
To ensure interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.
Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)
NASA Astrophysics Data System (ADS)
Kasibhatla, P.
2004-12-01
In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
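The MCMC idea is easy to show in miniature: a random-walk Metropolis sampler drawing from the posterior of a single source strength given noisy observations. The Gaussian model, step size, and chain length below are illustrative, not the TransCom3 CO2 setup.

```python
import numpy as np

rng = np.random.default_rng(2)
s_true = 3.0
obs = s_true + 0.5 * rng.standard_normal(200)   # noisy observations of a source

def log_post(s):
    # flat prior; Gaussian likelihood with known sigma = 0.5
    return -0.5 * np.sum((obs - s) ** 2) / 0.5 ** 2

s, lp = 0.0, log_post(0.0)
samples = []
for _ in range(20000):
    prop = s + 0.1 * rng.standard_normal()      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        s, lp = prop, lp_prop
    samples.append(s)
post_mean = np.mean(samples[5000:])             # discard burn-in
```

The appeal noted in the abstract is that nothing here requires the posterior to have a closed form: replacing `log_post` with a non-Gaussian error model changes only one function, while the sampling loop is untouched.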
Compound simulator IR radiation characteristics test and calibration
NASA Astrophysics Data System (ADS)
Li, Yanhong; Zhang, Li; Li, Fan; Tian, Yi; Yang, Yang; Li, Zhuo; Shi, Rui
2015-10-01
Hardware-in-the-loop simulation can reproduce, in the laboratory, the physical radiation of targets and interference sources and the interception process of a product in flight. Simulating the environment is particularly difficult when the radiation energy is high and the interference model is complicated. Here, the development of an IR scene generator based on a fiber array imaging transducer with circumferential lamp spot sources is introduced. Its IR simulation capability includes effective simulation of aircraft signatures and point-source IR countermeasures. Two point sources acting as interference can move in random directions in two dimensions. To simulate the interference-release process, the radiation and motion characteristics were tested. After zero calibration of the simulator's optical axis, the radiation can be accurately projected onto the product's detector. The test and calibration results show that the new compound simulator can be used in hardware-in-the-loop simulation trials.
Particle-in-cell code library for numerical simulation of the ECR source plasma
NASA Astrophysics Data System (ADS)
Shirkov, G.; Alexandrov, V.; Preisendorf, V.; Shevtsov, V.; Filippov, A.; Komissarov, R.; Mironov, V.; Shirkova, E.; Strekalovsky, O.; Tokareva, N.; Tuzikov, A.; Vatulin, V.; Vasina, E.; Fomin, V.; Anisimov, A.; Veselov, R.; Golubev, A.; Grushin, S.; Povyshev, V.; Sadovoi, A.; Donskoi, E.; Nakagawa, T.; Yano, Y.
2003-05-01
The project "Numerical simulation and optimization of ion accumulation and production in multicharged ion sources" is funded by the International Science and Technology Center (ISTC). A summary of recent project development and the first version of a computer code library for simulation of electron-cyclotron resonance (ECR) source plasmas based on the particle-in-cell method are presented.
NASA Astrophysics Data System (ADS)
Becker, J. G.; Seagren, E. A.
2006-12-01
The presence of dense non-aqueous phase liquids (DNAPLs) at many chlorinated ethene-contaminated sites can greatly extend the time frames needed to reduce dissolved contaminants to regulatory levels using bioremediation. However, it has been demonstrated that mass removal from chlorinated ethene DNAPLs can potentially be enhanced through dehalorespiration of dissolved contaminants near the NAPL-water interface. Although promising, the amount of "bioenhancement" that can be achieved under optimal conditions is currently not known, and the real significance and engineering potential of this phenomenon currently are not well understood, in part because it can be influenced by a complex set of factors, including DNAPL properties, hydrodynamics, substrate concentrations, and microbial competition for growth substrates. In this study it is hypothesized that: (1) different chlorinated ethene-respiring strains may dominate within different zones of a contaminant plume emanating from a DNAPL source zone due to variations in substrate availability, and microbial competition for chlorinated ethenes and/or electron donors; and (2) the outcome of competitive interactions near the DNAPL source zone will affect the longevity of DNAPL source zones by influencing the degree of dissolution bioenhancement, while the outcome of competitive interactions further downgradient will determine the extent of contaminant dechlorination. To demonstrate the validity of the proposed hypothesis, a series of simple, "proof-of-concept," mathematical simulations evaluating the effects of competitive interactions on the distribution of dehalorespirers at the DNAPL-water interface, the dissolution of tetrachloroethene (PCE), and extent of PCE detoxification were performed in a model competition scenario, in which Dehalococcoides ethenogenes and another dehalorespirer (Desulfuromonas michiganensis) compete for the electron acceptor (PCE) and/or electron donor. 
The model domain for this evaluation simulates a contaminant-source zone consisting of DNAPL ganglia trapped in a subsurface porous medium that slowly releases organic pollutants into the groundwater flowing past it. The model used in the simulations was based on a biokinetic model recently developed by Becker [Environ. Sci. Technol. 40(14):4473-4480] to describe competition among PCE-respiring populations in a homogenous continuously-stirred tank reactor. Becker's model was expanded by adding terms for chlorinated ethene partitioning between the DNAPL and aqueous phases, as well as advection and dispersion of aqueous chlorinated ethenes. The results of these preliminary simulations demonstrate that the outcome of competition between populations for growth substrates can have a significant impact on bioenhancement and, thus, on DNAPL source zone longevity. Although these proof-of- concept simulations do not incorporate all of the complexity of actual field systems, the modeling results are useful for identifying which parameters are important in determining the outcome of competition in the different scenarios and its impact on DNAPL dissolution. This information is needed to understand how biostimulation and bioaugmentation affect bioenhancement by stimulating different populations and develop bioremediation strategies that incorporate these treatment technologies while balancing the twin clean-up goals of reduced source longevity and complete detoxification.
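The competition outcome described above, where one strain can dominate near the source and another downgradient, can be reproduced with a minimal Monod growth model at fixed substrate levels. All kinetic parameters below are illustrative, not Becker's calibrated values, and the well-mixed two-strain setup stands in for the full advection-dispersion model.

```python
import numpy as np

# Two populations competing for one dissolved substrate (PCE) under Monod
# kinetics; strain 0 has the higher maximum rate, strain 1 the higher
# substrate affinity (lower Ks). Parameter values are illustrative.
qmax = np.array([1.0, 0.6])    # maximum specific utilisation rates, 1/d
Ks = np.array([0.05, 0.005])   # half-saturation constants, mmol/L
Y = np.array([0.1, 0.1])       # biomass yields
decay = 0.05                   # first-order biomass decay, 1/d

def winner_at(S, days=500.0, dt=0.01):
    """Index of the dominant strain after `days` at fixed substrate level S."""
    X = np.array([0.01, 0.01])
    for _ in range(int(days / dt)):
        mu = qmax * S / (Ks + S)
        X = X + dt * (Y * mu - decay) * X   # explicit Euler step
    return int(np.argmax(X))

# high substrate near the DNAPL favours the fast strain,
# low substrate downgradient favours the high-affinity strain
near_source = winner_at(1.0)    # -> 0
downgradient = winner_at(0.04)  # -> 1
```

Holding the substrate fixed mimics continuous dissolution from the NAPL (near the interface) versus a dilute plume fringe (downgradient); the flip in the winner is the qualitative mechanism behind the zonation hypothesis.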
NASA Astrophysics Data System (ADS)
Handayani, Noer Abyor; Luthfansyah, M.; Krisanti, Elsa; Kartohardjono, Sutrasno; Mulia, Kamarza
2017-11-01
Dietary modification, supplementation, and food fortification are common strategies to alleviate iron deficiencies. Fortification of food is an effective long-term approach to improve the iron status of populations. Adding iron directly to food, however, causes sensory problems and decreases its bioavailability. The purpose of iron encapsulation is: (1) to improve iron bioavailability, by preventing oxidation and contact with inhibitors and competitors; and (2) to disguise the rancid aroma and flavor of iron. A microcapsule formulation of two suitable iron compounds (iron(II) fumarate and iron(II) gluconate) using chitosan as a biodegradable polymer is therefore of considerable interest. A freeze dryer was used to complete the iron microencapsulation process. The main objective of the present study was to prepare and characterize iron-chitosan microcapsules. Physical characterization (encapsulation efficiency, iron loading capacity, and SEM imaging) is also discussed in this paper. The stability of microencapsulated iron under simulated gastrointestinal conditions was investigated as well. Both iron sources were highly encapsulated, with efficiencies ranging from 71.5% to 98.5%. Furthermore, the highest ferrous fumarate and ferrous gluconate loadings were 1.9% and 4.8%, respectively. About 1.04% to 9.17% of Fe(II) was released in simulated gastric fluid over two hours, and 45.17% to 75.19% of total Fe was released in simulated intestinal fluid over six hours.
Signal to noise quantification of regional climate projections
NASA Astrophysics Data System (ADS)
Li, S.; Rupp, D. E.; Mote, P.
2016-12-01
One of the biggest challenges in interpreting climate model outputs for impacts studies and adaptation planning is understanding the sources of disagreement among models (which is often used imperfectly as a stand-in for system uncertainty). Internal variability is a primary source of uncertainty in climate projections, especially for precipitation, for which models disagree about even the sign of changes in large areas like the continental US. Taking advantage of a large initial-condition ensemble of regional climate simulations, this study quantifies the magnitude of changes forced by increasing greenhouse gas concentrations relative to internal variability. Results come from a large initial-condition ensemble of regional climate model simulations generated by weather@home, a citizen science computing platform, where the western United States climate was simulated for the recent past (1985-2014) and future (2030-2059) using a 25-km horizontal resolution regional climate model (HadRM3P) nested in global atmospheric model (HadAM3P). We quantify grid point level signal-to-noise not just in temperature and precipitation responses, but also the energy and moisture flux terms that are related to temperature and precipitation responses, to provide important insights regarding uncertainty in climate change projections at local and regional scales. These results will aid modelers in determining appropriate ensemble sizes for different climate variables and help users of climate model output with interpreting climate model projections.
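The grid-point signal-to-noise diagnostic is simple to state: forced change (ensemble-mean future minus past) divided by the internal-variability spread across members. The synthetic ensemble below stands in for the weather@home runs; sizes and the imposed change are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
members, ny, nx = 50, 4, 5          # ensemble size and a tiny grid
forced = 1.0                        # imposed forced change, arbitrary units
past = rng.standard_normal((members, ny, nx))
future = forced + rng.standard_normal((members, ny, nx))

signal = future.mean(axis=0) - past.mean(axis=0)          # forced response
noise = np.sqrt(0.5 * (past.var(axis=0, ddof=1)
                       + future.var(axis=0, ddof=1)))     # pooled spread
snr = signal / noise                # grid-point signal-to-noise ratio
```

For variables like precipitation, where `noise` is large relative to `signal`, this ratio also indicates how many ensemble members are needed before the forced change emerges, since the uncertainty of the ensemble-mean change shrinks like 1/sqrt(members).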
George, D.L.
2011-01-01
The simulation of advancing flood waves over rugged topography, by solving the shallow-water equations with well-balanced high-resolution finite volume methods and block-structured dynamic adaptive mesh refinement (AMR), is described and validated in this paper. The efficiency of block-structured AMR makes large-scale problems tractable, and allows the use of accurate and stable methods developed for solving general hyperbolic problems on quadrilateral grids. Features indicative of flooding in rugged terrain, such as advancing wet-dry fronts and non-stationary steady states due to balanced source terms from variable topography, present unique challenges and require modifications such as special Riemann solvers. A well-balanced Riemann solver for inundation and general (non-stationary) flow over topography is tested in this context. The difficulties of modeling floods in rugged terrain, and the rationale for and efficacy of using AMR and well-balanced methods, are presented. The algorithms are validated by simulating the Malpasset dam-break flood (France, 1959), which has served as a benchmark problem previously. Historical field data, laboratory model data and other numerical simulation results (computed on static fitted meshes) are shown for comparison. The methods are implemented in GEOCLAW, a subset of the open-source CLAWPACK software. All the software is freely available at. Published in 2010 by John Wiley & Sons, Ltd.
Proxy system modeling of tree-ring isotope chronologies over the Common Era
NASA Astrophysics Data System (ADS)
Anchukaitis, K. J.; LeGrande, A. N.
2017-12-01
The Asian monsoon can be characterized in terms of both precipitation variability and atmospheric circulation across a range of spatial and temporal scales. While multicentury time series of tree-ring widths at hundreds of sites across Asia provide estimates of past rainfall, the oxygen isotope ratios of annual rings may reveal broader regional hydroclimate and atmosphere-ocean dynamics. Tree-ring oxygen isotope chronologies from Monsoon Asia have been interpreted to reflect a local 'amount effect', relative humidity, source water and seasonality, and winter snowfall. Here, we use an isotope-enabled general circulation model simulation from the NASA Goddard Institute for Space Studies (GISS) Model E and a proxy system model of the oxygen isotope composition of tree-ring cellulose to interpret the large-scale and local climate controls on δ18O chronologies. Broad-scale dominant signals are associated with a suite of covarying hydroclimate variables including growing season rainfall amounts, relative humidity, and vapor pressure deficit. Temperature and source water influences are region-dependent, as are the simulated tree-ring isotope signals associated with the El Niño-Southern Oscillation (ENSO) and large-scale indices of the Asian monsoon circulation. At some locations, including southern coastal Viet Nam, local precipitation isotope ratios and the resulting simulated δ18O tree-ring chronologies reflect upstream rainfall amounts and atmospheric circulation associated with monsoon strength and wind anomalies.
PENTACLE: Parallelized particle-particle particle-tree code for planet formation
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Oshino, Shoichi; Fujii, Michiko S.; Hori, Yasunori
2017-10-01
We have newly developed a parallelized particle-particle particle-tree code for planet formation, PENTACLE, which is a parallelized hybrid N-body integrator executed on a CPU-based (super)computer. PENTACLE uses a fourth-order Hermite algorithm to calculate gravitational interactions between particles within a cut-off radius and a Barnes-Hut tree method for gravity from particles beyond. It also implements an open-source library designed for full automatic parallelization of particle simulations, FDPS (Framework for Developing Particle Simulator), to parallelize a Barnes-Hut tree algorithm for a memory-distributed supercomputer. These allow us to handle 1-10 million particles in a high-resolution N-body simulation on CPU clusters for collisional dynamics, including physical collisions in a planetesimal disc. In this paper, we show the performance and the accuracy of PENTACLE in terms of the cut-off radius R̃_cut and the time step Δt. It turns out that the accuracy of a hybrid N-body simulation is controlled through Δt/R̃_cut, and that Δt/R̃_cut ≈ 0.1 is necessary to simulate accurately the accretion process of a planet for ≥ 10^6 yr. For all those interested in large-scale particle simulations, PENTACLE, customized for planet formation, will be freely available from https://github.com/PENTACLE-Team/PENTACLE under the MIT licence.
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. Because buried structures are not directly observable, their complex shapes can only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming to assess the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content.
In particular, we investigate the existence of a minimum roughness length scale, in terms of rupture-inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature a classical linear slip-weakening friction law on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast-velocity-weakening friction law will also be considered.
NASA Astrophysics Data System (ADS)
Pappas, E. P.; Moutsatsos, A.; Pantelis, E.; Zoros, E.; Georgiou, E.; Torrens, M.; Karaiskos, P.
2016-02-01
This work presents a comprehensive Monte Carlo (MC) simulation model for the Gamma Knife Perfexion (PFX) radiosurgery unit. Model-based dosimetry calculations were benchmarked, in terms of relative dose profiles (RDPs) and output factors (OFs), against corresponding EBT2 measurements. To reduce the rather prolonged computational time associated with the comprehensive PFX model MC simulations, two approximations were explored and evaluated on the grounds of dosimetric accuracy. The first consists of directional biasing of the 60Co photon emission, while the second refers to the implementation of simplified source geometry models. The effect of the dose-scoring volume dimensions on the accuracy of OF calculations was also explored. RDP calculations for the comprehensive PFX model were found to be in agreement with corresponding EBT2 measurements. Output factors of 0.819 ± 0.004 and 0.8941 ± 0.0013 were calculated for the 4 mm and 8 mm collimators, respectively, which agree, within uncertainties, with corresponding EBT2 measurements and published experimental data. Volume averaging was found to affect OF results by more than 0.3% for scoring volume radii greater than 0.5 mm and 1.4 mm for the 4 mm and 8 mm collimators, respectively. Directional biasing of photon emission resulted in a time efficiency gain factor of up to 210 with respect to isotropic photon emission. Although no considerable effect on relative dose profiles was detected, directional biasing led to OF overestimations, which were more pronounced for the 4 mm collimator and increased with decreasing emission cone half-angle, reaching up to 6% for a 5° angle. Implementation of simplified source models revealed that omitting the sources' stainless steel capsule significantly affects both OF results and relative dose profiles, while the aluminum-based bushing did not exhibit a considerable dosimetric effect.
In conclusion, the results of this work suggest that any PFX simulation model should be benchmarked in terms of both RDP and OF results.
NASA Astrophysics Data System (ADS)
Galeano, D. C.; Cavalcante, F. R.; Carvalho, A. B.; Hunt, J.
2014-02-01
The dose conversion coefficient (DCC) is important for quantifying and assessing effective doses associated with medical, occupational and public exposures. The calculation of DCCs using anthropomorphic simulators and radiation transport codes is justified because in-vivo measurement of effective dose is extremely difficult and not practical for occupational dosimetry. DCCs have been published by the ICRP using simulators in a standing posture, which is not applicable to all exposure scenarios and can therefore provide an inaccurate dose estimate. The aim of this work was to calculate DCCs for equivalent dose in terms of air kerma (H/Kair) using the Visual Monte Carlo (VMC) code and the VOXTISS8 adult male voxel simulator in sitting and standing postures. In both postures, the simulator was irradiated by a plane source of monoenergetic photons in antero-posterior (AP) geometry, with photon energies ranging from 15 keV to 2 MeV. The DCCs for the two postures were compared, and those for the standing simulator were higher. For certain organs the differences in DCCs were more significant, as in the gonads (48% higher), bladder (16% higher) and colon (11% higher). Because these organs are positioned in the abdominal region, the posture of the anthropomorphic simulator changes the way in which the radiation is transported and the energy is deposited. It was also noted that the average percentage difference of the conversion coefficients was 33% for the bone marrow, 11% for the skin, 13% for the bone surface and 31% for the muscle. For other organs, the percentage difference between the DCCs for the two postures was not relevant (less than 5%), because there are no anatomical changes in the organs of the head, chest and upper abdomen. We conclude that it is important to obtain DCCs using postures other than those present in the scientific literature.
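The quantity compared above, equivalent dose per unit air kerma (H/Kair), is a simple ratio of Monte Carlo tallies. A minimal sketch follows; the function names are ours and the numeric values in the usage are illustrative, not taken from the VMC runs.

```python
def dose_conversion_coefficients(organ_dose_per_photon, air_kerma_per_photon):
    """DCC = equivalent organ dose per unit air kerma (Sv/Gy), with both
    tallies normalised per source photon."""
    return {organ: dose / air_kerma_per_photon
            for organ, dose in organ_dose_per_photon.items()}

def percent_higher(dcc_standing, dcc_sitting):
    """How much higher the standing-posture DCC is, in percent of the
    sitting-posture value (the comparison reported in the abstract)."""
    return {organ: 100.0 * (dcc_standing[organ] - dcc_sitting[organ])
            / dcc_sitting[organ]
            for organ in dcc_standing}
```

With hypothetical tallies giving a standing gonad DCC 1.48 times the sitting one, `percent_higher` reproduces the 48% figure cited in the abstract.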
Guarendi, Andrew N.; Chandy, Abhilash J.
2013-01-01
Numerical simulations of magnetohydrodynamic (MHD) hypersonic flow over a cylinder are presented for axial- and transverse-oriented dipoles with different strengths. ANSYS CFX is used to carry out calculations for steady, laminar flows at a Mach number of 6.1, with a model for electrical conductivity as a function of temperature and pressure. The low magnetic Reynolds number (≪1) calculated based on the velocity and length scales in this problem justifies the quasistatic approximation, which assumes a negligible effect of velocity on magnetic fields. Therefore, the governing equations employed in the simulations are the compressible Navier-Stokes and energy equations with MHD-related source terms such as the Lorentz force and Joule dissipation. The results demonstrate the ability of the magnetic field to affect the flowfield around the cylinder, resulting in an increase in shock stand-off distance and a reduction in overall temperature. A noticeable decrease in drag is also observed with the addition of the magnetic field.
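The MHD source terms named above enter the momentum and energy equations as the Lorentz force J × B and the Joule dissipation |J|²/σ. A minimal sketch of evaluating them at one grid point, assuming the current density J, magnetic field B and conductivity σ are already known (the function names are ours, not ANSYS CFX's):

```python
def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def mhd_source_terms(J, B, sigma):
    """Quasistatic (low magnetic Reynolds number) MHD source terms:
    the Lorentz force J x B for the momentum equation and the Joule
    dissipation |J|^2 / sigma for the energy equation."""
    lorentz = cross(J, B)
    joule = sum(c * c for c in J) / sigma
    return lorentz, joule
```

In a quasistatic solver these terms are added to the right-hand sides of the momentum and energy equations at each cell without solving an induction equation, consistent with the approximation described in the abstract.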
NASA Astrophysics Data System (ADS)
Semenov, Z. V.; Labusov, V. A.
2017-11-01
Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
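The forward problem mentioned above, predicting the reflection spectrum of a partially deposited coating, reduces in the simplest case to the Airy formula for a single homogeneous layer at normal incidence. The sketch below is a stand-in for the full multilayer machinery of the OptiReOpt library, which is not reproduced here; absorption and dispersion are ignored.

```python
import cmath
import math

def reflectance_single_layer(n0, n1, n2, d, wavelength):
    """Normal-incidence reflectance of a single non-absorbing layer
    (index n1, thickness d) on a substrate (index n2) in a medium (n0),
    via the standard Airy summation of multiple reflections."""
    r01 = (n0 - n1) / (n0 + n1)        # Fresnel coefficient, medium/layer
    r12 = (n1 - n2) / (n1 + n2)        # Fresnel coefficient, layer/substrate
    # Round-trip phase factor e^{-2i*delta}, delta = 2*pi*n1*d/lambda
    phase = cmath.exp(-2j * (2 * math.pi * n1 * d / wavelength))
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return abs(r) ** 2
```

Indirect monitoring inverts such a forward model: the measured reflectance over a wide spectral range is fitted for the layer thickness, so any noise or calibration error in the spectrum propagates into the thickness estimate, which is exactly the error budget the abstract studies.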
X-ray optics simulation and beamline design for the APS upgrade
NASA Astrophysics Data System (ADS)
Shi, Xianbo; Reininger, Ruben; Harder, Ross; Haeffner, Dean
2017-08-01
The upgrade of the Advanced Photon Source (APS) to a Multi-Bend Achromat (MBA) lattice will increase the brightness of the APS by two to three orders of magnitude. The APS Upgrade (APS-U) project includes a list of feature beamlines that will take full advantage of the new machine. Many of the existing beamlines will also be upgraded to profit from this significant machine enhancement. Optics simulations are essential in the design and optimization of these new and existing beamlines. In this contribution, the simulation tools used and developed at the APS, ranging from analytical to numerical methods, are summarized. Three general optical layouts are compared in terms of their coherence control and focusing capabilities. The concept of zoom optics, in which two sets of focusing elements (e.g., CRLs and KB mirrors) are used to provide variable beam sizes at a fixed focal plane, is optimized analytically. The effects of figure errors on the vertical spot size and on the local coherence along the vertical direction of the optimized design are investigated.
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
NASA Astrophysics Data System (ADS)
Heimann, F. U. M.; Rickenmann, D.; Turowski, J. M.; Kirchner, J. W.
2014-07-01
Especially in mountainous environments, the prediction of sediment dynamics is important for managing natural hazards, assessing in-stream habitats, and understanding geomorphic evolution. We present the new modelling tool sedFlow for simulating fractional bedload transport dynamics in mountain streams. The model can deal with the effects of adverse slopes and uses state-of-the-art approaches for quantifying macro-roughness effects in steep channels. Local grain size distributions are dynamically adjusted according to the transport dynamics of each grain size fraction. The tool sedFlow features fast calculations and straightforward pre- and postprocessing of simulation data. The model is provided, together with its complete source code, free of charge under the terms of the GNU General Public License (www.wsl.ch/sedFlow). Examples of the application of sedFlow are given in a companion article by Heimann et al. (2014).
FLUKA Monte Carlo simulations and benchmark measurements for the LHC beam loss monitors
NASA Astrophysics Data System (ADS)
Sarchiapone, L.; Brugger, M.; Dehning, B.; Kramer, D.; Stockner, M.; Vlachoudis, V.
2007-10-01
One of the crucial elements in terms of machine protection for CERN's Large Hadron Collider (LHC) is its beam loss monitoring (BLM) system. On-line loss measurements must prevent the superconducting magnets from quenching and protect the machine components from damage due to unforeseen critical beam losses. In order to ensure the BLM system's design quality, detailed FLUKA Monte Carlo simulations were performed for the betatron collimation insertion in the final design phase of the LHC. In addition, benchmark measurements were carried out with LHC-type BLMs installed at the CERN-EU high-energy Reference Field facility (CERF). This paper presents results of FLUKA calculations performed for BLMs installed in the collimation region, compares the results of the CERF measurements with FLUKA simulations, and evaluates the related uncertainties. This, together with the fact that the CERF source spectra at the respective BLM locations are comparable with those at the LHC, allows assessing the sensitivity of the performed LHC design studies.
Large historical growth in global terrestrial gross primary production
Campbell, J. E.; Berry, J. A.; Seibt, U.; ...
2017-04-05
Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle-climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. We interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including a large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth, it provides a global-scale benchmark for historical carbon cycle simulations.
CONVECTIVE BABCOCK-LEIGHTON DYNAMO MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miesch, Mark S.; Brown, Benjamin P., E-mail: miesch@ucar.edu
We present the first global, three-dimensional simulations of solar/stellar convection that take into account the influence of magnetic flux emergence by means of the Babcock-Leighton (BL) mechanism. We have shown that the inclusion of a BL poloidal source term in a convection simulation can promote cyclic activity in an otherwise steady dynamo. Some cycle properties are reminiscent of solar observations, such as the equatorward propagation of toroidal flux near the base of the convection zone. However, the cycle period in this young sun (rotating three times faster than the solar rate) is very short (≈6 months), and it is unclear whether much longer cycles may be achieved within this modeling framework, given the high efficiency of field generation and transport by the convection. Even so, the incorporation of mean-field parameterizations in three-dimensional convection simulations to account for elusive processes such as flux emergence may well prove useful in the future modeling of solar and stellar activity cycles.
Schiestl-Aalto, Pauliina; Kulmala, Liisa; Mäkinen, Harri; Nikinmaa, Eero; Mäkelä, Annikki
2015-04-01
Whether tree growth in a varying environment is controlled by carbon sources or by carbon sinks remains unresolved, although it is widely studied. This study investigates the growth of tree components and carbon sink-source dynamics at different temporal scales. We constructed a dynamic growth model, 'carbon allocation sink source interaction' (CASSIA), that calculates the tree-level carbon balance from photosynthesis, respiration, phenology and temperature-driven potential structural growth of tree organs, together with the dynamics of stored nonstructural carbon (NSC) and their modifying influence on growth. With the model, we tested the hypotheses that sink demand explains the intra-annual growth dynamics of the meristems, and that source supply is further needed to explain year-to-year growth variation. The predicted intra-annual dimensional growth of shoots and needles and the number of cells in xylogenesis phases corresponded with measurements, whereas NSC hardly limited growth, supporting the first hypothesis. A delayed GPP influence on potential growth was necessary for simulating the yearly growth variation, also indicating at least an indirect source limitation. CASSIA combines seasonal growth and carbon balance dynamics with long-term source dynamics affecting growth, and thus provides a first step towards understanding the complex processes regulating intra- and interannual growth and sink-source dynamics. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
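The sink-source logic described above, with storage buffering the difference between photosynthetic supply and growth demand, can be sketched as a toy daily carbon balance. This is our own illustrative formulation, not CASSIA's: the fluxes, the half-saturation constant, and the down-regulation rule are all assumptions.

```python
def simulate_nsc(days, gpp, respiration, potential_growth, nsc0=10.0):
    """Toy daily carbon balance in the spirit of a sink-source model:
    storage (NSC) gains photosynthate and loses respiration plus growth,
    and growth is scaled down when storage runs low.  All quantities are
    in arbitrary carbon units."""
    nsc = nsc0
    realised_growth = []
    for d in range(days):
        # Sink down-regulation: growth approaches its potential only
        # when NSC is large relative to the (assumed) half-saturation of 5.
        limit = nsc / (nsc + 5.0)
        g = potential_growth[d] * limit
        nsc += gpp[d] - respiration[d] - g
        nsc = max(nsc, 0.0)
        realised_growth.append(g)
    return nsc, realised_growth
```

Under ample supply the realised growth tracks the sink-driven potential (the first hypothesis in the abstract); only when supply is cut does storage drawdown throttle growth, mimicking a source limitation.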
Four receptor-oriented source apportionment models were evaluated by applying them to simulated personal exposure data for select volatile organic compounds (VOCs) that were generated by Monte Carlo sampling from known source contributions and profiles. The exposure sources mo...
Computer model to simulate ionizing radiation effects correlates with experimental data
NASA Astrophysics Data System (ADS)
Perez-Poch, Antoni
Exposure to radiation from high-energy protons and particles with ionizing properties is a major challenge for long-term space missions. The specific effect of such radiation on hematopoietic cells is still not fully understood. A number of experiments have been conducted on the ground and in space. These experiments, on the one hand, measure the extent of damage to blood markers; on the other hand, they aim to quantify the correlation between the dose and energy of the radiation particles and their ability to impair hematopoietic stem and progenitor cell function. We present a computer model based on a neural network that assesses the relationship between dose, energy and number of hits on a particular cell, and the damage incurred to human marrow cells. Calibration of the network is performed with the existing experimental data available in the literature. Different sources of ionizing radiation at different doses (0-90 cGy) and along different patterns of long-term exposure scenarios are simulated. Results are shown for a continuous variation of doses and are compared with specific data available in the literature. Some predictions are inferred for long-term spaceflight scenarios, and the risk of jeopardizing a mission due to a major dysfunction of the bone marrow is calculated. The method has proved successful in reproducing specific experimental data. We also discuss the significance and validity of the predicted ionizing radiation effects in situations such as long-term missions over a continuous range of doses.
NASA Astrophysics Data System (ADS)
Oshima, Kazuhiro; Ogata, Koto; Park, Hotaek; Tachibana, Yoshihiro
2018-05-01
River discharges from Siberia are a large source of freshwater for the Arctic Ocean, yet the cause of the long-term variation in Siberian discharges is still unclear. The observed river discharges of the Lena in the east and the Ob in the west showed different relationships in each of the epochs during the past seven decades. The correlations between the two river discharges were negative during the 1980s to mid-1990s, positive during the mid-1950s to 1960s, and weak after the mid-1990s. Longer records of tree-ring-reconstructed discharges have also shown differences in the correlations in each epoch. It is noteworthy that the correlations obtained from the reconstructions tend to be negative over the past two centuries. Such a tendency has also been obtained from precipitation in observations, and in simulations with an atmospheric general circulation model (AGCM) and with fully coupled atmosphere-ocean GCMs conducted for the Fourth Assessment Report of the IPCC. The AGCM control simulation further demonstrated that an east-west seesaw pattern of summertime large-scale atmospheric circulation frequently emerges over Siberia as an atmospheric internal variability. This results in opposite precipitation anomalies over the Lena and the Ob, and hence the negative correlation. Consequently, the summertime atmospheric internal variability in the east-west seesaw pattern over Siberia is a key factor influencing the long-term variation in precipitation and river discharge, i.e., the water cycle in this region.
A large and ubiquitous source of atmospheric formic acid
NASA Astrophysics Data System (ADS)
Millet, D. B.; Baasandorj, M.; Farmer, D. K.; Thornton, J. A.; Baumann, K.; Brophy, P.; Chaliyakunnel, S.; de Gouw, J. A.; Graus, M.; Hu, L.; Koss, A.; Lee, B. H.; Lopez-Hilfiker, F. D.; Neuman, J. A.; Paulot, F.; Peischl, J.; Pollack, I. B.; Ryerson, T. B.; Warneke, C.; Williams, B. J.; Xu, J.
2015-06-01
Formic acid (HCOOH) is one of the most abundant acids in the atmosphere, with an important influence on precipitation chemistry and acidity. Here we employ a chemical transport model (GEOS-Chem CTM) to interpret recent airborne and ground-based measurements over the US Southeast in terms of the constraints they provide on HCOOH sources and sinks. Summertime boundary layer concentrations average several parts per billion, 2-3× larger than can be explained based on known production and loss pathways. This indicates one or more large missing HCOOH sources, and suggests either a key gap in current understanding of hydrocarbon oxidation or a large, unidentified, direct flux of HCOOH. Model-measurement comparisons implicate biogenic sources (e.g., isoprene oxidation) as the predominant HCOOH source. Resolving the unexplained boundary layer concentrations (i) solely through isoprene oxidation would require a 3× increase in the model HCOOH yield, or (ii) solely through direct HCOOH emissions would require approximately a 25× increase in its biogenic flux. However, neither of these can explain the high HCOOH amounts seen in anthropogenic air masses and in the free troposphere. The overall indication is of a large biogenic source combined with ubiquitous chemical production of HCOOH across a range of precursors. Laboratory work is needed to better quantify the rates and mechanisms of carboxylic acid production from isoprene and other prevalent organics. Stabilized Criegee intermediates (SCIs) provide a large model source of HCOOH, while acetaldehyde tautomerization accounts for ~15% of the simulated global burden. Because carboxylic acids also react with SCIs and catalyze the reverse tautomerization reaction, HCOOH buffers against its own production by both of these pathways.
Based on recent laboratory results, reaction between CH3O2 and OH could provide a major source of atmospheric HCOOH; however, including this chemistry degrades the model simulation of CH3OOH and of the NOx:CH3OOH relationship. Developing better constraints on SCI and RO2 + OH chemistry is a high priority for future work. The model captures neither the large diurnal amplitude in HCOOH seen in surface air nor its inverted vertical gradient at night. This implies a substantial bias in our current representation of deposition as modulated by boundary layer dynamics, and may indicate an underestimate of the HCOOH sink and thus an even larger missing source. A more robust treatment of surface deposition is a key need for improving simulations of HCOOH and related trace gases, and our understanding of their budgets.
Water Vapor Tracers as Diagnostics of the Regional Hydrologic Cycle
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)
2001-01-01
Numerous studies suggest that local feedback of evaporation on precipitation, or recycling, is a significant source of water for precipitation. Quantitative results on the exact amount of recycling have been difficult to obtain in view of the inherent limitations of diagnostic recycling calculations. The current study describes a calculation of the amount of local and remote sources of water for precipitation, based on the implementation of passive constituent tracers of water vapor (termed water vapor tracers, WVTs) in a general circulation model. In this case, the major limitation on the accuracy of the recycling estimates is the veracity of the numerically simulated hydrological cycle, though we note that this approach can also be implemented within the context of a data assimilation system. In this approach, each WVT is associated with an evaporative source region and tracks the water until it precipitates from the atmosphere. By assuming that the regional water is well mixed with water from other sources, the physical processes that act on the WVT are determined in proportion to those that act on the model's prognostic water vapor. In this way, the local and remote sources of water for precipitation can be computed within the model simulation and validated against the model's prognostic water vapor. Furthermore, estimates of precipitation recycling can be compared with bulk diagnostic approaches. As a demonstration of the method, the regional hydrologic cycles for North America and India are evaluated for six summers (June, July and August) of model simulation. More than 50% of the precipitation in the Midwestern United States came from continental regional tracers, and the local source was the largest of the regional tracers (14%). The Gulf of Mexico and Atlantic regions contributed 18% of the water for Midwestern precipitation, but further analysis suggests that the greater region of the Tropical Atlantic Ocean may also contribute significantly.
In general, most North American land regions showed a positive correlation between evaporation and recycling ratio (except the Southeast United States) and negative correlations of recycling ratio with precipitation and moisture transport (except the Southwestern United States). The Midwestern local source is positively correlated with local evaporation, but it is not correlated with water vapor transport. This is contrary to bulk diagnostic estimates of precipitation recycling. In India, the local source of precipitation is a small percentage of the precipitation owing to the dominance of the atmospheric transport of oceanic water. The southern Indian Ocean provides a key source of water for both the Indian continent and the Sahelian region.
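The well-mixed assumption described above has a simple consequence: each tracer contributes to precipitation in proportion to its share of the total column vapor. A toy bookkeeping step, with hypothetical source names and vapor amounts (not the study's values):

```python
def tracer_precipitation(precip_total, q_tracers):
    """Well-mixed water vapor tracer bookkeeping: each evaporative-source
    tracer loses water to precipitation in proportion to its share of the
    total vapor.  Dividing the 'local' contribution by precip_total gives
    the precipitation recycling ratio."""
    q_total = sum(q_tracers.values())
    return {src: precip_total * q / q_total
            for src, q in q_tracers.items()}
```

Because the same proportionality is applied to every physical process acting on the prognostic vapor, the tracer contributions always sum to the total precipitation, which is what allows them to be validated against the model's own water budget.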
NASA Astrophysics Data System (ADS)
Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei
2016-03-01
In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.
Research on starlight hardware-in-the-loop simulator
NASA Astrophysics Data System (ADS)
Zhang, Ying; Gao, Yang; Qu, Huiyang; Liu, Dongfang; Du, Huijie; Lei, Jie
2016-10-01
Starlight navigation is considered one of the most important methods for spacecraft navigation. The starlight simulation system is a high-precision system with a large field of view, designed to test starlight navigation sensor performance on the ground. A complete hardware-in-the-loop simulation of the system has been built. The starlight simulator is made up of a light source, a light source controller, a light filter, an LCD, a collimator and a control computer. The LCD is the key display component of the system and is installed at the focal point of the collimator. Because the LCD cannot emit light itself, a light source and a light source power controller are specially designed to supply the brightness demanded by the LCD. The light filter provides the dark background that is also needed in the simulation.
NASA Astrophysics Data System (ADS)
Steenhuis, T. S.; Azzaino, Z.; Hoang, L.; Pacenka, S.; Worqlul, A. W.; Mukundan, R.; Stoof, C.; Owens, E. M.; Richards, B. K.
2017-12-01
The New York City source watersheds in the Catskill Mountains' humid, temperate climate have long-term hydrological and water quality monitoring records. This is one of the few regions where implementation of source and landscape management practices has led to decreased phosphorus concentrations in the receiving surface waters. One reason is that landscape measures correctly targeted the saturated variable source runoff areas (VSA) in the valley bottoms as the location where most of the runoff and other nonpoint pollutants originated; measures targeting these areas were instrumental in lowering phosphorus concentrations. Further improvements in water quality can be made with a better understanding of the flow processes and water table fluctuations in the VSA. For that reason, we instrumented a self-contained upland variable source watershed with a landscape characteristic of the Catskill watersheds: soil underlain by glacial till at shallow depth. In this presentation, we discuss our experimental findings and present a mathematical model. Variable source areas have small slopes, making gravity the driving force for the flow and greatly simplifying the simulation of the flow processes. The experimental data and the model simulations agreed for both outflow and water table fluctuations. We found that while flows to the outlet were similar throughout the year, the discharge of the VSA varied greatly. This was due to transpiration by the plants, which became active when soil temperatures rose above 10 °C. Shortly after the temperature increased above 10 °C, the baseflow stopped, and surface runoff occurred only when rainstorms exceeded the storage capacity of the soil in at least a portion of the variable source area.
Since plant growth in the variable source area was a major variable determining baseflow behavior, future changes in temperature, by altering the duration of the growing season, will affect baseflow and the related transport of nutrients and other chemicals many times more than the small temperature-related increases in potential evaporation rate. This in turn will directly change water availability and pollutant transport in the many surface source watersheds with variable source area hydrology.
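The threshold behavior described above can be sketched as a minimal daily water balance: gravity drainage supplies baseflow only while soil is cool, and once transpiration activates, outflow occurs only as saturation-excess runoff. This is an illustrative sketch, not the presented model; the storage capacity, drainage coefficient, transpiration rate, and the 10 °C switch are assumed placeholder values.

```python
def vsa_daily(storage, rain, soil_temp, capacity=50.0, k=0.05, et=3.0):
    """One hypothetical day of a variable-source-area water balance.

    storage, rain in mm; soil_temp in deg C.
    Returns (new_storage, baseflow, runoff) in mm.
    """
    storage += rain
    runoff = max(storage - capacity, 0.0)   # saturation-excess surface runoff
    storage -= runoff
    if soil_temp < 10.0:
        # dormant season: gravity-driven drainage sustains baseflow
        baseflow = k * storage
        storage -= baseflow
    else:
        # growing season: transpiration consumes storage; baseflow ceases
        baseflow = 0.0
        storage = max(storage - et, 0.0)
    return storage, baseflow, runoff
```

With these placeholder parameters, a day with 40 mm of initial storage, 20 mm of rain, and 5 °C soil produces 10 mm of runoff and 2.5 mm of baseflow; at 15 °C the baseflow term vanishes and only the 10 mm of saturation-excess runoff remains, mirroring the seasonal switch reported in the abstract.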
Modeling, Simulation, and Forecasting of Subseasonal Variability
NASA Technical Reports Server (NTRS)
Waliser, Duane; Schubert, Siegfried; Kumar, Arun; Weickmann, Klaus; Dole, Randall
2003-01-01
A planning workshop on "Modeling, Simulation and Forecasting of Subseasonal Variability" was held in June 2003. This workshop was the first of a number of meetings planned to follow the NASA-sponsored workshop entitled "Prospects For Improved Forecasts Of Weather And Short-Term Climate Variability On Sub-Seasonal Time Scales" held in April 2002. The 2002 workshop highlighted a number of key sources of unrealized predictability on subseasonal time scales, including tropical heating, soil wetness, the Madden-Julian Oscillation (MJO) [a.k.a. Intraseasonal Oscillation (ISO)], the Arctic Oscillation (AO), and the Pacific/North American (PNA) pattern. The overarching objective of the 2003 follow-up workshop was to proceed with a number of recommendations made at the 2002 workshop, as well as to set an agenda and collate efforts in the areas of modeling, simulating, and forecasting intraseasonal and short-term climate variability. More specifically, the aims of the 2003 workshop were to: 1) develop a baseline of the "state of the art" in subseasonal prediction capabilities, 2) implement a program to carry out experimental subseasonal forecasts, and 3) develop strategies for tapping the above sources of predictability by focusing research, model development, and the development/acquisition of new observations on the subseasonal problem. The workshop was held over two days and was attended by over 80 scientists, modelers, forecasters, and agency personnel. The agenda of the workshop focused on issues related to the MJO and tropical-extratropical interactions as they relate to the subseasonal simulation and prediction problem.
This included the development of plans for a coordinated set of GCM hindcast experiments to assess current model subseasonal prediction capabilities and shortcomings, an emphasis on developing a strategy to rectify shortcomings associated with tropical intraseasonal variability, namely diabatic processes, and continuing the implementation of an experimental forecast and model development program that focuses on one of the key sources of untapped predictability, namely the MJO. The tangible outcomes of the meeting included: 1) the development of a recommended framework for a set of multi-year ensembles of 45-day hindcasts to be carried out by a number of GCMs so that they can be analyzed with regard to their representations of subseasonal variability, predictability, and forecast skill, 2) an assessment of the present status of GCM representations of the MJO and recommendations for future steps to remedy the remaining shortcomings in these representations, and 3) a final implementation plan for a multi-institute/multi-nation Experimental MJO Prediction Program.
Simulation of dissolved nutrient export from the Dongjiang river basin with a grid-based NEWS model
NASA Astrophysics Data System (ADS)
Rong, Qiangqiang; Su, Meirong; Yang, Zhifeng; Cai, Yanpeng; Yue, Wencong; Dang, Zhi
2018-06-01
In this research, a grid-based NEWS model was proposed by coupling a geographic information system (GIS) with the Global NEWS model framework. The model was then applied to the Dongjiang River basin to simulate the dissolved nutrient export from this area. The model results showed that the total dissolved nitrogen and phosphorus exported from the Dongjiang River basin were approximately 27,154.87 and 1,389.33 t, respectively, and about 90% of both loads were in inorganic forms (i.e., dissolved inorganic nitrogen and phosphorus, DIN and DIP). The nutrient export loads were not evenly distributed across the basin: the main stream watershed of the Dongjiang River basin had the largest DIN and DIP export loads, while the largest dissolved organic nitrogen and phosphorus (DON and DOP) loads were observed in the middle and upper stream watersheds of the basin, respectively. Different sources contributed differently to the export of each nutrient form from each subbasin. For the DIN load, fertilizer application, atmospheric deposition, and biological fixation were the three main contributors, while eluviation was the most important source of DON. For the DIP load, fertilizer application and breeding wastewater were the main contributors, while eluviation and fertilizer application were the two main sources of DOP.
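The per-cell, per-source accumulation that a grid-based NEWS-style model performs can be sketched as follows. Everything here is a hypothetical placeholder: the field names, the example numbers, and the single lumped export fraction stand in for the actual Global NEWS formulation, which uses form-specific export functions and basin hydrology.

```python
# Hypothetical sketch: sum each grid cell's DIN inputs (fertilizer,
# atmospheric deposition, biological fixation), scale by an assumed
# export fraction, and accumulate the exported load per subbasin.

cells = [
    {"subbasin": "main",  "fertilizer": 12.0, "deposition": 3.0, "fixation": 2.0},
    {"subbasin": "main",  "fertilizer": 8.0,  "deposition": 2.5, "fixation": 1.5},
    {"subbasin": "upper", "fertilizer": 4.0,  "deposition": 1.0, "fixation": 0.5},
]  # inputs in t N per cell (illustrative numbers only)

EXPORT_FRACTION = 0.3  # assumed fraction of inputs reaching the river

def din_export_by_subbasin(cells, frac=EXPORT_FRACTION):
    """Aggregate exported DIN load (t N) per subbasin."""
    totals = {}
    for c in cells:
        inputs = c["fertilizer"] + c["deposition"] + c["fixation"]
        totals[c["subbasin"]] = totals.get(c["subbasin"], 0.0) + frac * inputs
    return totals
```

Ranking the resulting per-subbasin totals is how statements like "the main stream watershed had the largest DIN export load" fall out of such a gridded accounting.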
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI), conducted by the International Atomic Energy Agency (IAEA) on 18-20 November 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from the workshop as well as several new contributions. A total of 17 papers have been selected, on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion, and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic, and empirical Green's function approaches) to the engineering application of simulated ground motion in analyzing the seismic response of structures. These contributions include applications to real earthquakes, descriptions of current practice for assessing seismic hazard in terms of nuclear safety in areas of low seismicity, and proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models for evaluating and understanding the physical causes of observed and empirical data, as well as for predicting ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.