Collaborative modeling: the missing piece of distributed simulation
NASA Astrophysics Data System (ADS)
Sarjoughian, Hessam S.; Zeigler, Bernard P.
1999-06-01
The Department of Defense's overarching goal of performing distributed simulation by overcoming geographic and time constraints has brought the problem of distributed modeling to the forefront. The High Level Architecture standard is primarily intended for simulation interoperability. However, a distributed modeling infrastructure plays a fundamental and central role in supporting the development of distributed simulations. In this paper, we describe some fundamental distributed modeling concepts and their implications for constructing successful distributed simulations. In addition, we discuss the Collaborative DEVS Modeling environment, which has been devised to enable geographically dispersed modelers to collaborate and synthesize modular and hierarchical models. We provide an actual example of the use of Collaborative DEVS Modeler in a project involving corporate partners developing an HLA-compliant distributed simulation exercise.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly treating simulated responses as if they were deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models, as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
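The residual-reintroduction idea above can be sketched in a few lines. The synthetic "observed" data, the noise-free "simulation", and the simple bootstrap resampling are illustrative assumptions, not the authors' actual models:

```python
import random
import statistics

random.seed(42)

# Hypothetical "observed" responses: a deterministic signal plus noise.
n = 1000
signal = [10 + 5 * (i % 10) for i in range(n)]
observed = [s + random.gauss(0, 3) for s in signal]

# A calibrated model rarely reproduces the noise; here the deterministic
# "simulation" recovers only the signal, so its variance is biased low.
simulated = signal
residuals = [o - s for o, s in zip(observed, simulated)]

# Stochastic use: reintroduce bootstrap-resampled residuals into the output.
stochastic = [s + random.choice(residuals) for s in simulated]

var_obs = statistics.pvariance(observed)
var_det = statistics.pvariance(simulated)
var_sto = statistics.pvariance(stochastic)

# The deterministic output underestimates the observed variance;
# the stochastic version largely restores it.
assert var_det < var_obs
assert abs(var_sto - var_obs) < abs(var_det - var_obs)
```

The same mechanics carry over to any calibrated model: the resampled residuals restore the distributional spread that the deterministic output lacks.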
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Francois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled with a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been reported for simulating precipitation amount, and spatiotemporal differences exist in the applicability of different distribution models. Therefore, assessing the applicability of different distribution models is necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, Gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto distributions) are directly and indirectly evaluated on their ability to reproduce the original observed time series of precipitation amount. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify the performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function: the three-parameter models outperform the others, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of using functions with more parameters is not nearly as obvious, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling. The implications of choosing a distribution function for hydrological modeling and climate change impact studies are also discussed.
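The two-part generator discussed above (first-order two-state Markov chain for occurrence, mixed exponential for amounts) can be sketched as follows; the transition probabilities and mixture parameters are assumed values for illustration, not fits to the Quebec stations:

```python
import random

random.seed(1)

# Assumed occurrence parameters (not fitted to any station):
P_WET_GIVEN_WET = 0.70   # P(wet today | wet yesterday)
P_WET_GIVEN_DRY = 0.25   # P(wet today | dry yesterday)

def mixed_exponential_amount(p=0.8, mu1=2.0, mu2=15.0):
    """Amount model: mixture of two exponentials (light vs heavy events, mm)."""
    mu = mu1 if random.random() < p else mu2
    return random.expovariate(1.0 / mu)

def generate_precip(n_days):
    """First-order two-state Markov chain drives occurrence; wet days get
    an amount drawn from the mixed exponential distribution."""
    series, wet = [], False
    for _ in range(n_days):
        p = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
        wet = random.random() < p
        series.append(mixed_exponential_amount() if wet else 0.0)
    return series

series = generate_precip(10000)
wet_fraction = sum(1 for x in series if x > 0.0) / len(series)
# Stationary wet-day frequency: p01 / (1 + p01 - p11) = 0.25 / 0.55 ≈ 0.45
assert 0.40 < wet_fraction < 0.51
```

Swapping `mixed_exponential_amount` for a single exponential or Gamma draw is how the competing amount models in the study would plug into the same occurrence chain.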
Cyber Physical System Modelling of Distribution Power Systems for Dynamic Demand Response
NASA Astrophysics Data System (ADS)
Chu, Xiaodong; Zhang, Rongxiang; Tang, Maosen; Huang, Haoyi; Zhang, Lei
2018-01-01
Dynamic demand response (DDR) is a package of control methods to enhance power system security. A CPS modelling and simulation platform for DDR in distribution power systems is presented in this paper. CPS modelling requirements of distribution power systems are analyzed. A coupled CPS modelling platform is built for assessing DDR in the distribution power system, which seamlessly combines modelling tools for physical power networks and cyber communication networks. Simulation results for the IEEE 13-node test system demonstrate the effectiveness of the modelling and simulation platform.
Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment
NASA Astrophysics Data System (ADS)
Zeigler, Bernard P.; Lee, J. S.
1998-08-01
In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user-friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
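The quantization idea, generating a state update only at quantum level crossings, can be illustrated with a minimal sketch; the trajectory and quantum size are arbitrary choices, not values from the paper:

```python
def quantized_updates(states, quantum):
    """Emit a state update only when the state crosses a quantum boundary."""
    updates, last_level = [], None
    for t, x in enumerate(states):
        level = int(x // quantum)
        if level != last_level:
            updates.append((t, level * quantum))
            last_level = level
    return updates

# A slowly rising continuous trajectory sampled at 100 time steps...
trajectory = [0.05 * t for t in range(100)]
# ...with quantum 1.0 generates only 5 update messages instead of 100.
msgs = quantized_updates(trajectory, quantum=1.0)
print(len(msgs), len(trajectory))  # 5 100
```

The ratio of messages to samples is exactly the traffic reduction the abstract refers to: a receiver that reconstructs the state between crossings never sees the 95 suppressed updates.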
Electric Power Distribution System Model Simplification Using Segment Substitution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat
Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.
Cellular Automata Simulation for Wealth Distribution
NASA Astrophysics Data System (ADS)
Lo, Shih-Ching
2009-08-01
The wealth distribution of a country is a complicated system. A model based on Epstein & Axtell's "Sugarscape" model is presented in NetLogo. The model considers income, age, working opportunity, and salary as control variables, though other variables should also be considered when an artificial society is established. In this study, a more elaborate cellular automata model of wealth distribution is proposed. The effects of social welfare, tax, economic investment, and inheritance are considered and simulated. The cellular automata simulation of wealth distribution offers deeper insight into the financial policy of the government.
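The redistribution mechanics (a flat tax collected and shared equally) can be sketched in a non-spatial toy model; this omits the cellular grid and most of the paper's control variables, and every parameter value is an assumption:

```python
import random

random.seed(7)

def gini(wealth):
    """Gini coefficient of a wealth list (0 = equal, 1 = maximal inequality)."""
    w = sorted(wealth)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n

def simulate(n_agents=500, steps=200, tax_rate=0.0):
    wealth = [1.0] * n_agents
    for _ in range(steps):
        # Unequal multiplicative income shocks drive inequality upward.
        for i in range(n_agents):
            wealth[i] *= random.uniform(0.9, 1.1)
        # Social policy: a flat tax collected and redistributed equally.
        if tax_rate > 0:
            pot = sum(w * tax_rate for w in wealth)
            share = pot / n_agents
            wealth = [w * (1 - tax_rate) + share for w in wealth]
    return wealth

g_no_tax = gini(simulate(tax_rate=0.0))
g_tax = gini(simulate(tax_rate=0.05))
assert g_tax < g_no_tax  # redistribution narrows the wealth distribution
```

A grid-based version would replace the flat agent list with cells whose neighbors exchange wealth locally, which is where the cellular automaton structure matters.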
Modeling the Effects of Solar Cell Distribution on Optical Cross Section for Solar Panel Simulation
2012-09-01
Feirstine, Kelly; Klein, Meiling
The solar panel was created as a CAD model and simulated with the imaging facility parameters in TASAT, which uses a BRDF to apply the reflectance properties of the solar cell material. The study models a solar panel with various solar cell tip and tilt distribution statistics, rather than as a single sheet of "solar cell" material.
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
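The scale of the burden follows directly from the numbers quoted: a yearlong run at 1-second resolution means one load-flow solution per second of simulated time. A quick estimate (the per-solve time below is an assumed figure, not from the report):

```python
# Back-of-the-envelope cost of a yearlong QSTS run at 1-second resolution.
SECONDS_PER_YEAR = 365 * 24 * 3600   # number of power flow solutions required
MS_PER_SOLVE = 5.0                   # assumed time to solve one power flow (ms)

total_hours = SECONDS_PER_YEAR * MS_PER_SOLVE / 1000.0 / 3600.0
print(SECONDS_PER_YEAR)        # 31536000 power flows
print(round(total_hours, 1))   # 43.8 hours, inside the reported 10-120 h range
```

At 31.5 million sequential solves, even millisecond-level improvements per solve translate into hours of wall time, which is why the report treats the number of power flows as the first challenge.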
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagerlöf, Jakob H., E-mail: Jakob@radfys.gu.se; Kindblom, Jon; Bernhardt, Peter
2014-09-15
Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature.
For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO2 (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO2 (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO2 (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
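The sampling step described above, trilinear interpolation of oxygen tension in the precomputed 3-D dataset, can be sketched generically; the lookup-table values below are hypothetical numbers, not the paper's data:

```python
import math

def trilinear(grid, x, y, z):
    """Trilinearly interpolate in a unit-spaced 3-D lookup table.
    grid[i][j][k] could hold simulated pO2 for (blood velocity, vessel
    proximity, inflowing pO2) indices, as in the sampling step above."""
    x0, y0, z0 = math.floor(x), math.floor(y), math.floor(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # Weight of each corner is the product of one-axis weights.
                w = ((dx if i else 1 - dx) *
                     (dy if j else 1 - dy) *
                     (dz if k else 1 - dz))
                value += w * grid[x0 + i][y0 + j][z0 + k]
    return value

# 2x2x2 table of hypothetical pO2 values (mmHg):
grid = [[[10, 12], [14, 16]], [[20, 22], [24, 26]]]
assert trilinear(grid, 0, 0, 0) == 10        # corner reproduces the table
assert trilinear(grid, 0.5, 0.5, 0.5) == 18.0  # center is the 8-corner mean
```

Randomly drawing the (x, y, z) coordinates and reading off the interpolated tension is what turns the 27-condition dataset into a continuous variable model.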
Applying simulation model to uniform field space charge distribution measurements by the PEA method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y.; Salama, M.M.A.
1996-12-31
Signals measured under uniform fields by the pulsed electroacoustic (PEA) method have been processed with a deconvolution procedure to obtain space charge distributions since 1988. To simplify data processing, a direct method has recently been proposed in which the deconvolution is eliminated. However, surface charge cannot be represented well by this method, because surface charge has a bandwidth extending from zero to infinity: the bandwidth of the charge distribution must be much narrower than the bandwidth of the PEA system transfer function for the direct method to apply properly. When surface charges cannot be distinguished from space charge distributions, the accuracy and resolution of the obtained space charge distributions decrease. To overcome this difficulty, a simulation model is proposed. This paper presents attempts to apply the simulation model to obtain space charge distributions under plane-plane electrode configurations. Owing to the page limitation, the charge distribution originating from the simulation model is compared to that obtained by the direct method for a set of simulated signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ravishankar, Udhay; Manic, Milos
2013-08-01
This paper presents a micro-grid simulator tool, SGridSim, useful for implementing and testing multi-agent controllers. As a common engineering practice, it is important to have a tool that simplifies the modeling of the salient features of a desired system. In electric micro-grids, these salient features are the voltage and power distributions within the micro-grid. Current simplified electric power grid simulator tools such as PowerWorld, PowerSim, Gridlab, etc., model only the power distribution features of a desired micro-grid. Other power grid simulators, such as Simulink, Modelica, etc., use detailed modeling to accommodate the voltage distribution features. The SGridSim micro-grid simulator tool presented here simplifies the modeling of both the voltage and power distribution features of a desired micro-grid. It accomplishes this simplified modeling by using Effective Node-to-Node Complex Impedance (EN2NCI) models of the components that typically make up a micro-grid. The term EN2NCI means that the impedance-based components of a micro-grid are modeled as single impedances tied between their respective voltage nodes on the micro-grid. The benefits of the presented SGridSim tool are that 1) simulation of a micro-grid is performed strictly in the complex domain, and 2) simulation of a micro-grid is faster because detailed transients are not simulated. An example micro-grid model was built using the SGridSim tool and tested to simulate both the voltage and power distribution features with a total absolute relative error of less than 6%.
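The complex-domain idea behind the EN2NCI models can be illustrated with a minimal two-bus example; the impedance and voltage values are arbitrary assumptions, and the real tool's node models are certainly richer:

```python
# Two-bus sketch of complex-domain (phasor) analysis: a source bus held at
# V1 feeds a load through one effective impedance, echoing the EN2NCI idea
# of a single complex impedance tied between voltage nodes.
V1 = complex(230.0, 0.0)       # slack-bus voltage (V)
Z_line = complex(0.5, 1.2)     # effective line impedance (ohm)
Z_load = complex(40.0, 10.0)   # load impedance (ohm)

# Voltage divider solved directly in the complex domain;
# no time-domain transients are simulated.
V2 = V1 * Z_load / (Z_line + Z_load)
I = (V1 - V2) / Z_line
S_load = V2 * I.conjugate()    # complex power delivered to the load (VA)

assert abs(V2) < abs(V1)       # voltage drop across the line
assert S_load.real > 0         # real power flows into the passive load
```

Because every quantity stays a single complex number per node pair, both the voltage and power distribution features fall out of one algebraic solve, which is the speed advantage the abstract claims over transient simulators.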
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
Skin fluorescence model based on the Monte Carlo technique
NASA Astrophysics Data System (ADS)
Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.
2003-10-01
A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The simulation results suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate skin fluorescence spectra.
Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area
NASA Astrophysics Data System (ADS)
Wang, W.; Rinke, A.; Moore, J. C.; Cui, X.; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D. M.; McGuire, A. D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.
2015-03-01
We perform a land surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies among six modern stand-alone land surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by five different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, and air and surface frost indexes). There is good agreement (99-135 x 10^4 km^2) between the two diagnostic methods based on air temperature, which are also consistent with the best current observation-based estimate of actual permafrost area (101 x 10^4 km^2). However, the uncertainty (1-128 x 10^4 km^2) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and surface frost index), while permafrost distribution using the air temperature based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations, likely related to soil texture specification and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0°C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperature and its seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and the surface frost index, analysis of simulated soil temperature profiles reveals substantial biases. The current generation of land surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in permafrost distribution can be made for the Tibetan Plateau.
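The most demanding diagnostic mentioned, soil temperature at or below 0°C for 24 consecutive months, is straightforward to express; the monthly temperature series below are invented for illustration:

```python
def is_permafrost(monthly_ground_temp_c):
    """True if ground temperature stays at or below 0 degC for at least
    24 consecutive months (the strictest diagnostic discussed above)."""
    run = 0
    for t in monthly_ground_temp_c:
        run = run + 1 if t <= 0.0 else 0
        if run >= 24:
            return True
    return False

# Three years of invented monthly ground temperatures (degC):
seasonal = [-10, -8, -4, -1, 2, 5, 7, 5, 1, -2, -6, -9] * 3   # summer thaw
frozen = [-12, -10, -6, -3, -1, -0.5, -0.2, -0.5, -1, -4, -8, -11] * 3

print(is_permafrost(seasonal), is_permafrost(frozen))  # False True
```

The criterion's sensitivity is visible here: a single above-zero summer month resets the run, so a model with even a small warm bias in its seasonal cycle will fail the diagnostic everywhere.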
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ainsworth, Nathan; Hariri, Ali; Prabakar, Kumaraguru
Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for design of PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation are then shown by experiment.
Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification
Morris, Jeffrey D.
2014-09-18
Dissertation, presented to the Faculty, Department of Systems …
NASA Astrophysics Data System (ADS)
Ravazzani, G.; Montaldo, N.; Mancini, M.; Rosso, R.
2003-04-01
Event-based hydrologic models need the antecedent soil moisture condition as a critical initial boundary condition for flood simulation. Land-surface models (LSMs) have been developed to simulate mass and energy transfers and to update the soil moisture condition through time from the solution of water and energy balance equations; they have recently been used in distributed hydrologic modeling for flood prediction systems. Recent developments have made LSMs more complex through the inclusion of more processes and controlling variables, increasing the number of parameters and the uncertainty of their estimates. This has also increased the computational burden and parameterization effort of distributed hydrologic models. In this study we investigate: 1) the role of soil moisture initial conditions in the modeling of Alpine basin floods; and 2) the adequate level of LSM complexity for the distributed hydrologic modeling of Alpine basin floods. The Toce basin is the case study; it is located in northern Piedmont (Italian Alps) and has a total drainage area of 1534 km2 at the Candoglia section. Three distributed hydrologic models of different levels of complexity are developed and compared: two (TDLSM and SDLSM) are continuous models, and one (FEST02) is an event model based on the simplified SCS-CN method for rainfall abstractions. In the TDLSM model, a two-layer LSM computes both saturation and infiltration excess runoff and simulates the evolution of the water table spatial distribution using the topographic index; in the SDLSM model, a simplified one-layer distributed LSM computes only Hortonian runoff and does not simulate the water table dynamics. All three hydrologic models simulate surface runoff propagation with the Muskingum-Cunge method. The TDLSM and SDLSM models have been applied to the two-year (1996 and 1997) simulation period, during which two major floods occurred, in November 1996 and June 1997.
The models have been calibrated and tested by comparing simulated and observed hydrographs at Candoglia. Sensitivity analyses of the models to the most significant LSM parameters were also performed, and the performances of the three models in simulating the two major floods are compared. Interestingly, the results indicate that the SDLSM model predicts the major floods of this Alpine basin sufficiently well; indeed, this model is a good compromise between the over-parameterized, overly complex TDLSM model and the over-simplified FEST02 model.
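The FEST02 event model above relies on the simplified SCS-CN method for rainfall abstractions. A minimal sketch of the standard SCS-CN relation (textbook formulation, not the authors' code; the curve number value is illustrative):

```python
def scs_cn_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) from event rainfall via the SCS-CN method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention S (mm)
    ia = 0.2 * s               # initial abstraction, conventional ratio 0.2*S
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

With CN = 80, a 100 mm storm yields roughly 50 mm of direct runoff; higher curve numbers (less permeable basins) produce more runoff from the same storm.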
NASA Astrophysics Data System (ADS)
Shaposhnikov, Dmitry S.; Rodin, Alexander V.; Medvedev, Alexander S.; Fedorova, Anna A.; Kuroda, Takeshi; Hartogh, Paul
2018-02-01
We present a new implementation of the hydrological cycle scheme in a general circulation model of the Martian atmosphere. The model includes a semi-Lagrangian transport scheme for water vapor and ice and accounts for the microphysics of phase transitions between them. The hydrological scheme includes the processes of saturation, nucleation, particle growth, sublimation, and sedimentation under the assumption of a variable size distribution. The scheme has been implemented in the Max Planck Institute Martian general circulation model and tested assuming monomodal and bimodal lognormal distributions of ice condensation nuclei. We present a comparison of the simulated annual variations and horizontal and vertical distributions of water vapor and ice clouds with the available observations from instruments on board Mars orbiters. Accounting for the bimodality of the aerosol particle distribution improves the simulation of the annual hydrological cycle, including the predicted ice cloud mass, opacity, number density, and particle radii. The increased number density and lower nucleation rates bring the simulated cloud opacities closer to observations. Simulations show a weak effect of the excess of small aerosol particles on the simulated water vapor distributions.
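The monomodal and bimodal lognormal nuclei distributions mentioned above can be sketched as a sum of lognormal modes; the mode parameters below are hypothetical placeholders, not the values used in the Martian GCM:

```python
import math

def lognormal_pdf(r, r_med, sigma_g):
    """Lognormal number distribution n(r), normalized to unit total number,
    with median radius r_med and geometric standard deviation sigma_g."""
    ln_sg = math.log(sigma_g)
    return (1.0 / (r * ln_sg * math.sqrt(2.0 * math.pi))
            * math.exp(-(math.log(r / r_med)) ** 2 / (2.0 * ln_sg ** 2)))

def bimodal(r, n1, r1, sg1, n2, r2, sg2):
    """Two superposed lognormal modes, e.g. small and large condensation nuclei."""
    return n1 * lognormal_pdf(r, r1, sg1) + n2 * lognormal_pdf(r, r2, sg2)
```

Each mode integrates to its number concentration, so the bimodal form integrates to n1 + n2.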
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.
Su, Peiran; Eri, Qitai; Wang, Qiang
2014-04-10
Optical roughness was introduced into a bidirectional reflectance distribution function (BRDF) model to simulate the reflectance characteristics of thermal radiation. The optical-roughness BRDF model stems from the influence of surface roughness and wavelength on the ray reflectance calculation. The model was adopted to simulate real metal emissivity, and the reverse Monte Carlo method was used to display the distribution of reflected rays. The numerical simulations showed that the optical-roughness BRDF model can capture the effect of wavelength on emissivity and simulate the variation of real metal emissivity with incidence angle.
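One classical way to couple surface roughness, wavelength, and incidence angle in a reflectance calculation is the Davies/Beckmann specular attenuation factor. It is shown here as a standard illustration of the roughness-wavelength dependence, not as the authors' exact BRDF:

```python
import math

def specular_attenuation(sigma_um: float, wavelength_um: float,
                         theta_rad: float) -> float:
    """Davies/Beckmann attenuation of the specular reflectance of a rough
    surface: exp(-(4*pi*sigma*cos(theta)/lambda)^2), where sigma is the RMS
    surface roughness. Both lengths must share units (micrometres here)."""
    x = 4.0 * math.pi * sigma_um * math.cos(theta_rad) / wavelength_um
    return math.exp(-x * x)
```

The factor tends to 1 (smooth-surface behavior) for long wavelengths or grazing incidence, which is why roughness matters most at short wavelengths and near-normal viewing.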
NASA Astrophysics Data System (ADS)
Lei, Fan; Li, Xiaoping; Liu, Yanming; Liu, Donglin; Yang, Min; Yu, Yuanyuan
2018-01-01
A two-dimensional axisymmetric inductively coupled plasma (ICP) model, implemented on the COMSOL multiphysics simulation platform, is described. Specifically, a large ICP generator filled with argon is simulated in this study. Distributions of electron number density and temperature are obtained and compared for various input power and pressure settings, and the electron trajectory distribution is also obtained in simulation. Finally, the simulation results are compared against experimental data to validate the two-dimensional fluid model. Approximate agreement was found (the variation tendencies are the same). The main reasons for the discrepancies in numerical magnitude are the assumption of Maxwellian and Druyvesteyn distributions for the electron energy and the incomplete set of collision cross sections and reaction rates for argon plasma.
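The Maxwellian and Druyvesteyn electron energy distributions mentioned above differ mainly in their high-energy tails. Both are instances of the generalized form f(ε) ∝ √ε·exp(−(ε/ε₀)^x), with x = 1 (Maxwellian) and x = 2 (Druyvesteyn); the energy grid and ε₀ below are illustrative:

```python
import math

def eedf(energies, e0, power):
    """Generalized electron energy distribution f(e) ~ sqrt(e)*exp(-(e/e0)^power).
    power=1 gives a Maxwellian shape, power=2 a Druyvesteyn shape. The values
    are normalized so the discrete sum over the supplied grid equals 1."""
    raw = [math.sqrt(e) * math.exp(-((e / e0) ** power)) for e in energies]
    total = sum(raw)
    return [v / total for v in raw]
```

For the same characteristic energy, the Druyvesteyn form has far fewer high-energy electrons, which strongly affects computed ionization and excitation rates.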
NASA Astrophysics Data System (ADS)
Gusman, A. R.; Satake, K.; Goto, T.; Takahashi, T.
2016-12-01
Estimating tsunami amplitude from tsunami sand deposits has been a challenge. The grain size distribution of a tsunami sand deposit may correlate with the tsunami inundation process, and further with its source characteristics. To test this hypothesis, we need a tsunami sediment transport model that can accurately estimate the grain size distribution of tsunami deposits. Here, we build and validate a tsunami sediment transport model that can simulate grain size distribution. Our numerical model has three layers: a suspended-load layer, an active bed layer, and a parent bed layer. The two bed layers contain information about the grain size distribution. The model can handle a wide range of grain sizes, from 0.063 mm (4 ϕ) to 5.657 mm (-2.5 ϕ). We apply the numerical model to simulate the sedimentation process during the 2011 Tohoku earthquake in Numanohama, Iwate prefecture, Japan. The grain size distributions at 15 sample points along a 900 m transect from the beach are used to validate the tsunami sediment transport model. The tsunami deposits are dominated by coarse sand with diameters of 0.5-1 mm, and their thickness is up to 25 cm. Our tsunami model reproduces well the observed tsunami run-ups, which range from 16 to 34 m along the steep valley in Numanohama. The shapes of the simulated grain size distributions at many sample points located within 300 m of the shoreline are similar to the observations, and the differences between the observed and simulated peaks of the grain size distributions are less than 1 ϕ. Our results also show that the simulated sand thickness distribution along the transect is consistent with the observation.
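The ϕ values quoted above follow the Krumbein scale, ϕ = −log₂(d / 1 mm): 0.0625 mm is exactly 4 ϕ, and 5.657 mm ≈ 2^2.5 mm is −2.5 ϕ. A minimal conversion helper:

```python
import math

def mm_to_phi(d_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(diameter in mm). Coarser grains
    have smaller (more negative) phi values."""
    return -math.log2(d_mm)

def phi_to_mm(phi: float) -> float:
    """Inverse conversion: diameter in mm from a phi value."""
    return 2.0 ** (-phi)
```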
APPLICATION OF A FULLY DISTRIBUTED WASHOFF AND TRANSPORT MODEL FOR A GULF COAST WATERSHED
Advances in hydrologic modeling have been shown to improve the accuracy of rainfall runoff simulation and prediction. Building on the capabilities of distributed hydrologic modeling, a water quality model was developed to simulate buildup, washoff, and advective transport of a co...
Sequential Computerized Mastery Tests--Three Simulation Studies
ERIC Educational Resources Information Center
Wiberg, Marie
2006-01-01
A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3-parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed with estimation errors in the item characteristics. The…
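A sketch of the 3-parameter logistic (3PL) model used to generate item responses in such studies (standard formulation with the conventional scaling constant D = 1.7; parameter values illustrative):

```python
import math

def p_3pl(theta: float, a: float, b: float, c: float,
          d_const: float = 1.7) -> float:
    """Probability of a correct response under the 3-parameter logistic IRT
    model: c + (1 - c) / (1 + exp(-D*a*(theta - b))), where theta is ability,
    a discrimination, b difficulty, and c the pseudo-guessing lower asymptote."""
    return c + (1.0 - c) / (1.0 + math.exp(-d_const * a * (theta - b)))
```

At theta = b the probability is the midpoint c + (1 − c)/2, and for very low ability it approaches the guessing floor c.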
NASA Astrophysics Data System (ADS)
Lamdjaya, T.; Jobiliong, E.
2017-01-01
PT Anugrah Citra Boga is a food processing company whose main product is meatballs. The distribution system for its products must be considered, because it needs to be more efficient in order to reduce shipment costs. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. First, the distribution routes are observed in order to calculate the average speed, time capacity and shipping costs. The model is then built using AIMMS software; the inputs required to simulate it are customer locations, distances, and process times. Finally, the total distribution cost obtained from the simulation is compared with the historical data. The results show that the company can reduce shipping costs by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate can also be made more optimal: the value for the first vehicle falls from 104.6% to 88.6% after the simulation, while the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route subject to time restrictions, vehicle capacity, and the number of vehicles.
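Capacitated vehicle routing is NP-hard, and the paper solves it with AIMMS. As a sketch of the problem structure only, here is a toy nearest-neighbor construction heuristic (not the method used in the paper, and not optimal):

```python
def greedy_cvrp(depot, customers, demand, capacity, dist):
    """Toy capacitated-VRP heuristic: repeatedly extend the current route to
    the nearest unvisited customer that still fits the vehicle's capacity,
    starting a fresh route from the depot when nothing fits. Assumes each
    single demand is at most the vehicle capacity."""
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, here = [], 0.0, depot
        while True:
            feasible = [c for c in unvisited if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(here, c))
            route.append(nxt)
            load += demand[nxt]
            unvisited.remove(nxt)
            here = nxt
        if not route:  # a customer's demand exceeds capacity; stop to avoid looping
            break
        routes.append(route)
    return routes
```

On a small instance this produces feasible routes immediately; exact solvers like the one in AIMMS then search for the cost-optimal assignment.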
The effect of modeled recharge distribution on simulated groundwater availability and capture
Tillman, Fred D.; Pool, Donald R.; Leake, Stanley A.
2015-01-01
Simulating groundwater flow in basin-fill aquifers of the semiarid southwestern United States commonly requires decisions about how to distribute aquifer recharge. Precipitation can recharge basin-fill aquifers by direct infiltration and transport through faults and fractures in the high-elevation areas, by flowing overland through high-elevation areas to infiltrate at basin-fill margins along mountain fronts, by flowing overland to infiltrate along ephemeral channels that often traverse basins in the area, or by some combination of these processes. The importance of accurately simulating recharge distributions is a current topic of discussion among hydrologists and water managers in the region, but no comparative study has been performed to analyze the effects of different recharge distributions on groundwater simulations. This study investigates the importance of the distribution of aquifer recharge in simulating regional groundwater flow in basin-fill aquifers by calibrating a groundwater-flow model to four different recharge distributions, all with the same total amount of recharge. Similarities are seen in results from steady-state models for optimized hydraulic conductivity values, fit of simulated to observed hydraulic heads, and composite scaled sensitivities of conductivity parameter zones. Transient simulations with hypothetical storage properties and pumping rates produce similar capture rates and storage change results, but differences are noted in the rate of drawdown at some well locations owing to the differences in optimized hydraulic conductivity. Depending on whether the purpose of the groundwater model is to simulate changes in groundwater levels or changes in storage and capture, the distribution of aquifer recharge may or may not be of primary importance.
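The four recharge scenarios in the study share one total recharge volume distributed differently across the basin. The bookkeeping for that constraint is simple to sketch; the zone names and weights here are hypothetical, not the study's calibrated values:

```python
def distribute_recharge(total: float, weights: dict) -> dict:
    """Scale a set of per-zone weights so the zone recharges sum to `total`.
    Zones might represent mountain-block infiltration, mountain-front
    recharge, and ephemeral-channel recharge."""
    s = sum(weights.values())
    return {zone: total * w / s for zone, w in weights.items()}
```

Because every scenario is normalized to the same total, differences in calibrated hydraulic conductivity then reflect the spatial pattern alone.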
Distributed Generation Market Demand Model (dGen): Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigrin, Benjamin; Gleason, Michael; Preus, Robert
The Distributed Generation Market Demand model (dGen) is a geospatially rich, bottom-up, market-penetration model that simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities in the continental United States through 2050. The National Renewable Energy Laboratory (NREL) developed dGen to analyze the key factors that will affect future market demand for distributed solar, wind, storage, and other DER technologies in the United States. The new model builds off, extends, and replaces NREL's SolarDS model (Denholm et al. 2009a), which simulates the market penetration of distributed PV only. Unlike the SolarDS model, dGen can model various DER technologies under one platform--it currently can simulate the adoption of distributed solar (the dSolar module) and distributed wind (the dWind module) and link with the ReEDS capacity expansion model (Appendix C). The underlying algorithms and datasets in dGen, which improve the representation of customer decision making as well as the spatial resolution of analyses (Figure ES-1), also are improvements over SolarDS.
Huang, Qiuhua; Vittal, Vijay
2018-05-09
Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission system level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increases. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables simulation switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. In conclusion, the results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.
Modeling and simulation of satellite subsystems for end-to-end spacecraft modeling
NASA Astrophysics Data System (ADS)
Schum, William K.; Doolittle, Christina M.; Boyarko, George A.
2006-05-01
During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems. Much of this research has occurred in the Distributed Architecture Simulation Laboratory (DASL). AFRL developers working in the DASL have effectively combined satellite power, attitude pointing, and communication link analysis subsystem models with robust satellite sensor models to create a first-order end-to-end satellite simulation capability. The merging of these two simulation areas has advanced the field of spacecraft simulation, design, and analysis, and enabled more in-depth mission and satellite utility analyses. A core capability of the DASL is the support of a variety of modeling and analysis efforts, ranging from physics and engineering-level modeling to mission and campaign-level analysis. The flexibility and agility of this simulation architecture will be used to support space mission analysis, military utility analysis, and various integrated exercises with other military and space organizations via direct integration, or through DOD standards such as Distributed Interactive Simulation. This paper discusses the results and lessons learned in modeling satellite communication link analysis, power, and attitude control subsystems for an end-to-end satellite simulation. It also discusses how these spacecraft subsystem simulations feed into and support military utility and space mission analyses.
Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area
Wang, A.; Moore, J.C.; Cui, Xingquan; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D.M.; McGuire, A.D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.
2016-01-01
We perform a land-surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies among six modern stand-alone land-surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by five different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, and air and surface frost indexes). There is good agreement (99 to 135 × 10⁴ km²) between the two diagnostic methods based on air temperature, which are also consistent with the observation-based estimate of actual permafrost area (101 × 10⁴ km²). However, the uncertainty (1 to 128 × 10⁴ km²) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and the surface frost index), while permafrost distribution using the air-temperature-based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations, likely related to soil texture specification, vegetation types, and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0 °C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperature and its seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and the surface frost index, analysis of simulated soil temperature profiles reveals substantial biases.
The current generation of land-surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in future permafrost distribution can be made for the Tibetan Plateau.
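One widely used air-temperature-based diagnostic in this class of studies is the air frost number of Nelson and Outcalt, computed from annual freezing and thawing degree-day sums. A sketch of that standard index (assumed formulation, not reproduced from the intercomparison code):

```python
import math

def air_frost_number(fdd: float, tdd: float) -> float:
    """Nelson & Outcalt air frost number from annual freezing (FDD) and
    thawing (TDD) degree-day sums (both positive, degree-days). Values of
    0.5 or greater are commonly taken to indicate a permafrost climate."""
    return math.sqrt(fdd) / (math.sqrt(fdd) + math.sqrt(tdd))
```

Because it needs only air temperature, the index sidesteps the ground-temperature biases that degrade the other three diagnostic methods.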
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price responsive load scenarios.
Reusable Component Model Development Approach for Parallel and Distributed Simulation
Zhu, Feng; Yao, Yiping; Chen, Huilong; Yao, Feng
2014-01-01
Model reuse is a key issue to be resolved in parallel and distributed simulation at present. However, component models built by different domain experts usually have diverse interfaces, are tightly coupled, and are closely bound to simulation platforms. As a result, they are difficult to reuse across different simulation platforms and applications. To address this problem, this paper first proposes a reusable component model framework. Based on this framework, our reusable model development approach is then elaborated, which contains two phases: (1) domain experts create simulation computational modules, observing three principles to achieve their independence; (2) a model developer encapsulates these simulation computational modules with six standard service interfaces to improve their reusability. The case study of a radar model indicates that a model developed using our approach has good reusability and is easy to use in different simulation platforms and applications. PMID:24729751
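The encapsulation idea can be sketched as a uniform abstract interface that every component model implements, so any platform can drive components generically. The six method names below are hypothetical illustrations, not the paper's exact service interfaces:

```python
from abc import ABC, abstractmethod

class SimComponent(ABC):
    """Hypothetical uniform wrapper for a reusable simulation component.
    A platform that knows only this interface can initialize, drive, and
    tear down any conforming component model."""

    @abstractmethod
    def initialize(self, config: dict) -> None: ...
    @abstractmethod
    def set_inputs(self, inputs: dict) -> None: ...
    @abstractmethod
    def advance(self, dt: float) -> None: ...
    @abstractmethod
    def get_outputs(self) -> dict: ...
    @abstractmethod
    def save_state(self) -> dict: ...
    @abstractmethod
    def finalize(self) -> None: ...
```

A component that implements these methods carries no platform-specific code, which is what makes it portable across simulation engines.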
NASA Technical Reports Server (NTRS)
Lin, D. S.; Wood, E. F.; Famiglietti, J. S.; Mancini, M.
1994-01-01
Spatial distributions of soil moisture over an agricultural watershed with a drainage area of 60 ha were derived from two NASA microwave remote sensors and then used to determine the initial condition for a distributed water balance model. Simulated hydrologic fluxes over a period of twelve days were compared with field observations and with model predictions based on a streamflow-derived initial condition. The results indicated that even low-resolution remotely sensed data can improve the hydrologic model's performance in simulating the dynamics of unsaturated-zone soil moisture. For the particular watershed under study, the simulated water budget was not sensitive to the resolution of the microwave sensors.
Huang, Wei; Shi, Jun; Yen, R T
2012-12-01
The objective of our study was to develop a program for computing the transit time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data for blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation: the connectivity data of the pulmonary blood vessels was converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate blood flow in the hierarchical structure of the pulmonary circulation, and to calculate transit time distributions and blood pressure outputs.
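The probability-matrix idea can be sketched as a Markov walk through vessel classes, accumulating a dwell time in each class visited. The transition probabilities, dwell times, and the exponential dwell assumption below are all illustrative, not the paper's calibrated anatomic data:

```python
import random

def transit_times(p_next, dwell, n_cells=5000, rng=None):
    """Monte-Carlo transit times through a branching vessel hierarchy.
    p_next[i] is a dict {j: probability} of moving from vessel class i to
    class j; dwell[i] is the mean residence time in class i. A cell starts
    in class 0 and finishes after its dwell in the terminal class (any
    class with no entry in p_next). Dwell times are drawn exponentially."""
    rng = rng or random.Random(42)
    times = []
    for _ in range(n_cells):
        i, t = 0, 0.0
        while i in p_next:
            t += rng.expovariate(1.0 / dwell[i])
            r, acc = rng.random(), 0.0
            for j, pj in p_next[i].items():
                acc += pj
                if r <= acc:
                    i = j
                    break
            else:
                i = list(p_next[i])[-1]  # guard against float rounding
        times.append(t + rng.expovariate(1.0 / dwell[i]))
    return times
```

With a realistic branching matrix, the histogram of the returned times is the transit time frequency distribution the study computes.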
NASA Technical Reports Server (NTRS)
Huang, Lei; Jiang, Jonathan H.; Murray, Lee T.; Damon, Megan R.; Su, Hui; Livesey, Nathaniel J.
2016-01-01
This study evaluates the distribution and variation of carbon monoxide (CO) in the upper troposphere and lower stratosphere (UTLS) during 2004-2012 as simulated by two chemical transport models, using the latest version of Aura Microwave Limb Sounder (MLS) observations. The simulated spatial distributions, temporal variations and vertical transport of CO in the UTLS region are compared with those observed by MLS. We also investigate the impact of surface emissions and deep convection on CO concentrations in the UTLS over different regions, using both model simulations and MLS observations. Global Modeling Initiative (GMI) and GEOS-Chem simulations of UTLS CO both show similar spatial distributions to observations. The global mean CO values simulated by both models agree with MLS observations at 215 and 147 hPa, but are significantly underestimated by more than 40% at 100 hPa. In addition, the models underestimate the peak CO values by up to 70% at 100 hPa, 60% at 147 hPa and 40% at 215 hPa, with GEOS-Chem generally simulating more CO at 100 hPa and less CO at 215 hPa than GMI. The seasonal distributions of CO simulated by both models are in better agreement with MLS in the Southern Hemisphere (SH) than in the Northern Hemisphere (NH), with disagreements between model and observations over enhanced CO regions such as southern Africa. The simulated vertical transport of CO shows better agreement with MLS in the tropics and the SH subtropics than the NH subtropics. We also examine regional variations in the relationships among surface CO emission, convection and UTLS CO concentrations. The two models exhibit emission-convection-CO relationships similar to those observed by MLS over the tropics and some regions with enhanced UTLS CO.
Constraining the noise-free distribution of halo spin parameters
NASA Astrophysics Data System (ADS)
Benson, Andrew J.
2017-11-01
Any measurement made using an N-body simulation is subject to noise due to the finite number of particles used to sample the dark matter distribution function, and the lack of structure below the simulation resolution. This noise can be particularly significant when attempting to measure intrinsically small quantities, such as halo spin. In this work, we develop a model to describe the effects of particle noise on halo spin parameters. This model is calibrated using N-body simulations in which the particle noise can be treated as a Poisson process on the underlying dark matter distribution function, and we demonstrate that this calibrated model reproduces measurements of halo spin parameter error distributions previously measured in N-body convergence studies. Utilizing this model, along with previous measurements of the distribution of halo spin parameters in N-body simulations, we place constraints on the noise-free distribution of halo spins. We find that the noise-free median spin is 3 per cent lower than that measured directly from the N-body simulation, corresponding to a shift of approximately 40 times the statistical uncertainty in this measurement arising purely from halo counting statistics. We also show that measurement of the spin of an individual halo to 10 per cent precision requires at least 4 × 10⁴ particles in the halo - for haloes containing 200 particles, the fractional error on spins measured for individual haloes is of order unity. N-body simulations should be viewed as the results of a statistical experiment applied to a model of dark matter structure formation. When viewed in this way, it is clear that determination of any quantity from such a simulation should be made through forward modelling of the effects of particle noise.
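The quoted precision figures are mutually consistent with Poisson particle noise scaling as N^(−1/2). Anchoring that scaling to the stated 10 per cent error at 4 × 10⁴ particles reproduces the order-unity error at 200 particles (the anchoring itself is an assumption for illustration):

```python
import math

def spin_fractional_error(n_particles: float,
                          err_ref: float = 0.1,
                          n_ref: float = 4.0e4) -> float:
    """Particle-shot-noise scaling of the halo spin measurement error,
    assuming err ~ N^(-1/2), anchored to a reference error err_ref at
    n_ref particles (here the paper's quoted 10% at 4e4 particles)."""
    return err_ref * math.sqrt(n_ref / n_particles)
```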
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Weltzien, Ingunn H.
2016-09-01
Snow is an important and complicated element in hydrological modelling. The traditional catchment hydrological model, with its many free calibration parameters, also in the snow sub-models, is not a well-suited tool for predicting conditions for which it has not been calibrated. Such conditions include prediction in ungauged basins and assessing the hydrological effects of climate change. In this study, a new model for the spatial distribution of snow water equivalent (SWE), parameterized solely from the observed spatial variability of precipitation, is compared with the current snow distribution model used in the operational flood forecasting models in Norway. The former uses a dynamic gamma distribution and is called Snow Distribution_Gamma (SD_G), whereas the latter has a fixed, calibrated coefficient of variation, which parameterizes a log-normal model for snow distribution, and is called Snow Distribution_Log-Normal (SD_LN). The two models are implemented in the parameter-parsimonious rainfall-runoff model Distance Distribution Dynamics (DDD), and their capability for predicting runoff, SWE and snow-covered area (SCA) is tested and compared for 71 Norwegian catchments. The calibration period is 1985-2000 and the validation period is 2000-2014. Results show that SD_G better simulates SCA when compared with MODIS satellite-derived snow cover. In addition, SWE is simulated more realistically in that seasonal snow is melted out, and the building up of "snow towers", which gives spurious positive trends in SWE and is typical of SD_LN, is prevented. The precision of runoff simulations using SD_G is slightly inferior, with a reduction in the Nash-Sutcliffe and Kling-Gupta efficiency criteria of 0.01, but it is shown that the high precision in runoff prediction using SD_LN is accompanied by erroneous simulations of SWE.
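SD_LN parameterizes the snow distribution by a log-normal with a fixed coefficient of variation (CV). Converting a (mean, CV) pair into distribution parameters, for the log-normal and, for comparison, a gamma of equal mean and CV, is standard bookkeeping (the numbers in the usage note are illustrative, not catchment values):

```python
import math

def lognormal_params(mean: float, cv: float):
    """mu and sigma of a log-normal with the given arithmetic mean and CV."""
    sigma2 = math.log(1.0 + cv * cv)
    mu = math.log(mean) - 0.5 * sigma2
    return mu, math.sqrt(sigma2)

def gamma_params(mean: float, cv: float):
    """Shape k and scale theta of a gamma with the given mean and CV
    (gamma CV depends only on shape: CV = 1/sqrt(k))."""
    k = 1.0 / (cv * cv)
    return k, mean / k
```

For example, a mean SWE of 100 mm with CV 0.5 maps to a gamma with shape 4 and scale 25; the two families share mean and CV but differ in tail weight, which is what drives the "snow tower" behavior discussed above.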
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
The effect of modeled recharge distribution on simulated groundwater availability and capture.
Tillman, F D; Pool, D R; Leake, S A
2015-01-01
Simulating groundwater flow in basin-fill aquifers of the semiarid southwestern United States commonly requires decisions about how to distribute aquifer recharge. Precipitation can recharge basin-fill aquifers by direct infiltration and transport through faults and fractures in the high-elevation areas, by flowing overland through high-elevation areas to infiltrate at basin-fill margins along mountain fronts, by flowing overland to infiltrate along ephemeral channels that often traverse basins in the area, or by some combination of these processes. The importance of accurately simulating recharge distributions is a current topic of discussion among hydrologists and water managers in the region, but no comparative study has been performed to analyze the effects of different recharge distributions on groundwater simulations. This study investigates the importance of the distribution of aquifer recharge in simulating regional groundwater flow in basin-fill aquifers by calibrating a groundwater-flow model to four different recharge distributions, all with the same total amount of recharge. Similarities are seen in results from steady-state models for optimized hydraulic conductivity values, fit of simulated to observed hydraulic heads, and composite scaled sensitivities of conductivity parameter zones. Transient simulations with hypothetical storage properties and pumping rates produce similar capture rates and storage change results, but differences are noted in the rate of drawdown at some well locations owing to the differences in optimized hydraulic conductivity. Depending on whether the purpose of the groundwater model is to simulate changes in groundwater levels or changes in storage and capture, the distribution of aquifer recharge may or may not be of primary importance. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
ESIM_DSN Web-Enabled Distributed Simulation Network
NASA Technical Reports Server (NTRS)
Bedrossian, Nazareth; Novotny, John
2002-01-01
In this paper, the eSim(sup DSN) approach to achieving distributed simulation capability using the Internet is presented. With this approach, a complete simulation can be assembled from component subsystems that run on different computers. The subsystems interact with each other via the Internet. The distributed simulation uses a hub-and-spoke network topology. It provides the ability to dynamically link simulation subsystem models to different computers, as well as the ability to assign a particular model to each computer. A proof-of-concept demonstrator is also presented. The eSim(sup DSN) demonstrator can be accessed at http://www.jsc.draper.com/esim, which hosts various examples of Web-enabled simulations.
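A hub-and-spoke topology like the one described can be sketched in miniature with a publish/subscribe hub through which subsystems exchange state without talking to each other directly; the `Hub` class and signal names below are hypothetical, not part of eSim DSN:

```python
class Hub:
    """Minimal hub-and-spoke router: spokes publish named signals via the hub,
    and the hub forwards each signal to its subscribers."""
    def __init__(self):
        self.signals = {}        # latest value of each published signal
        self.subscriptions = {}  # signal name -> list of callbacks

    def publish(self, name, value):
        self.signals[name] = value
        for callback in self.subscriptions.get(name, []):
            callback(value)

    def subscribe(self, name, callback):
        self.subscriptions.setdefault(name, []).append(callback)

# Two toy subsystems exchange state only through the hub, never directly.
hub = Hub()
received = []
hub.subscribe("attitude", received.append)   # a "guidance" subsystem listens
hub.publish("attitude", (0.1, 0.2, 0.3))     # a "dynamics" subsystem publishes
```

Replacing the in-process callbacks with network sockets gives the Internet-distributed version of the same pattern.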
Distribution Feeder Modeling for Time-Series Simulation of Voltage Management Strategies: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giraldez Miner, Julieta I; Gotseff, Peter; Nagarajan, Adarsh
This paper presents techniques to create baseline distribution models using a utility feeder from Hawaiian Electric Company. It describes the software-to-software conversion, steady-state, and time-series validations of a utility feeder model. It also presents a methodology to add secondary low-voltage circuit models to accurately capture the voltage at the customer meter level. This enables preparing models to perform studies that simulate how customer-sited resources integrate into legacy utility distribution system operations.
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.; Lakshmi, V.
2017-12-01
Application of semi-distributed hydrologic modeling frameworks is a viable alternative to fully distributed hyper-resolution hydrologic models because of their computational efficiency while still resolving the fine-scale spatial structure of hydrologic fluxes and states. However, the fidelity of semi-distributed model simulations is impacted by (1) the formulation of hydrologic response units (HRUs), and (2) the aggregation of catchment properties into simulation elements. Here, we evaluate the performance of the recently developed Soil Moisture and Runoff simulation Toolkit (SMART) for large catchment-scale simulations. In SMART, topologically connected HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and the simulation elements are equivalent cross sections (ECSs) representative of a hillslope in first-order sub-basins. Earlier investigations have shown that formulating ECSs at the scale of a first-order sub-basin reduces computational time significantly without compromising simulation accuracy. However, this approach has not been fully explored for catchment-scale simulations. To assess SMART performance, we set up the model over the Little Washita watershed in Oklahoma. Model evaluations using in-situ soil moisture observations show satisfactory model performance. In addition, we evaluated the performance of a number of soil moisture disaggregation schemes recently developed to provide spatially explicit soil moisture outputs at fine resolution. Our results illustrate that the statistical disaggregation scheme performs significantly better than the methods based on topographic data. Future work will focus on assessing the performance of SMART against remotely sensed soil moisture observations using spatially based model evaluation metrics.
A Distributed Simulation Software System for Multi-Spacecraft Missions
NASA Technical Reports Server (NTRS)
Burns, Richard; Davis, George; Cary, Everett
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.
NASA Technical Reports Server (NTRS)
Lipatov, A. S.; Sittler, E. C., Jr.; Hartle, R. E.; Cooper, J. F.; Simpson, D. G.
2011-01-01
In this report we discuss ion velocity distribution dynamics from a 3D hybrid simulation. In our model the background, pickup, and ionospheric ions are treated as particles, whereas the electrons are described as a fluid. Inhomogeneous photoionization, electron-impact ionization, and charge exchange are included in the model. We also take into account collisions between ions and neutrals. The current simulation shows that mass loading by the pickup ions H(+), H2(+), CH4(+) and N2(+) is stronger than in previous simulations in which O+ ions are introduced into the background plasma. In our hybrid simulations we use Chamberlain profiles for the atmospheric components. We also include a simple ionosphere model with average-mass M = 28 amu ions generated inside the ionosphere. The moon is considered a weakly conducting body. Special attention is paid to comparing the simulated pickup ion velocity distribution with CAPS T9 observations. Our simulation shows an asymmetry of the ion density distribution and the magnetic field, including the formation of Alfvén wing-like structures. The simulation also shows that the ring-like velocity distribution of pickup ions relaxes to a Maxwellian core and a shell-like halo.
Simulation of financial market via nonlinear Ising model
NASA Astrophysics Data System (ADS)
Ko, Bonggyun; Song, Jae Wook; Chang, Woojin
2016-09-01
In this research, we propose a practical method for simulating financial return series whose distributions have a specific heaviness. We employ the Ising model to generate financial return series analogous to real series. The similarity between real financial return series and simulated ones is statistically verified on the basis of their stylized facts, including the power-law behavior of the tail distribution. We also suggest a scheme for setting the parameters in order to simulate financial return series with specific tail behavior. The simulation method introduced in this paper is expected to be applicable to other financial products whose price return distributions are fat-tailed.
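A minimal sketch of the idea generates a toy "return" series from magnetization changes of a Metropolis-simulated 2D Ising model; the lattice size, coupling strength, and the magnetization-to-return mapping below are illustrative assumptions, not the authors' calibrated scheme:

```python
import math
import random

def ising_returns(n_steps, size=16, beta=0.6, rng=None):
    """Generate a toy return series from magnetization changes of a 2D Ising
    model evolved by Metropolis sweeps (periodic boundaries, J = 1)."""
    rng = rng or random.Random(7)
    spins = [[rng.choice((-1, 1)) for _ in range(size)] for _ in range(size)]
    def magnetization():
        return sum(sum(row) for row in spins) / (size * size)
    returns = []
    m_prev = magnetization()
    for _ in range(n_steps):
        for _ in range(size * size):          # one Metropolis sweep
            i, j = rng.randrange(size), rng.randrange(size)
            nb = (spins[(i + 1) % size][j] + spins[(i - 1) % size][j]
                  + spins[i][(j + 1) % size] + spins[i][(j - 1) % size])
            dE = 2.0 * spins[i][j] * nb       # energy cost of flipping spin (i, j)
            if dE <= 0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -spins[i][j]
        m_now = magnetization()
        returns.append(m_now - m_prev)        # "return" = magnetization change
        m_prev = m_now
    return returns

series = ising_returns(300)
```

Tuning beta (the inverse temperature) is the kind of knob the paper's parameter-setting scheme adjusts to control tail heaviness.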
Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.
2017-05-04
The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility.This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels. 
Although increased model complexity resulted in improved calibrations, future application of the models with simulated particle tracking is anticipated, to evaluate whether these model design considerations are similarly important for the primary modeling objective: simulating reasonable groundwater age distributions.
NASA Astrophysics Data System (ADS)
Adams, P. J.; Marks, M.
2015-12-01
The aerosol indirect effect is the largest source of forcing uncertainty in current climate models. This effect arises from the influence of aerosols on the reflective properties and lifetimes of clouds, and its magnitude depends on how many particles can serve as cloud droplet formation sites. Assessing levels of this subset of particles (cloud condensation nuclei, or CCN) requires knowledge of aerosol levels and their global distribution, size distributions, and composition. A key tool for advancing our understanding of CCN is the use of global aerosol microphysical models, which simulate the processes that control aerosol size distributions: nucleation, condensation/evaporation, and coagulation. Previous studies have found important differences in CO (Chen, D. et al., 2009) and ozone (Jang, J., 1995) modeled at different spatial resolutions, and it is reasonable to believe that short-lived, spatially variable aerosol species will be similarly, or more, susceptible to model resolution effects. The goal of this study is to determine how CCN levels and spatial distributions change as simulations are run at higher spatial resolution; specifically, to evaluate how sensitive the model is to grid size and how this affects comparisons against observations. Higher-resolution simulations are necessary to support model/measurement synergy. Simulations were performed using the global chemical transport model GEOS-Chem (v9-02). The years 2008 and 2009 were simulated at 4° × 5° and 2° × 2.5° globally and at 0.5° × 0.667° over Europe and North America. Results were evaluated against surface-based particle size distribution measurements from the European Supersites for Atmospheric Aerosol Research project. The fine-resolution model simulates more spatial and temporal variability in ultrafine levels and better resolves topography. Results suggest that the coarse model predicts systematically lower ultrafine levels than does the fine-resolution model.
Significant differences are also evident with respect to model-measurement comparisons, and will be discussed.
Phase Distribution Phenomena for Simulated Microgravity Conditions: Experimental Work
NASA Technical Reports Server (NTRS)
Singhal, Maneesh; Bonetto, Fabian J.; Lahey, R. T., Jr.
1996-01-01
This report summarizes the work accomplished at Rensselaer to study phase distribution phenomena under simulated microgravity conditions. Our group at Rensselaer has been able to develop sophisticated analytical models to predict phase distribution in two-phase flows under a variety of conditions. These models are based on physics and on data obtained from carefully controlled experiments conducted here. These experiments also serve to verify the models developed.
Mapping the spatial distribution of Aedes aegypti and Aedes albopictus.
Ding, Fangyu; Fu, Jingying; Jiang, Dong; Hao, Mengmeng; Lin, Gang
2018-02-01
Mosquito-borne infectious diseases, such as Rift Valley fever, Dengue, Chikungunya and Zika, have caused mass human deaths, with their transnational expansion fueled by economic globalization. Simulating the distribution of the disease vectors is of great importance in formulating public health planning and disease control strategies. In the present study, we simulated the global distribution of Aedes aegypti and Aedes albopictus at a 5 × 5 km spatial resolution with high-dimensional multidisciplinary datasets and machine learning methods. Three relatively popular and robust machine learning models, including support vector machine (SVM), gradient boosting machine (GBM) and random forest (RF), were used. During the fine-tuning process based on training datasets of A. aegypti and A. albopictus, RF models achieved the highest performance with an area under the curve (AUC) of 0.973 and 0.974, respectively, followed by GBM (AUC of 0.971 and 0.972, respectively) and SVM (AUC of 0.963 and 0.964, respectively) models. The simulation difference between RF and GBM models was not statistically significant (p>0.05) based on the validation datasets, whereas statistically significant differences (p<0.05) were observed for RF and GBM simulations compared with SVM simulations. From the simulated maps derived from RF models, we observed that the distribution of A. albopictus was wider than that of A. aegypti along a latitudinal gradient. The discriminatory power of each factor in simulating the global distribution of the two species was also analyzed. Our results provide fundamental information for further study of disease transmission simulation and risk assessment. Copyright © 2017 Elsevier B.V. All rights reserved.
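The AUC values reported above can be computed directly from labels and scores without an ML library, since AUC equals the Mann-Whitney rank statistic; the presence/absence labels and suitability scores below are hypothetical:

```python
def auc_score(labels, scores):
    """Area under the ROC curve computed as the Mann-Whitney statistic: the
    probability that a random positive outranks a random negative
    (ties counted as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical presence (1) / absence (0) labels and model suitability scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
auc = auc_score(labels, scores)  # 8 of 9 positive-negative pairs ranked correctly
```

The quadratic pairwise loop is fine for illustration; a rank-based formulation scales to the large validation datasets a study like this uses.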
Simulation on Poisson and negative binomial models of count road accident modeling
NASA Astrophysics Data System (ADS)
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data often exhibit overdispersion. In addition, the data may contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, under the assumption that the dependent variable of the generated data follows a given distribution, namely the Poisson or the negative binomial distribution, with sample sizes ranging from n = 30 to n = 500. The study objective was accomplished by fitting Poisson regression, negative binomial regression, and hurdle negative binomial models to the simulated data. The fitted models were compared, and the simulation results show that, for each sample size, not every model fits the data well even when the data were generated from that model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes are associated with more zero accident counts in the dataset.
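A minimal sketch of generating equidispersed (Poisson) versus overdispersed (negative binomial, via a gamma-Poisson mixture) accident counts; the mean and dispersion values below are illustrative, not those of the study:

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's multiplication algorithm."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def neg_binomial(mean, dispersion, rng):
    """Negative binomial as a gamma-Poisson mixture: Var = mean + mean**2 / dispersion."""
    lam = rng.gammavariate(dispersion, mean / dispersion)
    return poisson(lam, rng)

rng = random.Random(1)
pois = [poisson(3.0, rng) for _ in range(20_000)]
nb = [neg_binomial(3.0, 1.5, rng) for _ in range(20_000)]
```

With these parameters the negative binomial sample has the same mean as the Poisson sample but roughly three times its variance, which is exactly the overdispersion the regression models are meant to capture.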
ERIC Educational Resources Information Center
Pant, Mohan Dev
2011-01-01
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…
NASA Astrophysics Data System (ADS)
Agaesse, Tristan; Lamibrac, Adrien; Büchi, Felix N.; Pauchet, Joel; Prat, Marc
2016-11-01
Understanding and modeling two-phase flows in the gas diffusion layer (GDL) of proton exchange membrane fuel cells are important in order to improve fuel cell performance. They are scientifically challenging because of the peculiarities of GDL microstructures. In the present work, simulations on a pore network model are compared to X-ray tomographic images of water distributions during an ex-situ water invasion experiment. A method based on watershed segmentation was developed to extract a pore network from the 3D segmented image of the dry GDL. Pore network modeling and a full morphology model were then used to perform two-phase simulations and compared to the experimental data. The results show good agreement between experimental and simulated microscopic water distributions. Pore network extraction parameters were also benchmarked using the experimental data and results from full morphology simulations.
A fortran program for Monte Carlo simulation of oil-field discovery sequences
Bohling, Geoffrey C.; Davis, J.C.
1993-01-01
We have developed a program for performing Monte Carlo simulation of oil-field discovery histories. A synthetic parent population of fields is generated as a finite sample from a distribution of specified form. The discovery sequence then is simulated by sampling without replacement from this parent population in accordance with a probabilistic discovery process model. The program computes a chi-squared deviation between synthetic and actual discovery sequences as a function of the parameters of the discovery process model, the number of fields in the parent population, and the distributional parameters of the parent population. The program employs the three-parameter log gamma model for the distribution of field sizes and employs a two-parameter discovery process model, allowing the simulation of a wide range of scenarios. © 1993.
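The sampling-without-replacement discovery process can be sketched as size-biased selection; the `beta` exponent, the log-normal parent population (standing in for the paper's three-parameter log gamma), and the population size below are illustrative assumptions:

```python
import random

def simulate_discovery(field_sizes, beta, rng):
    """Sample fields without replacement with probability proportional to
    size**beta (beta = 0: random order; beta = 1: strongly size-biased)."""
    remaining = list(field_sizes)
    sequence = []
    while remaining:
        weights = [s ** beta for s in remaining]
        total = sum(weights)
        r = rng.random() * total
        acc = 0.0
        for idx, w in enumerate(weights):
            acc += w
            if r <= acc:
                sequence.append(remaining.pop(idx))
                break
        else:                                   # float-rounding fallback
            sequence.append(remaining.pop())
    return sequence

rng = random.Random(3)
parent = [rng.lognormvariate(3.0, 1.0) for _ in range(200)]  # synthetic parent population
history = simulate_discovery(parent, beta=1.0, rng=rng)
```

With beta = 1 the large fields tend to be "found" early, reproducing the declining field-size trend that real discovery histories show.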
NASA Astrophysics Data System (ADS)
Dou, Zhi-Wu
2010-08-01
To address the inherent-safety problem that puzzles the coal mining industry, this paper analyzes the characteristics and applications of distributed interactive simulation based on the High Level Architecture (DIS/HLA) and proposes a new method for developing a distributed interactive simulation of coal mining inherent safety using HLA technology. After studying the function and structure of the system, a simple coal mining inherent-safety system is modeled with HLA, the FOM and SOM are developed, and the mathematical models are presented. The results of a case study show that HLA plays an important role in developing distributed interactive simulations of complicated distributed systems and that the method is valid for solving the problem puzzling the coal mining industry. For the coal mining industry, the conclusions show that a simulation system based on HLA helps to identify sources of hazard, to devise measures against accidents, and to improve the level of management.
Gerber, Daniel L.; Vossos, Vagelis; Feng, Wei; ...
2017-06-12
Direct current (DC) power distribution has recently gained traction in buildings research due to the proliferation of on-site electricity generation and battery storage, and an increasing prevalence of internal DC loads. The research discussed in this paper uses Modelica-based simulation to compare the efficiency of DC building power distribution with an equivalent alternating current (AC) distribution. The buildings are all modeled with solar generation, battery storage, and loads that are representative of the most efficient building technology. A variety of parametric simulations determine how and when DC distribution proves advantageous. These simulations also validate previous studies that use simpler approaches and arithmetic efficiency models. This work shows that using DC distribution can be considerably more efficient: a medium sized office building using DC distribution has an expected baseline of 12% savings, but may also save up to 18%. In these results, the baseline simulation parameters are for a zero net energy (ZNE) building that can island as a microgrid. DC is most advantageous in buildings with large solar capacity, large battery capacity, and high voltage distribution.
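At its core, the efficiency comparison reduces to products of conversion-stage efficiencies along each path; the stage values below are illustrative assumptions, not measurements from the study:

```python
def chain_efficiency(stages):
    """Overall efficiency of power-conversion stages in series (product of stages)."""
    eff = 1.0
    for stage_eff in stages:
        eff *= stage_eff
    return eff

# Illustrative (not measured) stage efficiencies for a PV-plus-battery path.
ac_path = chain_efficiency([0.96, 0.95, 0.94])  # PV inverter, battery converter, load supply
dc_path = chain_efficiency([0.98, 0.97])        # DC/DC stages only, no AC round trip
savings = 1.0 - ac_path / dc_path               # relative loss avoided by staying DC
```

Even these rough numbers give savings on the order of the paper's 12% baseline; the Modelica simulations replace this arithmetic with time-varying load, generation, and storage behavior.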
NASA Astrophysics Data System (ADS)
Anderson, Brian J.; Korth, Haje; Welling, Daniel T.; Merkin, Viacheslav G.; Wiltberger, Michael J.; Raeder, Joachim; Barnes, Robin J.; Waters, Colin L.; Pulkkinen, Antti A.; Rastaetter, Lutz
2017-02-01
Two of the geomagnetic storms for the Space Weather Prediction Center Geospace Environment Modeling challenge occurred after data were first acquired by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). We compare Birkeland currents from AMPERE with predictions from four models for the 4-5 April 2010 and 5-6 August 2011 storms. The four models are the Weimer (2005b) field-aligned current statistical model, the Lyon-Fedder-Mobarry magnetohydrodynamic (MHD) simulation, the Open Global Geospace Circulation Model MHD simulation, and the Space Weather Modeling Framework MHD simulation. The MHD simulations were run as described in Pulkkinen et al. (2013), and the results were obtained from the Community Coordinated Modeling Center. The total radial Birkeland current, ITotal, and the distribution of radial current density, Jr, for all models are compared with AMPERE results. While the total currents are well correlated, the quantitative agreement varies considerably. The Jr distributions reveal discrepancies between the models and observations related to the latitude distribution, morphologies, and lack of nightside current systems in the models. The results motivate enhancing the simulations, first by increasing the simulation resolution and then by examining the relative merits of implementing more sophisticated ionospheric conductance models, including ionospheric outflows or other omitted physical processes. Some aspects of the system, including substorm timing and location, may remain challenging to simulate, implying a continuing need for real-time specification.
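The "well correlated" comparison of total currents amounts to a Pearson coefficient between observed and modeled time series; the hourly values below are invented for illustration, not AMPERE data:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical total Birkeland current (MA): an observation and a model that
# tracks the storm's shape but underestimates its peak.
observed = [1.0, 1.5, 3.2, 5.8, 4.1, 2.2, 1.3]
modeled = [0.9, 1.2, 2.5, 4.0, 3.3, 1.9, 1.1]
r = pearson_r(observed, modeled)
```

A high r here illustrates the paper's point: correlation can be excellent even when the amplitudes (and hence the quantitative agreement) differ considerably.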
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, there are currently only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the "Framework for Network Co-Simulation" (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparn, Bethany F; Ruth, Mark F; Krishnamurthy, Dheepak
Many have proposed that responsive load provided by distributed energy resources (DERs) and demand response (DR) is an option for providing flexibility to the grid, and especially to distribution feeders. However, because responsive load involves a complex interplay between tariffs and DER and DR technologies, it is challenging to test and evaluate options without negatively impacting customers. This paper describes a hardware-in-the-loop (HIL) simulation system that has been developed to reduce the cost of evaluating the impact of advanced controllers (e.g., model predictive controllers) and technologies (e.g., responsive appliances). The HIL simulation system combines large-scale software simulation with a small set of representative building equipment hardware. It is used to perform HIL simulation of a distribution feeder and the loads on it under various tariff structures. In the reported HIL simulation, loads include many simulated air conditioners and one physical air conditioner. Independent model predictive controllers manage the operation of all air conditioners under a time-of-use tariff. Results from this HIL simulation and a discussion of future development work on the system are presented.
A Modular Framework for Modeling Hardware Elements in Distributed Engine Control Systems
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Culley, Dennis E.; Aretskin-Hariton, Eliot D.
2014-01-01
Progress toward the implementation of distributed engine control in an aerospace application may be accelerated through the development of a hardware-in-the-loop (HIL) system for testing new control architectures and hardware outside of a physical test cell environment. One component required in an HIL simulation system is a high-fidelity model of the control platform: sensors, actuators, and the control law. The control system developed for the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) provides a verifiable baseline for development of a model for simulating a distributed control architecture. This distributed controller model will contain enhanced hardware models, capturing the dynamics of the transducer and the effects of data processing, and a model of the controller network. A multilevel framework is presented that establishes three sets of interfaces in the control platform: communication with the engine (through sensors and actuators), communication between hardware and controller (over a network), and the physical connections within individual pieces of hardware. This introduces modularity at each level of the model, encouraging collaboration in the development and testing of various control schemes or hardware designs. At the hardware level, this modularity is leveraged through the creation of a Simulink(R) library containing blocks for constructing smart transducer models complying with the IEEE 1451 specification. These hardware models were incorporated in a distributed version of the baseline C-MAPSS40k controller and simulations were run to compare the performance of the two models. The overall tracking ability differed only due to quantization effects in the feedback measurements in the distributed controller. Additionally, it was also found that the added complexity of the smart transducer models did not prevent real-time operation of the distributed controller model, a requirement of an HIL system.
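The quantization effect noted in the feedback measurements can be sketched directly: rounding a measurement to the transducer's resolution bounds the error by half a least significant bit. The LSB value and signal below are illustrative, not C-MAPSS40k parameters:

```python
def quantize(value, lsb):
    """Quantize a measurement to the resolution (LSB) of a smart transducer."""
    return round(value / lsb) * lsb

# A smooth sensor signal sampled with and without quantization.
lsb = 0.05
signal = [0.001 * k for k in range(1000)]
errors = [abs(quantize(s, lsb) - s) for s in signal]
worst = max(errors)
```

This half-LSB bound is why the distributed controller's tracking differed from the baseline only through quantization of the feedback measurements.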
NASA Astrophysics Data System (ADS)
Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji
Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.
NASA Technical Reports Server (NTRS)
Bahrami, K. A.; Kirkham, H.; Rahman, S.
1986-01-01
In a series of tests performed under the auspices of the Department of Energy, power line carrier propagation was observed to be anomalous under certain circumstances. To investigate the cause, a distribution system simulator was constructed. The simulator was a physical simulator that accurately represented the distribution system from below power frequency to above 50 kHz. Effects such as phase-to-phase coupling and skin effect were modeled. Construction details of the simulator and experimental results from its use are presented.
Distributed Generation Market Demand Model | NREL
The Distributed Generation Market Demand (dGen) model simulates the potential adoption of distributed energy resources (DERs) for residential, commercial, and industrial entities. The dGen model can help develop deployment forecasts for distributed resources, including sensitivity to …
Defense Simulation Internet: next generation information highway.
Lilienthal, M G
1995-06-01
The Department of Defense has been engaged in the Defense Modeling and Simulation Initiative (DMSI) to provide advanced distributed simulation to warfighters in geographically distributed localities. Lessons learned from the Defense Simulation Internet (DSI) concerning architecture, standards, protocols, interoperability, information sharing, and distributed databases are equally applicable to telemedicine. Much of the vision and objectives of the DMSI are easily translated into the vision for worldwide telemedicine.
Li, Chunqing; Tie, Xiaobo; Liang, Kai; Ji, Chanjuan
2016-01-01
After conducting intensive research on the distribution of fluid velocity and biochemical reactions in the membrane bioreactor (MBR), this paper introduces the use of the mass-transfer differential equation to simulate the distribution of the chemical oxygen demand (COD) concentration in the MBR membrane pool. The solution proceeds as follows: first, use computational fluid dynamics to establish a flow control equation model of the fluid in the MBR membrane pool; second, solve this model by direct numerical simulation to obtain the velocity field of the fluid in the membrane pool; third, combine the velocity-field data to establish a mass-transfer differential equation model for the concentration field in the MBR membrane pool, and use the Seidel iteration method to solve the equation model; last but not least, substitute real factory data into the velocity and concentration field models to calculate simulation results, and use the visualization software Tecplot to display them. Finally, by analyzing the nephogram of the COD concentration distribution, it can be found that the simulation result conforms to the distribution rule of the COD concentration in the real membrane pool, and that the mass-transfer phenomenon is affected by the velocity field of the fluid in the membrane pool. The simulation results of this paper have reference value for the design optimization of real MBR systems.
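The solution strategy described above, discretizing a transport equation and solving it by Seidel (Gauss-Seidel) iteration, can be sketched in a much-reduced 1-D form. This is an illustrative reconstruction, not the authors' code: the grid size, upwind discretization, and first-order consumption rate `k` are all assumptions standing in for the full 3-D MBR model.

```python
def cod_profile(u, D, k, dx, c_in, n=50, n_iter=2000):
    """Steady 1-D advection-dispersion-consumption balance
        u*dc/dx = D*d2c/dx2 - k*c
    discretized with upwind advection and solved by Gauss-Seidel
    (Seidel) iteration, the same linear-solver idea the paper applies
    to the full concentration field."""
    c = [0.0] * n
    c[0] = c_in                           # inlet (Dirichlet) boundary
    low = D / dx**2 + u / dx              # coefficient of c[i-1]
    up = D / dx**2                        # coefficient of c[i+1]
    diag = 2.0 * D / dx**2 + u / dx + k   # diagonal (dominant) term
    for _ in range(n_iter):
        for i in range(1, n - 1):
            c[i] = (low * c[i - 1] + up * c[i + 1]) / diag
        c[-1] = c[-2]                     # zero-gradient outlet
    return c
```

Because the discrete system is diagonally dominant, the sweep converges; the resulting profile decays downstream as the consumption term removes COD.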
Cifuentes, L.A.; Schemel, L.E.; Sharp, J.H.
1990-01-01
The effects of river inflow variations on alkalinity/salinity distributions in San Francisco Bay and nitrate/salinity distributions in Delaware Bay are described. One-dimensional, advective-dispersion equations for salinity and the dissolved constituents are solved numerically and are used to simulate mixing in the estuaries. These simulations account for time-varying river inflow, variations in estuarine cross-sectional area, and longitudinally varying dispersion coefficients. The model simulates field observations better than models that use constant hydrodynamic coefficients and uniform estuarine geometry. Furthermore, field observations and model simulations are consistent with theoretical 'predictions' that the curvature of property-salinity distributions depends on the relation between the estuarine residence time and the period of river concentration variation.
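A minimal version of the numerical approach the abstract describes, explicit finite differences for a 1-D advective-dispersion equation with time-varying inflow velocity, might look like the following sketch. The constant dispersion coefficient and uniform grid are simplifying assumptions; the paper's models additionally vary cross-sectional area and dispersion along the estuary.

```python
def advect_disperse(c0, u_of_t, D, dx, dt, n_steps):
    """Explicit upwind integration of dc/dt = -u(t)*dc/dx + D*d2c/dx2,
    the 1-D advective-dispersion balance used for salinity and dissolved
    constituents. Stability needs dt*(u/dx + 2*D/dx**2) <= 1."""
    c = list(c0)
    n = len(c)
    for step in range(n_steps):
        u = u_of_t(step * dt)                # time-varying river inflow
        new = c[:]
        for i in range(1, n - 1):
            adv = -u * (c[i] - c[i - 1]) / dx              # upwind, u > 0
            dif = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx**2
            new[i] = c[i] + dt * (adv + dif)
        new[-1] = new[-2]                    # open downstream boundary
        c = new
    return c
```

With a stable time step the scheme is a convex combination of neighbouring values, so concentrations stay within the range of the initial and boundary data (a discrete maximum principle).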
Numerical simulation of a horizontal sedimentation tank considering sludge recirculation.
Zhang, Wei; Zou, Zhihong; Sui, Jun
2010-01-01
Most research conducted on the concentration distribution of sediment in the sedimentation tank does not consider the role of the suction dredge. To analyze concentration distribution more accurately, a suspended sediment transportation model was constructed and the velocity field in the sedimentation tank was determined based on the influence of the suction dredge. An application model was then used to analyze the concentration distribution in the sedimentation tank when the suction dredge was fixed, with results showing that distribution was in accordance with theoretical analysis. The simulated value of the outlet concentration was similar to the experimental value, and the trends of the isoconcentration distribution curves, as well as the vertical distribution curves of the five monitoring sections acquired through simulations, were almost the same as curves acquired through experimentation. The differences between the simulated values and the experimental values were not significant.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
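The queue-based distribution pattern described above can be sketched as follows. The paper's platform runs JavaScript in volunteer browsers with a relational database acting as the queue; this Python sketch substitutes threads for volunteer nodes and an in-process queue for the database, purely to illustrate the task-farming structure.

```python
import queue
import threading

def run_distributed(tasks, n_workers=4):
    """Central work queue hands small simulation chunks to volunteer
    'nodes' (threads stand in for browsers here); results are collected
    under a lock for reassembly. The payload computation is a
    placeholder for running one model chunk."""
    work = queue.Queue()
    for task in tasks:
        work.put(task)
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task_id, payload = work.get_nowait()
            except queue.Empty:
                return                      # queue drained: node retires
            value = sum(payload)            # stand-in for a model run
            with lock:
                results[task_id] = value

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In the real system the queue must also handle nodes that disappear mid-task (volunteers closing the page), typically by re-queuing chunks after a timeout.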
The Distributed Space Exploration Simulation (DSES)
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Chung, Victoria I.; Blum, Mike G.; Bowman, James D.
2007-01-01
The paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which focuses on the investigation and development of technologies, processes and integrated simulations related to the collaborative distributed simulation of complex space systems in support of NASA's Exploration Initiative. This paper describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. In the network work area, DSES is developing a Distributed Simulation Network that will provide agency-wide support for distributed simulation between all NASA centers. In the software work area, DSES is developing a collection of software models, tools and procedures that ease the burden of developing distributed simulations and provide a consistent interoperability infrastructure for agency-wide participation in integrated simulation. Finally, for simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA development of new exploration spacecraft and missions. This paper presents the current status and plans for each of these work areas, with specific examples of simulations that support NASA's exploration initiatives.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
Improving the Aircraft Design Process Using Web-Based Modeling and Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Follen, Gregory J.; Afjeh, Abdollah A.; Follen, Gregory J. (Technical Monitor)
2000-01-01
Designing and developing new aircraft systems is time-consuming and expensive. Computational simulation is a promising means for reducing design cycle times, but requires a flexible software environment capable of integrating advanced multidisciplinary and multifidelity analysis methods, dynamically managing data across heterogeneous computing platforms, and distributing computationally complex tasks. Web-based simulation, with its emphasis on collaborative composition of simulation models, distributed heterogeneous execution, and dynamic multimedia documentation, has the potential to meet these requirements. This paper outlines the current aircraft design process, highlighting its problems and complexities, and presents our vision of an aircraft design process using Web-based modeling and simulation.
NASA Technical Reports Server (NTRS)
Davis, George; Cary, Everett; Higinbotham, John; Burns, Richard; Hogie, Keith; Hallahan, Francis
2003-01-01
The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements and addressed functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival, and distribution.
Model improvements to simulate charging in SEM
NASA Astrophysics Data System (ADS)
Arat, K. T.; Klimpel, T.; Hagen, C. W.
2018-03-01
Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.
Parallel computing method for simulating hydrological processesof large rivers under climate change
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, Y.
2016-12-01
Climate change is one of the most widely recognized global environmental problems. It has altered the spatial and temporal distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological simulation based on physically based distributed hydrological models can produce better results than lumped models. However, such simulation involves a very large amount of computation, especially for large rivers, and therefore requires computing resources that may not be steadily available to researchers, or are available only at high expense; this has seriously restricted research and application. Existing parallel methods mostly parallelize over the space and time dimensions, calculating the natural features in order (unit by unit, sub-basin by sub-basin) from upstream to downstream on the distributed hydrological model grid. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of the distributed hydrological model with distributed data storage, an in-memory database, distributed computing, and parallel computing based on units of computing power. The method is highly adaptable and extensible: it can make full use of computing and storage resources when resources are limited, and computing efficiency improves linearly as computing resources increase. The method can satisfy the parallel computing requirements of hydrological process simulation in small, medium, and large rivers.
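The parallel structure the abstract describes, independent per-sub-basin computation followed by ordered accumulation, can be illustrated with a toy sketch. The runoff function and the simple summation "routing" are placeholders, not the model's physics; a thread pool stands in for the distributed computing layer.

```python
from concurrent.futures import ThreadPoolExecutor

def subbasin_runoff(args):
    """Toy runoff response for one sub-basin: rainfall minus a constant
    loss, floored at zero. A stand-in for the expensive physical model
    that dominates run time in a real distributed simulation."""
    basin_id, rainfall, loss = args
    return basin_id, [max(r - loss, 0.0) for r in rainfall]

def simulate_parallel(rainfall_by_basin, loss=2.0, workers=4):
    """Run the independent per-sub-basin computations in parallel, then
    accumulate the runoff series at the outlet. The routing step stays
    sequential, mirroring the upstream-to-downstream dependency the
    abstract describes."""
    tasks = [(b, r, loss) for b, r in sorted(rainfall_by_basin.items())]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(subbasin_runoff, tasks))
    length = len(next(iter(results.values())))
    outlet = [0.0] * length
    for series in results.values():
        outlet = [a + b for a, b in zip(outlet, series)]
    return outlet
```

In a production system the per-basin work would go to separate processes or machines; the split between embarrassingly parallel hillslope work and sequential channel routing is the key design point.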
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
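The general recipe behind such a study, simulating the sampling distribution of a standardized statistic and comparing it with its theoretical reference, can be sketched generically. U3 itself depends on the fitted IRT model and is not reproduced here; a standardized mean of Bernoulli item scores serves as a stand-in statistic whose theoretical sampling distribution is approximately standard normal.

```python
import math
import random
import statistics

def empirical_sampling_distribution(statistic, simulate, n_rep=2000, seed=1):
    """Core loop of a simulation study like this one: repeatedly generate
    data under the model, compute the standardized statistic on each
    data set, and summarize the empirical distribution for comparison
    with the theoretical reference (here, the standard normal)."""
    rng = random.Random(seed)
    values = [statistic(simulate(rng)) for _ in range(n_rep)]
    return statistics.mean(values), statistics.stdev(values)

def z_of_mean(scores, p=0.5):
    """Standardized mean of 0/1 item scores: a toy stand-in for a
    standardized person-fit statistic."""
    n = len(scores)
    return (sum(scores) / n - p) / math.sqrt(p * (1 - p) / n)

def bernoulli_items(rng, n=30, p=0.5):
    """Simulate one examinee's 0/1 responses to n equally easy items."""
    return [1 if rng.random() < p else 0 for _ in range(n)]
```

If the theoretical distribution is adequate, the empirical mean should be near 0 and the empirical standard deviation near 1; deviations of the kind the paper reports would show up as systematic departures from those values.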
Wang, Wei; Lu, Hui; Yang, Dawen; Sothea, Khem; Jiao, Yang; Gao, Bin; Peng, Xueting; Pang, Zhiguo
2016-01-01
The Mekong River is the most important river in Southeast Asia. It has increasingly suffered from water-related problems due to economic development, population growth and climate change in the surrounding areas. In this study, we built a distributed Geomorphology-Based Hydrological Model (GBHM) of the Mekong River using remote sensing data and other publicly available data. Two numerical experiments were conducted using different rainfall data sets as model inputs. The data sets included rain gauge data from the Mekong River Commission (MRC) and remote sensing rainfall data from the Tropical Rainfall Measuring Mission (TRMM 3B42V7). Model calibration and validation were conducted for the two rainfall data sets. Compared to the observed discharge, both the gauge simulation and TRMM simulation performed well during the calibration period (1998–2001). However, the performance of the gauge simulation was worse than that of the TRMM simulation during the validation period (2002–2012). The TRMM simulation is more stable and reliable at different scales. Moreover, the calibration period was changed to 2, 4, and 8 years to test the impact of the calibration period length on the two simulations. The results suggest that longer calibration periods improved the GBHM performance during validation periods. In addition, the TRMM simulation is more stable and less sensitive to the calibration period length than is the gauge simulation. Further analysis reveals that the uneven distribution of rain gauges makes the input rainfall data less representative and more heterogeneous, worsening the simulation performance. Our results indicate that remotely sensed rainfall data may be more suitable for driving distributed hydrologic models, especially in basins with poor data quality or limited gauge availability. PMID:27010692
Barlow, P.M.; Wagner, B.J.; Belitz, K.
1996-01-01
The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiike, S.; Okazaki, Y.
This paper describes a performance estimation tool developed for modeling and simulation of open distributed energy management systems to support their design. The approach of discrete event simulation with detailed models is considered for efficient performance estimation. The tool includes basic models constituting a platform, e.g., Ethernet, communication protocol, operating system, etc. Application software is modeled by specifying CPU time, disk access size, communication data size, etc. Different types of system configurations for various system activities can be easily studied. Simulation examples show how the tool is utilized for the efficient design of open distributed energy management systems.
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). 
We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
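A minimal conditional "spin" simulation in the spirit of the Ising-based approach can be sketched as follows: Metropolis updates impose nearest-neighbour spatial correlation while skipping conditioning sites, so sample values are honored exactly. The lattice size, inverse temperature `beta`, and sweep count are illustrative assumptions, and the cost-function constraint on global statistics described in the abstract is not included.

```python
import math
import random

def conditional_ising(grid, fixed, beta=1.0, sweeps=200, seed=7):
    """Conditional simulation of +/-1 'spins' on an n x n lattice:
    Metropolis updates that skip the conditioning (sample) sites, so
    the realization honors the data locally while nearest-neighbour
    interactions impose spatial correlation."""
    rng = random.Random(seed)
    n = len(grid)

    def flip_cost(i, j):
        # Energy change of flipping spin (i, j) under a ferromagnetic
        # nearest-neighbour interaction with free boundaries.
        s = 0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:
                s += grid[ni][nj]
        return 2 * grid[i][j] * s

    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                if (i, j) in fixed:
                    continue              # conditioning data stay untouched
                dE = flip_cost(i, j)
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    grid[i][j] = -grid[i][j]
    return grid
```

The multilevel scheme in the paper repeats this idea per discretization threshold, passing information from lower to higher levels; a single binary level is shown here.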
Simulation study on the impact of air distribution on formaldehyde pollutant distribution in room
NASA Astrophysics Data System (ADS)
Wu, Jingtao; Wang, Jun; Cheng, Zhu
2017-01-01
In this paper, a physical and mathematical model of a room was established using the Airpak software. The velocity distribution, air age distribution, formaldehyde concentration distribution, and Predicted Mean Vote (PMV) and Predicted Percentage Dissatisfied (PPD) distributions in a hospital ward were simulated. In addition, the change in indoor pollutant concentration distribution when the supply air volume was doubled was simulated, along with the resulting change in air age. The simulation can help determine the placement of the air supply port, which is necessary for increasing the comfort of occupants in the room. Finally, the simulated pollutant concentration distributions show that when the indoor pollutant concentration is high, the supply air flow rate should be increased appropriately so that indoor pollutants are discharged as quickly as possible, which benefits human health.
An Overview of the Distributed Space Exploration Simulation (DSES) Project
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Chung, Victoria I.; Blum, Michael G.; Bowman, James D.
2007-01-01
This paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which investigates technologies and processes related to integrated, distributed simulation of complex space systems in support of NASA's Exploration Initiative. In particular, it describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. With regard to network infrastructure, DSES is developing a Distributed Simulation Network for use by all NASA centers. With regard to software, DSES is developing software models, tools and procedures that streamline distributed simulation development and provide an interoperable infrastructure for agency-wide integrated simulation. Finally, with regard to simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA development of new exploration spacecraft and missions. This paper presents the current status and plans for these three areas, including examples of specific simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trent, D.S.; Eyler, L.L.
In this study several aspects of simulating hydrogen distribution in geometric configurations relevant to reactor containment structures were investigated using the TEMPEST computer code. Of particular interest was the performance of the TEMPEST turbulence model in a density-stratified environment. Computed results illustrated that the TEMPEST numerical procedures predicted the measured phenomena with good accuracy under a variety of conditions and that the turbulence model used is a viable approach in complex turbulent flow simulation.
Modeling the VARTM Composite Manufacturing Process
NASA Technical Reports Server (NTRS)
Song, Xiao-Lan; Loos, Alfred C.; Grimsley, Brian W.; Cano, Roberto J.; Hubert, Pascal
2004-01-01
A comprehensive simulation model of the Vacuum Assisted Resin Transfer Molding (VARTM) composite manufacturing process has been developed. For isothermal resin infiltration, the model incorporates submodels that describe cure of the resin, changes in resin viscosity due to cure, resin flow through the reinforcement preform and distribution medium, and compaction of the preform during infiltration. The accuracy of the model was validated by measuring the flow patterns during resin infiltration of flat preforms. The modeling software was used to evaluate the effects of the distribution medium on resin infiltration of a flat preform. Different distribution medium configurations were examined using the model, and the results were compared with data collected during resin infiltration of a carbon fabric preform. The results of the simulations show that the approach used to model the distribution medium can significantly affect the predicted resin infiltration times. Resin infiltration into the preform can be accurately predicted only when the distribution medium is modeled correctly.
NASA Astrophysics Data System (ADS)
Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten
2017-07-01
Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). As a demonstrative exercise, we investigated model performance across 10,000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and support more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
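As an illustrative sketch of the hierarchical constraint idea described above (not code from the study; the parameters, thresholds, and constraint tests are invented for illustration), behavioral parameter sets can be filtered by applying each constraint class in turn:

```python
import random

def behavioral_sets(param_sets, constraints):
    """Apply a hierarchy of constraints in order, keeping only the parameter
    sets that satisfy every constraint seen so far (GLUE-style filtering)."""
    surviving = list(param_sets)
    history = []
    for name, test in constraints:
        surviving = [p for p in surviving if test(p)]
        history.append((name, len(surviving)))
    return surviving, history

# Toy example: two hypothetical parameters and three nested constraint classes.
random.seed(1)
candidates = [{"k": random.uniform(0, 10), "s": random.uniform(0, 5)}
              for _ in range(10000)]
constraints = [
    ("regional signature", lambda p: 2 < p["k"] < 8),       # e.g. runoff-ratio bounds
    ("hydrograph fit",     lambda p: abs(p["k"] - 5) < 1),  # e.g. efficiency threshold
    ("expert knowledge",   lambda p: p["s"] < 2),           # e.g. water-table pattern
]
final, history = behavioral_sets(candidates, constraints)
for name, n in history:
    print(f"{name}: {n} sets remain")
```

Each successive constraint class can only shrink the behavioral subset, mirroring the monotone reduction the study reports for Stringer Creek.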
Parallel discrete event simulation using shared memory
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1988-01-01
With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
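The conservative Chandy-Misra scheme referenced above relies on timestamped channels and null messages to guarantee causality without a global event list. A minimal sketch of the core rule — always consume from the input channel with the lowest clock, with null messages announcing "no job before this time" — follows; the topology and timings are hypothetical, not the paper's experiments:

```python
from collections import deque

LOOKAHEAD = 1.0  # minimum service delay; guarantees progress via null messages

def run_pipeline(events_a1, end_time):
    """Two source LPs feed one server LP. A1 emits real jobs; A2 is idle and
    emits only null messages, without which the server could never proceed."""
    chan1, chan2 = deque(), deque()
    # Source A1: real jobs, plus a final null advancing its channel clock.
    for t in events_a1:
        chan1.append((t, "job"))
    chan1.append((end_time, "null"))
    # Source A2: periodic nulls announcing "no job before t + LOOKAHEAD".
    t = 0.0
    while t <= end_time:
        chan2.append((t + LOOKAHEAD, "null"))
        t += LOOKAHEAD
    processed = []
    while chan1 and chan2:
        # Conservative rule: only consume from the channel with the lower clock,
        # so no earlier message can ever arrive on the other channel.
        if chan1[0][0] <= chan2[0][0]:
            ts, kind = chan1.popleft()
        else:
            ts, kind = chan2.popleft()
        if kind == "job":
            processed.append(ts)
    return processed

print(run_pipeline([0.5, 1.7, 3.2], end_time=5.0))  # → [0.5, 1.7, 3.2]
```

Without A2's null messages, the server would block forever waiting to learn that channel's clock — the deadlock the null-message protocol exists to prevent.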
NASA Astrophysics Data System (ADS)
Ajami, H.; Sharma, A.
2016-12-01
A computationally efficient, semi-distributed hydrologic modeling framework is developed to simulate water balance at the catchment scale. The Soil Moisture and Runoff simulation Toolkit (SMART) is based upon the delineation of contiguous and topologically connected Hydrologic Response Units (HRUs). In SMART, HRUs are delineated using thresholds obtained from topographic and geomorphic analysis of a catchment, and simulation elements are distributed cross sections or equivalent cross sections (ECSs) delineated in first-order sub-basins. ECSs are formulated by aggregating topographic and physiographic properties of part or all of a first-order sub-basin to further reduce computational time in SMART. Previous investigations using SMART have shown that the temporal dynamics of soil moisture are well captured at the HRU level using the ECS delineation approach; however, the spatial variability of soil moisture within a given HRU is ignored. Here, we examine a number of disaggregation schemes for soil moisture distribution in each HRU. The disaggregation schemes are based either on topographic indices or on a covariance matrix obtained from distributed soil moisture simulations. To assess the performance of the disaggregation schemes, soil moisture simulations from an integrated land surface-groundwater model, ParFlow.CLM, in the Baldry sub-catchment, Australia, are used. ParFlow is a variably saturated sub-surface flow model coupled to the Common Land Model (CLM). Our results illustrate that the statistical disaggregation scheme performs better than the methods based on topographic data in approximating soil moisture distribution at a 60 m scale. Moreover, the statistical disaggregation scheme maintains the temporal correlation of simulated daily soil moisture while preserving the mean sub-basin soil moisture. Future work is focused on assessing the performance of this scheme in catchments with various topographic and climate settings.
Measuring Dark Matter With MilkyWay@home
NASA Astrophysics Data System (ADS)
Shelton, Siddhartha; Newberg, Heidi Jo; Arsenault, Matthew; Bauer, Jacob; Desell, Travis; Judd, Roland; Magdon-Ismail, Malik; Newby, Matthew; Rice, Colin; Thompson, Jeffrey; Ulin, Steve; Weiss, Jake; Widrow, Larry
2016-01-01
We perform N-body simulations of two-component dwarf galaxies (dark matter and stars follow separate distributions) falling into the Milky Way and forming tidal streams. Using MilkyWay@home, we optimize the parameters of the progenitor dwarf galaxy and the orbital time to fit the simulated distribution of stars along the tidal stream to the observed distribution of stars. Our initial dwarf galaxy models are constructed with two separate Plummer profiles (one for the dark matter and one for the baryonic matter), sampled using a generalized distribution function for spherically symmetric systems. We perform rigorous testing to ensure that our simulated galaxies are in virial equilibrium and stable over the simulation time. The N-body simulations are performed using a Barnes-Hut tree algorithm. Optimization traverses the likelihood surface over our six model parameters using particle swarm and differential evolution methods. We have generated simulated data with known model parameters that are similar to those of the Orphan Stream. We show that we are able to recover a majority of our model parameters, most importantly the mass-to-light ratio of the now-disrupted progenitor galaxy, using MilkyWay@home. This research is supported by generous gifts from the Marvin Clan, Babette Josephs, Manit Limlamai, and the MilkyWay@home volunteers.
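Sampling a Plummer profile, as in the initial conditions above, can be done by inverting the enclosed-mass fraction M(r)/M = r³/(r² + a²)^(3/2). The following sketch covers positions only (the full study also samples velocities from a distribution function):

```python
import math, random

def sample_plummer(n, a=1.0, seed=0):
    """Draw n positions from a Plummer sphere of scale radius a by inverse
    transform sampling of the enclosed-mass fraction, then picking a
    uniformly random direction on the sphere for each radius."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        u = max(rng.random(), 1e-12)           # mass fraction in (0, 1)
        r = a / math.sqrt(u ** (-2.0 / 3.0) - 1.0)
        cos_t = rng.uniform(-1.0, 1.0)         # uniform direction on the sphere
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        pts.append((r * sin_t * math.cos(phi),
                    r * sin_t * math.sin(phi),
                    r * cos_t))
    return pts

pts = sample_plummer(20000)
radii = sorted(math.sqrt(x * x + y * y + z * z) for x, y, z in pts)
half_mass = radii[len(radii) // 2]  # empirical half-mass radius
print(round(half_mass, 2))
```

The sampled half-mass radius should approach the analytic value a/√(2^(2/3) − 1) ≈ 1.305a as n grows.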
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
ERIC Educational Resources Information Center
Jewett, Frank
These instructions describe the use of BRIDGE, a computer software simulation model that is designed to compare the costs of expanding a college campus using distributed instruction (television or asynchronous network courses) versus the costs of expanding using lecture/lab type instruction. The model compares the projected operating and capital…
Letsinger, S.L.; Olyphant, G.A.
2007-01-01
A distributed energy-balance model was developed for simulating snowpack evolution and melt in rugged terrain. The model, which was applied to a 43-km2 watershed in the Tobacco Root Mountains, Montana, USA, used measured ambient data from nearby weather stations to drive energy-balance calculations and to constrain the model of Liston and Sturm [Liston, G.E., Sturm, M., 1998. A snow-transport model for complex terrain. Journal of Glaciology 44 (148), 498-516] for calculating the initial snowpack thickness. Simulated initial snow-water equivalent ranged between 1 cm and 385 cm w.e. (water equivalent) with high values concentrated on east-facing slopes below tall summits. An interpreted satellite image of the snowcover distribution on May 6, 1998, closely matched the simulated distribution with the greatest discrepancy occurring in the floor of the main trunk valley. Model simulations indicated that snowmelt commenced early in the melt season, but rapid meltout of snow cover did not occur until after the average energy balance of the entire watershed became positive about 45 days into the melt season. Meltout was fastest in the lower part of the watershed where warmer temperatures and tree cover enhanced the energy income of the underlying snow. An interpreted satellite image of the snowcover distribution on July 9, 1998 compared favorably with the simulated distribution, and melt curves for modeled canopy-covered cells mimicked the trends measured at nearby snow pillow stations. By the end of the simulation period (August 3), 28% of the watershed remained snow covered, most of which was concentrated in the highest parts of the watershed where initially thick accumulations had been shaded by surrounding summits. The results of this study provide further demonstration of the critical role that topography plays in the timing and magnitude of snowmelt from high mountain watersheds. © 2006 Elsevier B.V. All rights reserved.
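The energy-balance logic above can be caricatured in a few lines: melt depth per time step follows from the net energy flux divided by the latent heat of fusion, and topographic shading suppresses melt. This is a deliberately toy sketch — the cell values, fluxes, and shading factor are invented, and the actual model resolves full radiative and turbulent fluxes:

```python
RHO_W = 1000.0  # density of water, kg m^-3
L_F = 3.34e5    # latent heat of fusion, J kg^-1

def daily_melt(swe_cm, q_net_wm2, shade_frac):
    """Melt one grid cell for one day. q_net_wm2 is the net energy flux of an
    unshaded cell; shade_frac (0-1) crudely reduces the energy available."""
    q = q_net_wm2 * (1.0 - shade_frac)
    if q <= 0.0:
        return swe_cm                              # no melt on negative balance
    melt_m = q * 86400.0 / (RHO_W * L_F)           # m of water melted per day
    return max(0.0, swe_cm - melt_m * 100.0)       # remaining SWE in cm w.e.

# A shaded high cell keeps snow far longer than an open low cell.
open_cell, shaded_cell = 50.0, 50.0
for day in range(30):
    open_cell = daily_melt(open_cell, 80.0, 0.0)
    shaded_cell = daily_melt(shaded_cell, 80.0, 0.6)
print(round(open_cell, 1), round(shaded_cell, 1))
```

The shaded cell retains roughly half its snow after the open cell has melted out, illustrating the topographic persistence the study documents.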
Parallelization and automatic data distribution for nuclear reactor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebrock, L.M.
1997-07-01
Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine cannot run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel, with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
System analysis for the Huntsville Operational Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mauldin, J.
1984-01-01
The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real-time data acquisition, analysis, and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose of this work is to provide a HOSC system simulation model that can be used to investigate the effects of various HOSC system configurations. Such a model is valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting, and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL, and results for various system configurations were obtained. A tutorial of the model is presented along with the results of simulation runs. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switch-over from contention to priority mode under high channel loading.
NASA Astrophysics Data System (ADS)
Penot, David; Paquet, Emmanuel; Lang, Michel
2014-05-01
SCHADEX is a probabilistic method for extreme flood estimation, developed and applied since 2006 at Electricité de France (EDF) for dam spillway design [Paquet et al., 2013]. SCHADEX is based on a semi-continuous rainfall-runoff simulation process. The method has been built around two models: a Multi-Exponential Weather Pattern (MEWP) distribution for rainfall probability estimation [Garavaglia et al., 2010] and the MORDOR hydrological model. To use SCHADEX in an ungauged context, the rainfall distribution and the hydrological model must be regionalized. The regionalization of the MEWP rainfall distribution can be managed with SPAZM, a daily rainfall interpolator [Gottardi et al., 2012] which provides reasonable estimates of point and areal rainfall up to high quantiles. The main issue remains regionalizing MORDOR, which is heavily parametrized. A much simpler model has been considered: the SCS model. It is a well-known model for event simulation [USDA SCS, 1985; Beven, 2003] and it relies on only one parameter. The idea is therefore to use the SCS model instead of MORDOR within a simplified stochastic simulation scheme to produce a distribution of flood volume from an exhaustive crossing between rainy events and catchment saturation hazards. The presentation details this process and its capacity to generate a runoff distribution based on the catchment areal rainfall distribution. The simulation method depends on a unique parameter Smax, the maximum initial loss of the catchment. An initial loss S (between zero and Smax) can then be drawn to account for the variability of catchment state (between dry and saturated). The distribution of initial loss (or, conversely, of catchment saturation, as modeled by MORDOR) seems closely linked to the catchment's regime, and is therefore easy to regionalize. The simulation takes into account a snow contribution for snow-driven catchments, as well as an antecedent runoff.
The presentation shows the results of this stochastic procedure applied to 80 French catchments and its capacity to represent the asymptotic behaviour of the runoff distribution. References: K. J. Beven. Rainfall-Runoff Modelling: The Primer. British Library, 2003. F. Garavaglia, J. Gailhard, E. Paquet, M. Lang, R. Garçon, and P. Bernardara. Introducing a rainfall compound distribution model based on weather patterns sub-sampling. Hydrology and Earth System Sciences, 14(6):951-964, 2010. F. Gottardi, C. Obled, J. Gailhard, and E. Paquet. Statistical reanalysis of precipitation fields based on ground network data and weather patterns: application over French mountains. Journal of Hydrology, 432-433:154-167, 2012. ISSN 0022-1694. E. Paquet, F. Garavaglia, R. Garçon, and J. Gailhard. The SCHADEX method: a semi-continuous rainfall-runoff simulation for extreme flood estimation. Journal of Hydrology, 2013. USDA SCS. National Engineering Handbook, Supplement A, Section 4, Chapter 10. Washington, DC, 1985.
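The single-parameter SCS model discussed above computes event runoff as Q = (P − Ia)² / (P − Ia + S), with initial abstraction Ia = 0.2S [USDA SCS, 1985]. A sketch of the stochastic crossing of rainfall and saturation hazards follows; the rainfall distribution and the Smax value are illustrative, not EDF's regionalized estimates:

```python
import random

def scs_runoff(p_mm, s_mm, ia_ratio=0.2):
    """SCS curve-number event runoff: Q = (P - Ia)^2 / (P - Ia + S),
    with initial abstraction Ia = ia_ratio * S."""
    ia = ia_ratio * s_mm
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s_mm)

# Cross a rainfall hazard with a catchment-saturation hazard, SCHADEX-style:
# draw S between 0 (saturated) and Smax (dry) independently for each event.
random.seed(0)
SMAX = 120.0  # mm; the single catchment parameter (assumed value)
runoffs = [scs_runoff(random.expovariate(1 / 40.0), random.uniform(0.0, SMAX))
           for _ in range(100000)]
print(round(sum(runoffs) / len(runoffs), 1))  # mean event runoff, mm
```

Repeating the draw many times yields the runoff distribution whose upper tail SCHADEX uses for extreme flood estimation.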
Real-time modeling and simulation of distribution feeder and distributed resources
NASA Astrophysics Data System (ADS)
Singh, Pawan
The analysis of electrical systems dates back to the days when analog network analyzers were used. With the advent of digital computers, many programs were written for power-flow and short-circuit analysis to improve the electrical system. Real-time computer simulations can answer many what-if scenarios in an existing or proposed power system. In this thesis, the standard IEEE 13-node distribution feeder is developed and validated on the real-time platform OPAL-RT. The concept and the challenges of real-time simulation are studied and addressed. Distributed energy resources, including commonly used distributed generation and storage devices such as a diesel engine, a solar photovoltaic array, and a battery storage system, are modeled and simulated on the real-time platform. A microgrid encompasses a portion of an electric power distribution system located downstream of the distribution substation. Normally, the microgrid operates in parallel with the grid; however, scheduled or forced isolation can take place. In such conditions, the microgrid must have the ability to operate stably and autonomously. The microgrid can operate in grid-connected and islanded modes; both operating modes are studied in the last chapter. Towards the end, a simple microgrid controller for energy management and protection, modeled and simulated on the real-time platform, is developed.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
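The scalar case of the unscented transform described above can be sketched as follows; a generic quadratic stands in for the EOL simulation, and the sigma-point weights follow the standard κ formulation:

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through a nonlinear f using
    2n + 1 = 3 sigma points; returns the approximate output mean and variance."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    pts = [mean, mean + spread, mean - spread]
    w0 = kappa / (n + kappa)              # weight of the central point
    wi = 1.0 / (2.0 * (n + kappa))        # weight of each outer point
    weights = [w0, wi, wi]
    ys = [f(x) for x in pts]              # only 3 "simulations" needed
    my = sum(w * y for w, y in zip(weights, ys))
    vy = sum(w * (y - my) ** 2 for w, y in zip(weights, ys))
    return my, vy

# Stand-in for an EOL simulation: a nonlinear map from state to remaining life.
f = lambda x: x ** 2 + 1.0
m, v = unscented_transform(2.0, 0.25, f)
print(m, v)  # → 5.25 4.125 (exact for this quadratic with kappa = 2)
```

For a quadratic transformation of a scalar Gaussian with κ = 2, the three sigma points reproduce the exact output mean and variance — three deterministic simulations replace a Monte Carlo ensemble, which is the efficiency argument the paper makes.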
Modeled ground water age distributions
Woolfenden, Linda R.; Ginn, Timothy R.
2009-01-01
The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity; this is likely a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass-balanced and stable, demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.
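For context, the age equation of Ginn (1999) referenced above augments the advection-dispersion equation with an age coordinate $a$; a commonly quoted form (reproduced here as a sketch from the general literature, not verbatim from this paper) is

```latex
\frac{\partial(\theta f)}{\partial t}
  + \frac{\partial(\theta f)}{\partial a}
  + \nabla \cdot (\mathbf{q} f)
  - \nabla \cdot (\theta \mathbf{D} \nabla f) = 0,
```

where $f(\mathbf{x}, a, t)$ is the water mass density distributed over age, $\theta$ the water content, $\mathbf{q}$ the Darcy flux, and $\mathbf{D}$ the dispersion tensor; the conventional resident quantity is recovered by integrating $f$ over all ages.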
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Weltzien, Ingunn
2016-04-01
The traditional catchment hydrological model, with its many free calibration parameters, is not a well-suited tool for prediction under conditions for which it has not been calibrated. Important tasks for hydrological modelling, such as prediction in ungauged basins and assessing the hydrological effects of climate change, are hence not solved satisfactorily. In order to reduce the number of calibration parameters in hydrological models, we have introduced a new model which uses a dynamic gamma distribution as the spatial frequency distribution of snow water equivalent (SWE). The parameters are estimated from the observed spatial variability of precipitation and the magnitude of accumulation and melting events, and are hence not subject to calibration. The relationship between the spatial mean and variance of precipitation is found to follow a pattern where decreasing temporal correlation with increasing accumulation or duration of the event leads to a levelling off, or even a decrease, of the spatial variance. The new snow distribution model is implemented in the already parameter-parsimonious DDD (Distance Distribution Dynamics) hydrological model and was tested for 71 Norwegian catchments. We compared the new snow distribution model with the current operational snow distribution model, in which a fixed, calibrated coefficient of variation parameterizes a log-normal model for snow distribution. Results show that the precision of runoff simulations is comparable, but that the new snow distribution model better simulates snow-covered area (SCA) when compared with MODIS satellite-derived snow cover. In addition, SWE is simulated more realistically: seasonal snow is melted out, the building up of "snow towers" is prevented, and spurious trends in SWE are hence avoided.
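A minimal sketch of how a gamma-distributed SWE field translates into snow-covered area after a melt event (the shape, mean, and melt values here are invented; the paper derives the gamma parameters from observed accumulation and melt statistics):

```python
import random

def simulate_sca(alpha, mean_swe, melt, n=200000, seed=42):
    """Snow-covered area after a uniform melt event: the fraction of a
    gamma-distributed SWE field still exceeding the melt depth (cm w.e.)."""
    rng = random.Random(seed)
    beta = mean_swe / alpha  # scale chosen so that E[SWE] = mean_swe
    survivors = sum(1 for _ in range(n)
                    if rng.gammavariate(alpha, beta) > melt)
    return survivors / n

# A low shape parameter means high spatial variability, so SCA drops
# quickly with melt; a high shape parameter keeps the catchment snow-covered.
print(round(simulate_sca(alpha=0.5, mean_swe=30.0, melt=10.0), 2))
print(round(simulate_sca(alpha=4.0, mean_swe=30.0, melt=10.0), 2))
```

This is the mechanism by which a dynamic spatial SWE distribution controls the simulated SCA that the study compares against MODIS snow cover.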
NASA Astrophysics Data System (ADS)
Caviedes-Voullième, Daniel; García-Navarro, Pilar; Murillo, Javier
2012-07-01
Hydrological simulation of rain-runoff processes is often performed with lumped models which rely on calibration to generate storm hydrographs and study catchment response to rain. In this paper, a distributed, physically-based numerical model is used for runoff simulation in a mountain catchment. This approach offers two advantages. The first is that by using shallow-water equations for runoff flow, there is less freedom to calibrate routing parameters (as compared to, for example, synthetic hydrograph methods). The second is that spatial distributions of water depth and velocity can be obtained. Furthermore, interactions among the various hydrological processes can be modeled in a physically-based approach which may depend on transient and spatially distributed factors. On the other hand, the numerical approach undertaken relies on accurate terrain representation and mesh selection, which also significantly affects the computational cost of the simulations. Hence, we investigate the response of a gauged catchment with this distributed approach. The methodology consists of analyzing the effects that the mesh has on the simulations by using a range of meshes. Next, friction is applied to the model and the response to variations and interaction with the mesh is studied. Finally, a first approach with the well-known SCS Curve Number method is studied to evaluate its behavior when coupled with a shallow-water model for runoff flow. The results show that mesh selection is of great importance, since it may affect the results in a magnitude as large as physical factors such as friction. Furthermore, results proved to be less sensitive to roughness spatial distribution than to mesh properties. Finally, the results indicate that SCS-CN may not be suitable for simulating hydrological processes together with a shallow-water model.
The Osseus platform: a prototype for advanced web-based distributed simulation
NASA Astrophysics Data System (ADS)
Franceschini, Derrick; Riecken, Mark
2016-05-01
Recent technological advances in web-based distributed computing and database technology have made possible a deeper and more transparent integration of some modeling and simulation applications. Despite these advances towards true integration of capabilities, disparate systems, architectures, and protocols will remain in the inventory for some time to come. These disparities present interoperability challenges for distributed modeling and simulation, whether the application is training, experimentation, or analysis. Traditional approaches call for building gateways to bridge between disparate protocols and retaining interoperability specialists. Challenges in reconciling data models also persist. These challenges and their traditional mitigation approaches directly contribute to higher costs, schedule delays, and frustration for the end users. Osseus is a prototype software platform originally funded as a research project by the Defense Modeling & Simulation Coordination Office (DMSCO) to examine interoperability alternatives using modern, web-based technology and taking inspiration from the commercial sector. Osseus provides tools and services for nonexpert users to connect simulations, reducing the time and skillset needed to successfully connect disparate systems. The Osseus platform presents a web services interface that allows simulation applications to exchange data efficiently over Local or Wide Area Networks using modern techniques. Further, it provides Service Oriented Architecture capabilities such that finer-granularity components, such as individual models, can contribute to a simulation with minimal effort.
Gray, John P; Ludwig, Brad; Temple, Jack; Melby, Michael; Rough, Steve
2013-08-01
The results of a study to estimate the human resource and cost implications of changing the medication distribution model at a large medical center are presented. A two-part study was conducted to evaluate alternatives to the hospital's existing hybrid distribution model (64% of doses dispensed via cart fill and 36% via automated dispensing cabinets [ADCs]). An assessment of nurse, pharmacist, and pharmacy technician workloads within the hybrid system was performed through direct observation, with time standards calculated for each dispensing task; similar time studies were conducted at a comparator hospital with a decentralized medication distribution system involving greater use of ADCs. The time study data were then used in simulation modeling of alternative distribution scenarios: one involving no use of cart fill, one involving no use of ADCs, and one heavily dependent on ADC dispensing (89% via ADC and 11% via cart fill). Simulation of the base-case and alternative scenarios indicated that as the modeled percentage of doses dispensed from ADCs rose, the calculated pharmacy technician labor requirements decreased, with a proportionately greater increase in the nursing staff workload. Given that nurses are a higher-cost resource than pharmacy technicians, the projected human resource opportunity cost of transitioning from the hybrid system to a decentralized system similar to the comparator facility's was estimated at $229,691 per annum. Based on the simulation results, it was decided that a transition from the existing hybrid medication distribution system to a more ADC-dependent model would result in an unfavorable shift in staff skill mix and corresponding human resource costs at the medical center.
Building better water models using the shape of the charge distribution of a water molecule
NASA Astrophysics Data System (ADS)
Dharmawardhana, Chamila Chathuranga; Ichiye, Toshiko
2017-11-01
The unique properties of liquid water apparently arise from more than just the tetrahedral bond angle between the nuclei of a water molecule since simple three-site models of water are poor at mimicking these properties in computer simulations. Four- and five-site models add partial charges on dummy sites and are better at modeling these properties, which suggests that the shape of charge distribution is important. Since a multipole expansion of the electrostatic potential describes a charge distribution in an orthogonal basis set that is exact in the limit of infinite order, multipoles may be an even better way to model the charge distribution. In particular, molecular multipoles up to the octupole centered on the oxygen appear to describe the electrostatic potential from electronic structure calculations better than four- and five-site models, and molecular multipole models give better agreement with the temperature and pressure dependence of many liquid state properties of water while retaining the computational efficiency of three-site models. Here, the influence of the shape of the molecular charge distribution on liquid state properties is examined by correlating multipoles of non-polarizable water models with their liquid state properties in computer simulations. This will aid in the development of accurate water models for classical simulations as well as in determining the accuracy needed in quantum mechanical/molecular mechanical studies and ab initio molecular dynamics simulations of water. More fundamentally, this will lead to a greater understanding of how the charge distribution of a water molecule leads to the unique properties of liquid water. In particular, these studies indicate that p-orbital charge out of the molecular plane is important.
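As a reminder of the notation involved (standard electrostatics, not specific to this paper), the multipole expansion of the electrostatic potential of a charge distribution is

```latex
V(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}
  \left[
    \frac{q}{r}
    + \frac{\mathbf{p}\cdot\hat{\mathbf{r}}}{r^{2}}
    + \frac{1}{2}\sum_{i,j} Q_{ij}\,\frac{\hat{r}_i \hat{r}_j}{r^{3}}
    + \cdots
  \right],
```

where $q$ is the total charge, $\mathbf{p}$ the dipole moment, and $Q_{ij}$ the quadrupole moment tensor; truncating one order later, after the octupole term, corresponds to the molecular multipole models discussed above.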
Parallel discrete event simulation: A shared memory approach
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.
1987-01-01
With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
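The causality rule at the core of the Chandy-Misra algorithm can be sketched briefly: a logical process may consume an event only when every input channel guarantees that no earlier event can still arrive, and null messages advance an idle channel's clock to keep the simulation from deadlocking. The two-channel toy below is a hypothetical illustration of that rule, not the paper's experimental code.

```python
import heapq
from collections import deque

# Conservative (Chandy-Misra style) event consumption at a logical process
# with two input channels: events are safe to process only up to the minimum
# channel clock, and "null" messages advance an idle channel's clock so the
# receiver is never blocked forever.

class Channel:
    def __init__(self):
        self.events = deque()   # (timestamp, payload) in send order
        self.clock = 0.0        # lower bound on all future timestamps

    def send(self, t, payload=None):   # payload=None encodes a null message
        self.events.append((t, payload))
        self.clock = t

def safe_events(channels):
    """Yield events, in global timestamp order, that are provably safe."""
    out = []
    while True:
        horizon = min(ch.clock for ch in channels)   # safe-time bound
        progressed = False
        for ch in channels:
            while ch.events and ch.events[0][0] <= horizon:
                t, payload = ch.events.popleft()
                if payload is not None:              # discard null messages
                    heapq.heappush(out, (t, payload))
                progressed = True
        if not progressed:
            break
    while out:
        yield heapq.heappop(out)

a, b = Channel(), Channel()
a.send(1.0, "a1"); a.send(4.0, "a2")
b.send(2.0, "b1"); b.send(3.0, None)   # null message: "nothing until t=3"
processed = list(safe_events([a, b]))
print(processed)   # only events up to min(clock_a, clock_b) = 3.0 are safe
```

Event `a2` at t = 4.0 stays queued because channel `b` only guarantees quiet until t = 3.0; a further null message from `b` would release it.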
Application of simulation models for the optimization of business processes
NASA Astrophysics Data System (ADS)
Jašek, Roman; Sedláček, Michal; Chramcov, Bronislav; Dvořák, Jiří
2016-06-01
The paper deals with the application of modeling and simulation tools to the optimization of business processes, specifically the optimization of signal flow in a security company. Simul8 was selected as the modeling tool; it performs process modeling based on discrete event simulation and enables the creation of visual models of production and distribution processes.
A "total parameter estimation" method in the verification of distributed hydrological models
NASA Astrophysics Data System (ADS)
Wang, M.; Qin, D.; Wang, H.
2011-12-01
Hydrological models are conventionally used for runoff or flood forecasting, so model parameters are commonly estimated from discharge measurements at the catchment outlet. With advances in hydrological science and computer technology, distributed hydrological models based on physical mechanisms, such as SWAT, MIKE SHE, and WEP, have gradually become the mainstream models in hydrology. However, the assessment of distributed hydrological models and the determination of their parameters still rely on runoff and, occasionally, groundwater level measurements. In many countries, including China, it is essential to understand the local and regional water cycle: not only do we need to simulate the runoff generation process for flood forecasting in wet areas, we also need to grasp the water cycle pathways and the consumption and transformation processes in arid and semi-arid regions for conservation and integrated water resources management. Because a distributed hydrological model simulates the physical processes within a catchment, it can provide a more realistic representation of the actual water cycle. Runoff, however, is the combined result of various hydrological processes, so using runoff alone for parameter estimation is inherently problematic and makes accuracy difficult to assess. In particular, in arid areas such as the Haihe River Basin in China, runoff accounts for only 17% of rainfall and is concentrated in the rainy season from June to August; during other months, many of the perennial rivers within the basin dry up. Thus a single runoff simulation does not fully exploit a distributed hydrological model in arid and semi-arid regions.
This paper proposes a "total parameter estimation" method to verify distributed hydrological models across multiple water cycle processes, including runoff, evapotranspiration, groundwater, and soil water, and applies it to the Haihe River basin in China. The application results demonstrate that this comprehensive testing method is very useful in the development of a distributed hydrological model and provides a new way of thinking in hydrological sciences.
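One way to read "total parameter estimation" operationally is as a calibration objective that scores a parameter set against several water-cycle variables at once rather than runoff alone. The sketch below, with synthetic data, equal weights, and a normalized RMSE per process, is an illustrative assumption, not the paper's actual formulation.

```python
import math

# Sketch of a "total parameter estimation" objective: instead of scoring a
# model on runoff alone, combine normalized errors over several water-cycle
# processes. Synthetic data and equal weights are illustrative assumptions.

def nrmse(sim, obs):
    """Root-mean-square error normalized by the observed mean."""
    mse = sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)
    return math.sqrt(mse) / (sum(obs) / len(obs))

def total_objective(simulated, observed, weights=None):
    """Weighted sum of per-process NRMSE; 0 would be a perfect fit."""
    processes = observed.keys()
    weights = weights or {p: 1.0 / len(observed) for p in processes}
    return sum(weights[p] * nrmse(simulated[p], observed[p]) for p in processes)

observed = {
    "runoff":      [12.0, 30.0, 8.0, 4.0],
    "evapotransp": [55.0, 60.0, 48.0, 40.0],
    "groundwater": [3.2, 3.1, 3.0, 2.9],
    "soil_water":  [0.21, 0.24, 0.19, 0.18],
}
simulated = {
    "runoff":      [10.0, 33.0, 9.0, 5.0],
    "evapotransp": [53.0, 63.0, 50.0, 38.0],
    "groundwater": [3.3, 3.0, 3.1, 2.8],
    "soil_water":  [0.20, 0.25, 0.20, 0.17],
}
score = total_objective(simulated, observed)
print(f"total objective: {score:.3f}")
```

A parameter set that fits runoff well but misuses evapotranspiration or soil water is penalized here, which is the point of verifying against the full water cycle.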
NASA Astrophysics Data System (ADS)
Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.
2011-12-01
Distributed hydrological models are important tools in water management as they account for the spatial variability of hydrological data and can produce spatially distributed outputs; they can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem that such models force us to face: a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate and validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte-Carlo simulations, and the region containing the parameter sets considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. The discharge data from five internal sub-basins were used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? Also, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.
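The Monte-Carlo screening step described above can be sketched as follows; the toy three-parameter model, the synthetic observations, and the behavioral threshold are all illustrative stand-ins for WASMOD and the real discharge data.

```python
import random

# Monte-Carlo exploration of a three-parameter model's space, retaining the
# "behavioral" parameter sets whose Nash-Sutcliffe efficiency (NSE) against
# observed discharge exceeds a threshold. The toy model, synthetic data,
# and threshold are illustrative assumptions, not WASMOD itself.

random.seed(42)

def toy_model(params, rain):
    a, b, c = params
    return [a * r + b * r * r + c for r in rain]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, <= 0 is worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    denom = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sum((s - o) ** 2 for s, o in zip(sim, obs)) / denom

rain = [1.0, 2.0, 5.0, 3.0, 0.5, 4.0]
obs = toy_model((0.6, 0.05, 0.2), rain)       # synthetic "observations"

behavioral = [
    p for p in (
        (random.uniform(0, 1), random.uniform(0, 0.1), random.uniform(0, 1))
        for _ in range(5000)
    )
    if nse(toy_model(p, rain), obs) > 0.5     # assumed behavioral threshold
]
# Many distinct parameter sets pass the threshold: equifinality in action.
print(f"{len(behavioral)} of 5000 sampled parameter sets are behavioral")
```

The spread of the retained sets is exactly what the alpha-shape construction in the paper is used to delimit.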
Design of object-oriented distributed simulation classes
NASA Technical Reports Server (NTRS)
Schoeffler, James D. (Principal Investigator)
1995-01-01
Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is a need for communication among the parallel executing processors, which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out, with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented.
Its application to realistic configurations has not been carried out.
Methods and tools for profiling and control of distributed systems
NASA Astrophysics Data System (ADS)
Sukharev, R.; Lukyanchikov, O.; Nikulchev, E.; Biryukov, D.; Ryadchikov, I.
2018-02-01
This article is devoted to the profiling and control of distributed systems. Distributed systems have a complex architecture: applications are distributed among various computing nodes, and many network operations are performed. It is therefore important to develop methods and tools for profiling distributed systems. The article analyzes and standardizes profiling methods that focus on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems that receive and process user requests. To automate this profiling method, a software application with a modular structure, similar to a SCADA system, was developed.
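The queueing-network machinery referred to above can be illustrated on the smallest case: a single M/M/1 queue simulated with the Lindley recursion and checked against the closed-form mean waiting time. The rates below are illustrative.

```python
import random

# Queueing-network analysis in miniature: simulate an M/M/1 queue via the
# Lindley recursion and compare the mean wait in queue with the analytic
# value Wq = lambda / (mu * (mu - lambda)). Rates are illustrative.

random.seed(7)
lam, mu = 0.5, 1.0            # arrival and service rates (utilization 0.5)
N = 50_000                    # customers to simulate

w, waits = 0.0, []
for _ in range(N):
    waits.append(w)                            # wait of the current customer
    service = random.expovariate(mu)
    interarrival = random.expovariate(lam)
    w = max(0.0, w + service - interarrival)   # Lindley recursion

mean_wait = sum(waits) / N
analytic = lam / (mu * (mu - lam))
print(f"simulated Wq = {mean_wait:.2f}, analytic Wq = {analytic:.2f}")
```

Profiling a real distributed system amounts to composing many such nodes into a network and feeding the model measured service-time and request-routing data.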
Finite Element Aircraft Simulation of Turbulence
DOT National Transportation Integrated Search
1997-02-01
A Simulation of Rotor Blade Element Turbulence (SORBET) model has been developed for realtime aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to c...
NASA Technical Reports Server (NTRS)
Yamakov, V.; Saether, E.; Phillips, D.; Glaessgen, E. H.
2004-01-01
In this paper, a multiscale modelling strategy is used to study the effect of grain-boundary sliding on stress localization in a polycrystalline microstructure with an uneven distribution of grain size. The development of the molecular dynamics (MD) analysis used to interrogate idealized grain microstructures with various types of grain boundaries and the multiscale modelling strategies for modelling large systems of grains is discussed. Both molecular-dynamics and finite-element (FE) simulations for idealized polycrystalline models of identical geometry are presented with the purpose of demonstrating the effectiveness of the adapted finite-element method using cohesive zone models to reproduce grain-boundary sliding and its effect on the stress distribution in a polycrystalline metal. The yield properties of the grain-boundary interface, used in the FE simulations, are extracted from a MD simulation on a bicrystal. The models allow for the study of the load transfer between adjacent grains of very different size through grain-boundary sliding during deformation. A large-scale FE simulation of 100 grains of a typical microstructure is then presented to reveal that the stress distribution due to grain-boundary sliding during uniform tensile strain can lead to stress localization of two to three times the background stress, thus suggesting a significant effect on the failure properties of the metal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chao; Singh, Vijay P.; Mishra, Ashok K.
2013-02-06
This paper presents an improved bivariate mixed distribution, which is capable of modeling the dependence of daily rainfall from two distinct sources (e.g., rainfall from two stations, two consecutive days, or two instruments such as satellite and rain gauge). The distribution couples an existing framework for building a bivariate mixed distribution, the theory of copulae, and a hybrid marginal distribution. Contributions of the improved distribution are twofold. One is the appropriate selection of the bivariate dependence structure from a wider admissible choice (10 candidate copula families). The other is the introduction of a marginal distribution capable of better representing low to moderate values as well as extremes of daily rainfall. Among several applications of the improved distribution, particularly presented here is its utility for single-site daily rainfall simulation. Rather than simulating rainfall occurrences and amounts separately, the developed generator unifies the two processes by generalizing daily rainfall as a Markov process with autocorrelation described by the improved bivariate mixed distribution. The generator is first tested on a sample station in Texas. Results reveal that the simulated and observed sequences are in good agreement with respect to essential characteristics. Then, extensive simulation experiments are carried out to compare the developed generator with three other alternative models: the conventional two-state Markov chain generator, the transition probability matrix model, and the semi-parametric Markov chain model with kernel density estimation for rainfall amounts. Analyses establish that overall the developed generator is capable of reproducing characteristics of historical extreme rainfall events and is apt at extrapolating rare values beyond the upper range of available observed data. Moreover, it automatically captures the persistence of rainfall amounts on consecutive wet days in a relatively natural and easy way. Another interesting observation is that the recognized 'overdispersion' problem in daily rainfall simulation is ascribed more to the loss of rainfall extremes than to the under-representation of first-order persistence. The developed generator appears to be a sound option for daily rainfall simulation, especially in hydrologic planning situations where rare rainfall events are of great importance.
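For contrast with the copula-based model, the conventional two-state Markov chain generator that the study uses as a baseline can be sketched in a few lines; the transition probabilities and the exponential wet-day amounts are assumed values for illustration.

```python
import random

# A minimal single-site daily rainfall generator: a two-state (wet/dry)
# Markov chain for occurrence with exponentially distributed wet-day
# amounts. This is the simple baseline the study compares against, not
# the copula-based mixed distribution; all parameter values are assumed.

random.seed(1)
P_WET_GIVEN_DRY = 0.25   # transition probabilities
P_WET_GIVEN_WET = 0.60
MEAN_WET_AMOUNT = 8.0    # mm on wet days

def generate(n_days):
    series, wet = [], False
    for _ in range(n_days):
        p = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
        wet = random.random() < p
        series.append(random.expovariate(1.0 / MEAN_WET_AMOUNT) if wet else 0.0)
    return series

rain = generate(100_000)
wet_frac = sum(r > 0 for r in rain) / len(rain)
# Stationary wet probability: p01 / (p01 + 1 - p11) = 0.25 / 0.65 ~ 0.385
print(f"simulated wet fraction: {wet_frac:.3f}")
```

Because occurrence and amounts are decoupled here, this baseline misses the wet-day amount persistence that the improved bivariate distribution captures automatically.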
NASA Technical Reports Server (NTRS)
Mann, G. W.; Carslaw, K. S.; Reddington, C. L.; Pringle, K. J.; Schulz, M.; Asmi, A.; Spracklen, D. V.; Ridley, D. A.; Woodhouse, M. T.; Lee, L. A.;
2014-01-01
Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. 
Overall, the multimodel-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
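Modal aerosol schemes of the kind compared here commonly represent the number size distribution as a sum of lognormal modes. The sketch below evaluates dN/dlnD for two assumed modes (Aitken and accumulation; all parameters illustrative) and checks by numerical integration that the modes sum to the prescribed total number.

```python
import math

# Aerosol number size distribution as a sum of lognormal modes (a common
# representation in modal microphysics schemes). Mode parameters below
# (number N in cm^-3, median diameter Dg in um, geometric std sigma_g)
# are illustrative values, not any particular model's.

MODES = [
    {"N": 1500.0, "Dg": 0.03, "sigma_g": 1.6},   # Aitken mode
    {"N": 800.0,  "Dg": 0.15, "sigma_g": 1.5},   # accumulation mode
]

def dN_dlnD(D):
    """Number concentration density at diameter D (um)."""
    total = 0.0
    for m in MODES:
        ln_sig = math.log(m["sigma_g"])
        total += (m["N"] / (math.sqrt(2 * math.pi) * ln_sig)
                  * math.exp(-(math.log(D / m["Dg"]) ** 2) / (2 * ln_sig ** 2)))
    return total

# Integrate over ln D numerically; this should recover N1 + N2 = 2300.
steps = 4000
lo, hi = math.log(1e-3), math.log(10.0)
dx = (hi - lo) / steps
total = sum(dN_dlnD(math.exp(lo + (i + 0.5) * dx)) for i in range(steps)) * dx
print(f"integrated number: {total:.1f} cm^-3")
```

The seasonal biases reported above (e.g. accumulation-mode number too low in winter) correspond to errors in the N and Dg of these modes rather than in the functional form.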
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelley, B. M.
The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiency and reliability, they may also introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of these controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand the problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.
Minimization of Blast furnace Fuel Rate by Optimizing Burden and Gas Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Chenn Zhou
2012-08-15
The goal of the research is to improve the competitive edge of steel mills by using advanced CFD technology to optimize the gas and burden distributions inside a blast furnace for the best gas utilization. A state-of-the-art 3-D CFD model has been developed for simulating the gas distribution inside a blast furnace at given burden conditions, burden distribution, and blast parameters. The comprehensive 3-D CFD model has been validated by plant measurement data from an actual blast furnace, and validation of the sub-models is also achieved. A user-friendly software package named Blast Furnace Shaft Simulator (BFSS) has been developed to simulate the blast furnace shaft process. The research has significant benefits to the steel industry: high productivity, low energy consumption, and an improved environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broderick, Robert; Quiroz, Jimmy; Grijalva, Santiago
2014-07-15
Matlab Toolbox for simulating the impact of solar energy on the distribution grid. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving GridPV Toolbox information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulations functions are included to show potential uses of the toolbox functions.
NASA Astrophysics Data System (ADS)
Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio
2018-03-01
We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.
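The sieving idea at the heart of the model can be reduced to a minimal sketch: draw particle and pore diameters from normal distributions and keep the particles that fit. All parameters are assumptions for illustration; the actual model also resolves Brownian dynamics and filtration kinetics.

```python
import random

# Minimal sketch of membrane sieving: particles drawn from a normal feed
# size distribution encounter pores whose diameters are also normally
# distributed, and pass only if they fit. Parameters are illustrative;
# the study's model additionally resolves Brownian motion and kinetics.

random.seed(3)
FEED_MEAN, FEED_SD = 1.0, 0.3     # particle diameters (um)
PORE_MEAN, PORE_SD = 1.2, 0.2     # pore diameters (um)

def attempt_passage(n_particles):
    filtrate = []
    for _ in range(n_particles):
        d = max(1e-6, random.gauss(FEED_MEAN, FEED_SD))
        pore = max(1e-6, random.gauss(PORE_MEAN, PORE_SD))
        if d < pore:                 # particle fits the pore it encounters
            filtrate.append(d)
    return filtrate

filtrate = attempt_passage(100_000)
mean_filtrate = sum(filtrate) / len(filtrate)
print(f"filtrate mean {mean_filtrate:.3f} um vs feed mean {FEED_MEAN:.3f} um")
```

The filtrate distribution is shifted toward smaller diameters than the feed, which is the qualitative dependence the empirical correlations in the study quantify.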
NASA Astrophysics Data System (ADS)
Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei
2017-09-01
The 3D model is an important part of simulated remote sensing for Earth observation. Given the small spatial extent handled by the DART software, both the detail of the model itself and the number of distributed model instances have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, and building on previous studies of model precision, this paper studied the effect of the P. australis model on canopy NDVI, mainly through the cell dimension of the DART software and the density distribution of the P. australis model in the scene, as well as the choice of model density given the computer running-time cost of an actual simulation. The DART cell dimensions and the scene model density were set using the optimal-precision model from existing research results. The NDVI simulation results for different model densities under different cell dimensions were subjected to error analysis. By studying the relationship between relative error, absolute error, and time cost, we established a density selection method for the P. australis model in the simulation of small-scale scenes. Experiments showed that the number of P. australis plants in the simulated scene need not match that in the real environment, owing to differences between the 3D model and real scenarios; the best balance of simulation accuracy and visual effect was obtained by keeping the density at about 40 plants per square meter.
Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Churmakov, D. Y.; Meglinski, I. V.; Piletsky, S. A.; Greenhalgh, D. A.
2003-07-01
A novel Monte Carlo technique of simulation of spatial fluorescence distribution within the human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which would arise due to the structure of collagen fibres, compared to the epidermis and stratum corneum where the distribution of fluorophores is assumed to be homogeneous. The results of simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an 'effective' depth.
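A bare-bones version of the underlying photon-transport Monte Carlo can be sketched as follows: photons take exponentially distributed steps between interaction events and are absorbed with probability mu_a / (mu_a + mu_s) at each event. The optical coefficients are illustrative, not actual skin values, and real tissue codes add anisotropic scattering and layered optical properties.

```python
import random

# Minimal Monte Carlo sketch of photon migration in a semi-infinite turbid
# medium: exponentially distributed free paths, absorption with probability
# mu_a / (mu_a + mu_s) per interaction, isotropic rescattering otherwise.
# Coefficients are illustrative, not actual skin values.

random.seed(11)
MU_A, MU_S = 0.3, 10.0        # absorption / scattering coefficients, mm^-1
MU_T = MU_A + MU_S

def absorption_depth():
    """Depth (mm) at which a photon launched straight down is absorbed."""
    z, cos_theta = 0.0, 1.0
    while True:
        z += random.expovariate(MU_T) * cos_theta
        if z < 0:
            return None                # escaped back through the surface
        if random.random() < MU_A / MU_T:
            return z                   # absorbed (could excite a fluorophore)
        cos_theta = random.uniform(-1.0, 1.0)   # isotropic rescattering

depths = [d for d in (absorption_depth() for _ in range(20_000)) if d is not None]
mean_depth = sum(depths) / len(depths)
print(f"mean absorption depth: {mean_depth:.2f} mm from {len(depths)} photons")
```

A fluorescence model runs this transport twice: once for the excitation light down to the fluorophore, and once for the emitted photon back toward the detector.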
Particle yields from numerical simulations
NASA Astrophysics Data System (ADS)
Homor, Marietta M.; Jakovác, Antal
2018-04-01
In this paper we use numerical field theoretical simulations to calculate particle yields. We demonstrate that in the model of local particle creation the deviation from the pure exponential distribution is natural even in equilibrium, and an approximate Tsallis-Pareto-like distribution function can be well fitted to the calculated yields, in accordance with the experimental observations. We present numerical simulations in the classical Φ4 model as well as in the SU(3) quantum Yang-Mills theory to clarify this issue.
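The deviation from a pure exponential spectrum mentioned above is what the Tsallis-Pareto form captures: it reduces to a Boltzmann exponential as q approaches 1 and develops a power-law tail for q > 1. The parameter values below are illustrative.

```python
import math

# The Tsallis-Pareto form interpolates between an exponential spectrum
# (q -> 1) and a power-law tail (q > 1). Parameter values are illustrative,
# not fitted to the paper's lattice data.

def tsallis(E, T=1.0, q=1.1):
    """Unnormalized Tsallis-Pareto yield at energy E."""
    return (1.0 + (q - 1.0) * E / T) ** (-1.0 / (q - 1.0))

def boltzmann(E, T=1.0):
    """Pure exponential (Boltzmann) yield for comparison."""
    return math.exp(-E / T)

# At low energy the two agree; at high energy the Tsallis tail is heavier.
for E in (0.1, 1.0, 10.0):
    print(f"E={E:4}: Tsallis {tsallis(E):.3e}  exponential {boltzmann(E):.3e}")
```

Fitting this two-parameter form to simulated yields, as the paper does, quantifies how far the spectrum departs from pure exponential even in equilibrium.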
A Process for Comparing Dynamics of Distributed Space Systems Simulations
NASA Technical Reports Server (NTRS)
Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.
2009-01-01
The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.
On validating remote sensing simulations using coincident real data
NASA Astrophysics Data System (ADS)
Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan
2016-05-01
The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has become possible as scene rendering technology and software have advanced. This in turn has raised questions about the validity of such complex models, with phenomena such as multiple scattering and the bidirectional reflectance distribution function (BRDF) potentially impacting results for complex vegetation scenes. We selected three sites located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes, using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites then were generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests used a distribution-comparison approach for select spectral statistics (e.g., those establishing the spectra's shape) for each simulated-versus-real distribution pair. The initial comparison results indicated that the shapes of the spectra from the virtual and real sites were closely matched.
An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing
2002-08-01
simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems ... the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation ... MODEL CONTINUITY: Intelligent real time systems monitor, respond to, or control an external environment. This environment is connected to the digital
NASA Astrophysics Data System (ADS)
Li, J.
2017-12-01
Large-watershed flood simulation and forecasting is very important for the application of a distributed hydrological model, and it poses challenges including the effect of the model's spatial resolution on performance and accuracy. To cope with the spatial-resolution effect, the distributed hydrological model (the Liuxihe model) was built at resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m, and 200 m × 200 m, with the aim of finding the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data (digital elevation model, DEM), soil type, and land use type are downloaded freely from the web. The model parameters are optimized using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists in physically derived model parameters. The best spatial resolution for flood simulation and forecasting was 200 m × 200 m, and as the spatial resolution coarsened, model performance and accuracy worsened. At 1000 m × 1000 m the simulation and forecasting results were the worst, and the river channel network derived at that resolution differed from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold resolution for modeling the Liujiang River basin flood is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
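The PSO calibration step can be sketched in miniature; the code below minimizes a simple two-dimensional sphere function in place of a flood-simulation error metric, with standard (assumed) inertia and acceleration coefficients rather than the paper's improved variant.

```python
import random

# Minimal particle swarm optimization (PSO) sketch of the kind used to
# calibrate the Liuxihe model's parameters; here it minimizes a simple
# 2-D sphere function standing in for a flood-simulation error metric.
# Coefficients are textbook values, not the paper's improved variant.

random.seed(5)
DIM, N_PARTICLES, ITERS = 2, 30, 200
W, C1, C2 = 0.7, 1.5, 1.5    # inertia and acceleration coefficients

def objective(x):            # stand-in for a flood-simulation error metric
    return sum(xi * xi for xi in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N_PARTICLES)]
vel = [[0.0] * DIM for _ in range(N_PARTICLES)]
pbest = [p[:] for p in pos]                 # personal bests
gbest = min(pbest, key=objective)[:]        # global best

for _ in range(ITERS):
    for i in range(N_PARTICLES):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]

print(f"best objective after {ITERS} iterations: {objective(gbest):.2e}")
```

In the calibration setting, evaluating `objective` means running the hydrological model once per particle per iteration, which is why resolution strongly affects calibration cost.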
Simulating fail-stop in asynchronous distributed systems
NASA Technical Reports Server (NTRS)
Sabel, Laura; Marzullo, Keith
1994-01-01
The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.
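The paper's one-round protocol is not reproduced in the abstract. As a hypothetical illustration of the replication idea it relies on, a failure can be declared only when a strict majority of monitor replicas suspect the process, so any two successful declarations share a voter and cannot contradict each other:

```python
def declare_fail_stop(suspicions, n_monitors):
    """Declare a monitored process failed only when a strict majority of
    monitor replicas suspect it. This is an illustrative majority-vote
    rule, not the paper's simulated fail-stop protocol."""
    return 2 * sum(suspicions) > n_monitors

# One hypothetical round: each monitor reports whether its (unreliable)
# timeout on the monitored process expired.
votes = [True, True, True, False, False]
```

With five monitors, three suspicions suffice; two do not, which is exactly why unreliable individual timeouts alone cannot implement fail-stop.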
Distributed Observer Network (DON), Version 3.0, User's Guide
NASA Technical Reports Server (NTRS)
Mazzone, Rebecca A.; Conroy, Michael P.
2015-01-01
The Distributed Observer Network (DON) is a data presentation tool developed by the National Aeronautics and Space Administration (NASA) to distribute and publish simulation results. Leveraging the display capabilities inherent in modern gaming technology, DON places users in a fully navigable 3-D environment containing graphical models and allows the users to observe how those models evolve and interact over time in a given scenario. Each scenario is driven with data that has been generated by authoritative NASA simulation tools and exported in accordance with a published data interface specification. This decoupling of the data from the source tool enables DON to faithfully display a simulator's results and ensure that every simulation stakeholder will view the exact same information every time.
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. 
Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.
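A discriminative random field is beyond a short sketch; as a simplified stand-in for "realizations conditioned on measured borehole data", the following conditions a Gaussian random field on point observations (a kriging-style update). Grid, kernel length scale, and observation values are all illustrative assumptions.

```python
import numpy as np

def conditioned_realizations(x, obs_idx, obs_val, n_real=100, length=2.0, seed=0):
    """Draw Gaussian random-field realizations on grid `x` that honor the
    observations `obs_val` at indices `obs_idx` (simple kriging update)."""
    rng = np.random.default_rng(seed)
    # squared-exponential covariance between all grid points
    cov = np.exp(-((x[:, None] - x[None, :]) / length) ** 2)
    k_oo = cov[np.ix_(obs_idx, obs_idx)] + 1e-6 * np.eye(len(obs_idx))
    k_xo = cov[:, obs_idx]
    w = np.linalg.solve(k_oo, k_xo.T).T            # kriging weights
    mean = w @ np.asarray(obs_val, float)          # conditional mean
    cond_cov = cov - w @ k_xo.T                    # conditional covariance
    l = np.linalg.cholesky(cond_cov + 1e-6 * np.eye(len(x)))
    return mean + (l @ rng.standard_normal((len(x), n_real))).T
```

Each returned row is one equi-probable realization that passes (nearly) through the borehole values while varying freely away from them.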
Development and Testing of Protection Scheme for Renewable-Rich Distribution System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brahma, Sukumar; Ranade, Satish; Elkhatib, Mohamed E.
As the penetration of renewables increases in distribution systems, and microgrids are conceived with high penetration of such generation connecting through inverters, fault location and protection of microgrids need consideration. This report proposes averaged models that help simulate fault scenarios in renewable-rich microgrids, models for locating faults in such microgrids, and comments on the protection models that may be considered for microgrids. Simulation studies are reported to justify the models.
Simulation Models for the Electric Power Requirements in a Guideway Transit System
DOT National Transportation Integrated Search
1980-04-01
This report describes a computer simulation model developed at the Transportation Systems Center to study the electrical power distribution characteristics of Automated Guideway Transit (AGT) systems. The objective of this simulation effort is to pro...
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
Uncertainty of future projections of species distributions in mountainous regions.
Tang, Ying; Winkler, Julie A; Viña, Andrés; Liu, Jianguo; Zhang, Yuanbin; Zhang, Xiaofeng; Li, Xiaohong; Wang, Fang; Zhang, Jindong; Zhao, Zhiqiang
2018-01-01
Multiple factors introduce uncertainty into projections of species distributions under climate change. The uncertainty introduced by the choice of baseline climate information used to calibrate a species distribution model and to downscale global climate model (GCM) simulations to a finer spatial resolution is a particular concern for mountainous regions, as the spatial resolution of climate observing networks is often insufficient to detect the steep climatic gradients in these areas. Using the maximum entropy (MaxEnt) modeling framework together with occurrence data on 21 understory bamboo species distributed across the mountainous geographic range of the Giant Panda, we examined the differences in projected species distributions obtained from two contrasting sources of baseline climate information, one derived from spatial interpolation of coarse-scale station observations and the other derived from fine-spatial resolution satellite measurements. For each bamboo species, the MaxEnt model was calibrated separately for the two datasets and applied to 17 GCM simulations downscaled using the delta method. Greater differences in the projected spatial distributions of the bamboo species were observed for the models calibrated using the different baseline datasets than between the different downscaled GCM simulations for the same calibration. In terms of the projected future climatically-suitable area by species, quantification using a multi-factor analysis of variance suggested that the sum of the variance explained by the baseline climate dataset used for model calibration and the interaction between the baseline climate data and the GCM simulation via downscaling accounted for, on average, 40% of the total variation among the future projections. 
Our analyses illustrate that the combined use of gridded datasets developed from station observations and satellite measurements can help estimate the uncertainty introduced by the choice of baseline climate information to the projected changes in species distribution.
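The multi-factor analysis of variance behind the "40% of total variation" figure can be sketched as a classical two-way sum-of-squares partition over a (baseline dataset × GCM) table of projected suitable areas. With one value per cell, the interaction term absorbs the residual; the table values below are synthetic, not the paper's data.

```python
import numpy as np

def variance_fractions(y):
    """Partition the total variance of a (baseline x GCM) table of
    projections into main effects and their interaction (one replicate
    per cell, so the interaction absorbs the residual)."""
    grand = y.mean()
    a = y.mean(axis=1, keepdims=True) - grand      # baseline main effect
    b = y.mean(axis=0, keepdims=True) - grand      # GCM main effect
    inter = y - grand - a - b                      # interaction term
    ss_tot = ((y - grand) ** 2).sum()
    return {
        "baseline": (a ** 2).sum() * y.shape[1] / ss_tot,
        "gcm": (b ** 2).sum() * y.shape[0] / ss_tot,
        "interaction": (inter ** 2).sum() / ss_tot,
    }
```

The three fractions always sum to one, so "baseline + interaction" can be read directly, as in the abstract's 40% figure.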
Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek
2012-07-30
The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.
NASA Astrophysics Data System (ADS)
Han, Suyue; Chang, Gary Han; Schirmer, Clemens; Modarres-Sadeghi, Yahya
2016-11-01
We construct a reduced-order model (ROM) to study the Wall Shear Stress (WSS) distributions in image-based, patient-specific aneurysm models. The magnitude of WSS has been shown to be a critical factor in the growth and rupture of human aneurysms. We start the process by running a training case using a Computational Fluid Dynamics (CFD) simulation with time-varying flow parameters chosen so that they cover the range of parameters of interest. The method of snapshot Proper Orthogonal Decomposition (POD) is utilized to construct the reduced-order bases from the training CFD simulation. The resulting ROM enables us to study the flow patterns and the WSS distributions over a range of system parameters very efficiently with a relatively small number of modes. This enables comprehensive analysis of the model system across a range of physiological conditions without the need to re-run the simulation for small changes in the system parameters.
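Snapshot POD itself is standard and can be sketched directly: stack the training snapshots as columns, subtract the mean, and take the leading left singular vectors as spatial modes. The synthetic snapshot data below stands in for CFD output.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Snapshot POD: columns of `snapshots` are states at training times.
    Returns the mean field, leading spatial modes, and energy fractions."""
    mean = snapshots.mean(axis=1, keepdims=True)
    u, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    energy = s ** 2 / (s ** 2).sum()               # fraction of variance per mode
    return mean, u[:, :n_modes], energy[:n_modes]

def rom_reconstruct(state, mean, modes):
    """Project a state onto the reduced basis and reconstruct it."""
    coeffs = modes.T @ (state - mean.ravel())      # modal coefficients
    return mean.ravel() + modes @ coeffs
```

For data that truly lies in a low-dimensional subspace, a handful of modes reconstructs any snapshot essentially exactly, which is what makes the ROM cheap to evaluate across parameter ranges.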
Evaluating the spatial distribution of water balance in a small watershed, Pennsylvania
NASA Astrophysics Data System (ADS)
Yu, Zhongbo; Gburek, W. J.; Schwartz, F. W.
2000-04-01
A conceptual water-balance model was extended from a point application to a distributed one for evaluating the spatial distribution of the watershed water balance based on daily precipitation, temperature and other hydrological parameters. The model was calibrated by comparing simulated daily variation in soil moisture with field observations and with the results of another model that simulates vertical soil moisture flow by numerically solving Richards' equation. The impacts of soil and land use on the hydrological components of the water balance, such as evapotranspiration, soil moisture deficit, runoff and subsurface drainage, were evaluated with the calibrated model. Given the same meteorological conditions and land use, soil moisture deficit, evapotranspiration and surface runoff increase, and subsurface drainage decreases, as the available water capacity of the soil increases. Among the various land uses, alfalfa produced high soil moisture deficit and evapotranspiration and lower surface runoff and subsurface drainage, whereas soybeans produced the opposite trend. The simulated distribution of the various hydrological components shows the combined effect of soil and land use, and the simulated components compare well with observed data. The study demonstrated that the distributed water-balance approach is efficient and has advantages over the traditional practice of applying a single average value of hydrological variables at a single point.
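The conceptual model is not specified in the abstract; as a hedged sketch of the kind of single-bucket daily water balance such models use per grid cell, one might write the following. The runoff coefficient, initial storage, and forcing values are illustrative assumptions.

```python
def daily_water_balance(precip, pet, awc, runoff_coef=0.3):
    """Single-bucket daily water balance for one grid cell.

    precip, pet : daily precipitation and potential ET (mm)
    awc         : available water capacity of the soil (mm)
    Returns totals of (aet, runoff, drainage) and the final storage (mm).
    """
    storage = awc / 2.0                            # assumed initial soil moisture
    aet_t = run_t = drain_t = 0.0
    for p, e in zip(precip, pet):
        runoff = runoff_coef * p                   # quick surface runoff
        storage += p - runoff
        aet = min(e, storage)                      # ET limited by available moisture
        storage -= aet
        drainage = max(0.0, storage - awc)         # excess drains below the root zone
        storage -= drainage
        aet_t += aet
        run_t += runoff
        drain_t += drainage
    return aet_t, run_t, drain_t, storage
```

A useful check on any such scheme is closure: initial storage plus total precipitation must equal the sum of all outgoing fluxes plus final storage.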
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of normality of the response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if the distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
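The simulation idea can be sketched with a one-compartment oral-absorption model and log-normal between-subject variability; the parameter values below are illustrative assumptions, not the paper's two-stage models.

```python
import numpy as np

def simulate_log_auc_cmax(n_subjects=500, dose=100.0, seed=0):
    """Simulate concentration-time profiles from a one-compartment oral
    model with log-normal between-subject variability, then return
    per-subject log(AUC) (trapezoidal) and log(Cmax)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.25, 24.0, 48)                         # sampling times (h)
    ke = rng.lognormal(np.log(0.2), 0.3, n_subjects)        # elimination rate (1/h)
    ka = ke * rng.lognormal(np.log(5.0), 0.25, n_subjects)  # absorption rate, kept > ke
    v = rng.lognormal(np.log(30.0), 0.2, n_subjects)        # volume of distribution (L)
    conc = (dose * ka / (v * (ka - ke)))[:, None] * (
        np.exp(-ke[:, None] * t) - np.exp(-ka[:, None] * t))
    conc *= rng.lognormal(0.0, 0.1, conc.shape)             # multiplicative assay error
    auc = 0.5 * ((conc[:, 1:] + conc[:, :-1]) * np.diff(t)).sum(axis=1)
    return np.log(auc), np.log(conc.max(axis=1))
```

Histograms or normal Q-Q plots of the two returned samples are then the natural way to inspect tails and skewness.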
Analysis on flood generation processes by means of a continuous simulation model
NASA Astrophysics Data System (ADS)
Fiorentino, M.; Gioia, A.; Iacobellis, V.; Manfreda, S.
2006-03-01
In the present research, we exploited continuous hydrological simulation to investigate the key variables responsible for flood peak formation. With this purpose, a distributed hydrological model (DREAM) is used in cascade with a rainfall generator (IRP, Iterated Random Pulse) to simulate a large number of extreme events, providing insight into the main controls of flood generation mechanisms. The investigated variables are those used in theoretically derived probability distributions of floods based on the concept of partial contributing area (e.g. Iacobellis and Fiorentino, 2000). The continuous simulation model is used to investigate the hydrological losses occurring during extreme events, the variability of the source area contributing to the flood peak, and its lag-time. Results suggest interesting simplifications of the theoretical probability distribution of floods according to the different climatic and geomorphologic environments. The study is applied to two basins located in Southern Italy with different climatic characteristics.
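The DREAM/IRP cascade is far richer than an abstract can show; as a toy stand-in for the same workflow (stochastic rainfall, hydrological losses, partial contributing area, annual maxima), one might sketch:

```python
import numpy as np

def annual_flood_peaks(n_years=200, wet_prob=0.1, mean_depth=20.0,
                       area=100.0, frac_contrib=0.3, loss=10.0, seed=0):
    """Toy continuous simulation: random daily rainfall pulses, a constant
    infiltration loss, and a partial contributing area produce a sample of
    annual maximum flood peaks (arbitrary units). All parameter values
    are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    peaks = np.empty(n_years)
    for y in range(n_years):
        wet = rng.random(365) < wet_prob                  # which days rain
        depth = np.where(wet, rng.exponential(mean_depth, 365), 0.0)
        runoff = np.maximum(depth - loss, 0.0)            # hydrological losses
        peaks[y] = runoff.max() * frac_contrib * area     # peak from source area
    return peaks
```

The empirical distribution of `peaks` is the continuous-simulation analogue of the theoretically derived flood distribution the abstract refers to.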
Investigations on 3-dimensional temperature distribution in a FLATCON-type CPV module
NASA Astrophysics Data System (ADS)
Wiesenfarth, Maike; Gamisch, Sebastian; Kraus, Harald; Bett, Andreas W.
2013-09-01
The thermal flow in a FLATCON®-type CPV module is investigated theoretically and experimentally. For the simulation a model in the computational fluid dynamics (CFD) software SolidWorks Flow Simulation was established. In order to verify the simulation results the calculated and measured temperatures were compared assuming the same operating conditions (wind speed and direction, direct normal irradiance (DNI) and ambient temperature). Therefore, an experimental module was manufactured and equipped with temperature sensors at defined positions. In addition, the temperature distribution on the back plate of the module was displayed by infrared images. The simulated absolute temperature and the distribution compare well with an average deviation of only 3.3 K to the sensor measurements. Finally, the validated model was used to investigate the influence of the back plate material on the temperature distribution by replacing the glass material by aluminum. The simulation showed that it is important to consider heat dissipation by radiation when designing a CPV module.
Building Better Planet Populations for EXOSIMS
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2018-01-01
The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
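Joint sampling of correlated planet parameters can be sketched by drawing radius conditionally on semi-major axis. The conditional dependence below is a hypothetical stand-in for occurrence-rate fits, not an EXOSIMS PlanetPopulation module.

```python
import numpy as np

def sample_planets(n, seed=0):
    """Sample (semi-major axis [AU], planet radius [Earth radii]) jointly:
    the radius distribution is drawn conditionally on the axis, with an
    illustrative, assumed dependence."""
    rng = np.random.default_rng(seed)
    a = np.exp(rng.uniform(np.log(0.1), np.log(30.0), n))   # log-uniform axis
    mu = 0.3 + 0.25 * np.log10(a)        # assumed: mean log10(R) shifts with a
    r = 10.0 ** rng.normal(mu, 0.3, n)   # log-normal scatter about that mean
    return a, r
```

Because the conditional mean of log-radius increases with axis, samples show the positive correlation that independent-parameter populations cannot reproduce.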
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
NASA Astrophysics Data System (ADS)
Guo, Yue; Du, Lei; Jiang, Long; Li, Qing; Zhao, Zhenning
2017-01-01
In this paper, the combustion and NOx emission characteristics of a 300 MW tangentially fired boiler are simulated. We obtain the flue gas velocity field in the furnace and the distributions of temperature and combustion product concentrations, and compare the simulated velocity, temperature, oxygen concentration and NOx emissions with test results under waist-shaped air distribution conditions; the simulated values agree well with the measured ones, verifying the rationality of the model. The in-furnace flow field, combustion and NOx emission characteristics are then simulated under different conditions, comparing waist-shaped air distribution in the primary combustion zone with uniform air distribution and pagoda-type (downward-decreasing) air distribution. The results show that waist-shaped air distribution is useful for reducing NOx emissions.
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2018-01-01
The goal of this study is to develop a generalized source model (GSM) for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology. PMID:28079526
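The Levenberg-Marquardt fits used to derive the spectrum and source distribution are not detailed in the abstract; a minimal, generic Levenberg-Marquardt loop (damped Gauss-Newton on a residual vector) can be sketched as follows, here exercised on an assumed exponential-attenuation toy fit rather than real PDD data.

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, n_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton steps on a
    residual vector, with the damping adapted by step acceptance."""
    p = np.asarray(p0, float)
    cost = (residual(p) ** 2).sum()
    for _ in range(n_iter):
        r, j = residual(p), jac(p)
        g = j.T @ r                                  # gradient of 0.5*||r||^2
        h = j.T @ j + lam * np.eye(p.size)           # damped normal equations
        step = np.linalg.solve(h, -g)
        new_cost = (residual(p + step) ** 2).sum()
        if new_cost < cost:                          # accept: move toward Gauss-Newton
            p, cost, lam = p + step, new_cost, lam * 0.5
        else:                                        # reject: behave more like gradient descent
            lam *= 2.0
    return p
```

Production code would normally use a library implementation (e.g. a least-squares solver) rather than a hand-rolled loop; the sketch only shows the mechanism named in the abstract.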
System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures
NASA Technical Reports Server (NTRS)
Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger
2007-01-01
This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost and maintenance. This approach was used to develop an International Space Station (ISS) simulator, which is called the International Space Station Integrated Simulation (ISIS). The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, test, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and service as a realistic test bed for Exploration automation technology research and development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molenkamp, C.R.; Grossman, A.
1999-12-20
A network of small balloon-borne transponders which gather very high resolution wind and temperature data for use by modern numerical weather prediction models has been proposed to improve the reliability of long-range weather forecasts. The global distribution of an array of such transponders is simulated using LLNL's atmospheric parcel transport model (GRANTOUR) with winds supplied by two different general circulation models. An initial study used winds from CCM3 with a horizontal resolution of about 3 degrees in latitude and longitude, and a second study used winds from NOGAPS with a 0.75 degree horizontal resolution. Results from both simulations show that reasonable global coverage can be attained by releasing balloons from an appropriate set of launch sites.
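Parcel transport by model winds reduces, in its simplest form, to integrating positions forward under a wind field. As a hedged sketch (forward Euler in longitude/latitude with constant winds, nothing like GRANTOUR's actual scheme):

```python
import numpy as np

def advect_parcels(lon, lat, u_wind, v_wind, dt_hours=6.0, n_steps=40):
    """Advect balloon parcels with (u, v) winds [m/s] on the sphere using
    simple forward-Euler steps in longitude/latitude degrees."""
    r_earth = 6.371e6                              # Earth radius (m)
    lon, lat = np.array(lon, float), np.array(lat, float)
    dt = dt_hours * 3600.0
    for _ in range(n_steps):
        dlat = np.degrees(v_wind * dt / r_earth)
        dlon = np.degrees(u_wind * dt / (r_earth * np.cos(np.radians(lat))))
        lat = np.clip(lat + dlat, -89.0, 89.0)     # avoid the poles
        lon = (lon + dlon + 180.0) % 360.0 - 180.0 # wrap to [-180, 180)
    return lon, lat
```

Repeating this for many launch sites and GCM wind fields is what yields the coverage statistics the study reports.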
An electromechanical based deformable model for soft tissue simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Smith, Julian; Gu, Chengfan
2009-11-01
Soft tissue deformation is of great importance to surgery simulation. Although a significant amount of research effort has been dedicated to simulating the behaviours of soft tissues, modelling of soft tissue deformation is still a challenging problem. This paper presents a new deformable model for simulation of soft tissue deformation from the electromechanical viewpoint of soft tissues. Soft tissue deformation is formulated as a reaction-diffusion process coupled with a mechanical load. The mechanical load applied to a soft tissue to cause a deformation is incorporated into the reaction-diffusion system and consequently distributed among the mass points of the soft tissue. Reaction-diffusion of the mechanical load and non-rigid mechanics of motion are combined to govern the simulation dynamics of soft tissue deformation. An improved reaction-diffusion model is developed to describe the distribution of the mechanical load in soft tissues. A three-layer artificial cellular neural network is constructed to solve the reaction-diffusion model for real-time simulation of soft tissue deformation. A gradient-based method is established to derive internal forces from the distribution of the mechanical load. Integration with a haptic device has also been achieved to simulate soft tissue deformation with haptic feedback. The proposed methodology not only predicts the typical behaviours of living tissues but also accepts both local and large-range deformations, and it accommodates isotropic, anisotropic and inhomogeneous deformations by simple modification of the diffusion coefficients.
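The paper's improved reaction-diffusion model and neural-network solver are not given in the abstract; the core idea of distributing a point mechanical load by reaction-diffusion can be sketched with an explicit 1D scheme (all coefficients are illustrative assumptions):

```python
import numpy as np

def distribute_load(load, n_points=50, d=0.2, decay=0.05, n_steps=200):
    """Distribute a point mechanical load across a line of mass points via
    an explicit reaction-diffusion update: du/dt = d*u_xx - decay*u + src."""
    u = np.zeros(n_points)
    src = np.zeros(n_points)
    src[n_points // 2] = load                     # load applied at the centre point
    for _ in range(n_steps):
        lap = np.roll(u, 1) - 2.0 * u + np.roll(u, -1)
        lap[0] = u[1] - u[0]                      # zero-flux boundaries
        lap[-1] = u[-2] - u[-1]
        u += d * lap - decay * u + src
        src[:] = 0.0                              # load injected once
    return u
```

Internal forces would then follow from the gradient of the resulting load distribution, per the abstract's gradient-based method.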
NASA Astrophysics Data System (ADS)
Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.
2018-03-01
In the present paper, the light output distribution due to a poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of the light output distribution includes modeling the particle transport, calculating the scintillation photons induced by charged particles, simulating the scintillation photon transport, and applying the light resolution obtained from experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, a 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from simulation and experiment.
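The zero crossing method exploits the fact that a bipolar-shaped pulse crosses zero later for slower scintillation decay (neutron-like) than for faster decay (gamma-like). A minimal sketch with an ideal exponential pulse and a simple bipolar kernel (pulse shapes and constants are illustrative assumptions, not the paper's detector):

```python
import numpy as np

def zero_crossing_time(decay_tau, n=400, shaping=20):
    """Shape an exponential scintillation pulse with a bipolar kernel and
    return the sample index where the shaped signal first crosses zero."""
    t = np.arange(n)
    pulse = np.exp(-t / decay_tau)                       # idealized pulse
    kernel = np.concatenate([np.ones(shaping), -np.ones(shaping)])
    shaped = np.convolve(pulse, kernel)                  # bipolar shaping
    peak = shaped.argmax()
    neg = np.nonzero(shaped[peak:] < 0.0)[0]             # first sign change
    return peak + neg[0]
```

Thresholding the crossing time then discriminates the two particle types, which is the basis of the experimental step described above.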
Wieland, Birgit; Ropte, Sven
2017-01-01
The production of rotor blades for wind turbines is still a predominantly manual process. Process simulation is an adequate way of improving blade quality without a significant increase in production costs. This paper introduces a module for tolerance simulation for rotor-blade production processes. The investigation focuses on the simulation of temperature distribution for one-sided, self-heated tooling and thick laminates. Experimental data from rotor-blade production and down-scaled laboratory tests are presented. Based on influencing factors that are identified, a physical model is created and implemented as a simulation. This provides an opportunity to simulate temperature and cure-degree distribution for two-dimensional cross sections. The aim of this simulation is to support production processes. Hence, it is modelled as an in situ simulation with direct input of temperature data and real-time capability. A monolithic part of the rotor blade, the main girder, is used as an example for presenting the results. PMID:28981458
NASA Astrophysics Data System (ADS)
Johnson, Donald R.; Lenzen, Allen J.; Zapotocny, Tom H.; Schaack, Todd K.
2000-11-01
A challenge common to weather, climate, and seasonal numerical prediction is the need to simulate accurately reversible isentropic processes in combination with appropriate determination of sources/sinks of energy and entropy. Ultimately, this task includes the distribution and transport of internal, gravitational, and kinetic energies, the energies of water substances in all forms, and the related thermodynamic processes of phase changes involved with clouds, including condensation, evaporation, and precipitation processes. All of the processes noted above involve the entropies of matter, radiation, and chemical substances, conservation during transport, and/or changes in entropies by physical processes internal to the atmosphere. With respect to the entropy of matter, a means to study a model's accuracy in simulating internal hydrologic processes is to determine its capability to simulate the appropriate conservation of potential and equivalent potential temperature as surrogates of dry and moist entropy under reversible adiabatic processes in which clouds form, evaporate, and precipitate. In this study, a statistical strategy utilizing the concept of 'pure error' is set forth to assess the numerical accuracies of models to simulate reversible processes during 10-day integrations of the global circulation corresponding to the global residence time of water vapor. During the integrations, the sums of squared differences between the equivalent potential temperature θe numerically simulated by the governing equations of mass, energy, water vapor, and cloud water and a proxy equivalent potential temperature θte numerically simulated as a conservative property are monitored.
Inspection of the differences of θe and θte in time and space and the relative frequency distribution of the differences details bias and random errors that develop from nonlinear numerical inaccuracies in the advection and transport of potential temperature and water substances within the global atmosphere. A series of nine global simulations employing various versions of Community Climate Models CCM2 and CCM3 (all-Eulerian spectral numerics, all semi-Lagrangian numerics, and mixed Eulerian spectral and semi-Lagrangian numerics) and the University of Wisconsin-Madison (UW) isentropic-sigma gridpoint model provides an interesting comparison of numerical accuracies in the simulation of reversibility. By day 10, large bias and random differences were identified in the simulation of reversible processes in all of the models except for the UW isentropic-sigma model. The CCM2 and CCM3 simulations yielded systematic differences that varied zonally, vertically, and temporally. Within the comparison, the UW isentropic-sigma model was superior in transporting water vapor and cloud water/ice and in simulating reversibility involving the conservation of dry and moist entropy. The only relative frequency distribution of differences that appeared optimal, in that the distribution remained unbiased and equilibrated with minimal variance as it remained statistically stationary, was the distribution from the UW isentropic-sigma model. All other distributions revealed nonstationary characteristics, with spreading and/or shifting of the maxima as the biases and variances of the numerical differences of θe and θte amplified.
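The 'pure error' bookkeeping amounts to splitting the differences between the two simulated equivalent potential temperatures into a systematic (bias) component and a random component. A minimal sketch with synthetic differences (the 0.3 K bias and 0.8 K spread are invented, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic theta_e - theta_te differences (K): 10 days x 5000 grid points,
# with an invented 0.3 K systematic bias and 0.8 K random spread
diff = 0.3 + 0.8 * rng.standard_normal((10, 5000))

bias = diff.mean(axis=1)                    # systematic error per day
rmse = np.sqrt((diff**2).mean(axis=1))      # total error per day
random_err = np.sqrt(rmse**2 - bias**2)     # random component per day
```

A model with good reversibility would show both components staying small and statistically stationary over the 10-day integration, which is the behaviour reported for the UW isentropic-sigma model.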
USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation
2016-09-01
Approved for public release; distribution is unlimited. USMC Inventory Control Using Optimization Modeling and Discrete Event Simulation, by Timothy A. Curling. This work combines optimization and discrete-event simulation, a construct that can potentially provide an effective means of improving order management decisions.
NASA Astrophysics Data System (ADS)
Yatheendradas, S.; Vivoni, E.
2007-12-01
A common practice in distributed hydrological modeling is to assign soil hydraulic properties based on coarse textural datasets. For semiarid regions with poor soil information, the performance of a model can be severely constrained due to the high model sensitivity to near-surface soil characteristics. Neglecting the uncertainty in soil hydraulic properties, their spatial variation and their naturally occurring horizonation can potentially affect the modeled hydrological response. In this study, we investigate such effects using the TIN-based Real-time Integrated Basin Simulator (tRIBS) applied to the mid-sized (100 km²) Sierra Los Locos watershed in northern Sonora, Mexico. The Sierra Los Locos basin is characterized by complex mountainous terrain leading to topographic organization of soil characteristics and ecosystem distributions. We focus on simulations during the 2004 North American Monsoon Experiment (NAME), when intensive soil moisture measurements and aircraft-based soil moisture retrievals are available in the basin. Our experiments focus on soil moisture comparisons at the point, topographic transect and basin scales using a range of different soil characterizations. We compare the distributed soil moisture estimates obtained using (1) a deterministic simulation based on soil texture from coarse soil maps, (2) a set of ensemble simulations that capture soil parameter uncertainty and their spatial distribution, and (3) a set of simulations that condition the ensemble on recent soil profile measurements. The uncertainties considered in near-surface soil characterization provide insights into their influence on the modeled uncertainty, into the value of soil profile observations, and into the effective use of on-going field observations for constraining the soil moisture response uncertainty.
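Sampling soil-parameter uncertainty for an ensemble can be sketched as follows. Everything here is invented for illustration (the log-normal Ks statistics, the ensemble size, and the toy infiltration response); tRIBS itself propagates such ensembles through a full distributed model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Log-normal ensemble of saturated hydraulic conductivity Ks (cm/h);
# the statistics and ensemble size are invented for illustration
ks = rng.lognormal(mean=np.log(1.0), sigma=0.6, size=500)

# Toy response: fraction of a 20 mm/h storm that infiltrates, assuming
# (hypothetically) an infiltration capacity of 10 * Ks mm/h
infiltrated_fraction = np.clip(10.0 * ks / 20.0, 0.0, 1.0)
ensemble_spread = float(infiltrated_fraction.std())
```

Conditioning on soil profile measurements, as in scheme (3), would amount to reweighting or resampling this ensemble to favour members consistent with the observations, narrowing the response spread.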
Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars
NASA Astrophysics Data System (ADS)
Frederick, Sara; Gonthier, P. L.; Harding, A. K.
2014-01-01
In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the galaxy which uses Markov Chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods. A secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.
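The Markov chain exploration described above can be illustrated with a generic random-walk Metropolis sampler on a toy one-parameter posterior; none of this is the authors' code, and the Gaussian target standing in for a luminosity-law exponent is purely illustrative.

```python
import numpy as np

def metropolis(log_post, x0, n_steps=20000, step=0.5, seed=1):
    """Generic random-walk Metropolis sampler (sketch, not the paper's code)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = log_post(x)
    chain = []
    for _ in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy target: standard normal posterior over one model parameter
chain = metropolis(lambda x: -0.5 * float(x @ x), x0=[3.0])
```

In the actual population synthesis the log-posterior would come from comparing simulated and detected pulsar distributions, and the marginalized chains would define the confidence regions described above.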
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demeure, I.M.
The research presented here is concerned with representation techniques and tools to support the design, prototyping, simulation, and evaluation of message-based parallel, distributed computations. The author describes ParaDiGM (Parallel, Distributed computation Graph Model), a visual representation technique for parallel, message-based distributed computations. ParaDiGM provides several views of a computation depending on the aspect of concern. It is made of two complementary submodels: the DCPG (Distributed Computing Precedence Graph) model and the PAM (Process Architecture Model). DCPGs are precedence graphs used to express the functionality of a computation in terms of tasks, message-passing, and data. PAM graphs are used to represent the partitioning of a computation into schedulable units or processes, and the pattern of communication among those units. There is a natural mapping between the two models. He illustrates the utility of ParaDiGM as a representation technique by applying it to various computations (e.g., an adaptive global optimization algorithm, the client-server model). ParaDiGM representations are concise. They can be used in documenting the design and implementation of parallel, distributed computations, in describing such computations to colleagues, and in comparing and contrasting various implementations of the same computation. He then describes VISA (VISual Assistant), a software tool to support the design, prototyping, and simulation of message-based parallel, distributed computations. VISA is based on the ParaDiGM model. In particular, it supports the editing of ParaDiGM graphs to describe the computations of interest, and the animation of these graphs to provide visual feedback during simulations. The graphs are supplemented with various attributes, simulation parameters, and interpretations, which are procedures that can be executed by VISA.
NASA Astrophysics Data System (ADS)
Garavaglia, Federico; Le Lay, Matthieu; Gottardi, Fréderic; Garçon, Rémy; Gailhard, Joël; Paquet, Emmanuel; Mathevet, Thibault
2017-08-01
Model intercomparison experiments are widely used to investigate and improve hydrological model performance. However, a study based only on runoff simulation is not sufficient to discriminate between different model structures. Hence, there is a need to improve hydrological models for specific streamflow signatures (e.g., low and high flow) and multi-variable predictions (e.g., soil moisture, snow and groundwater). This study assesses the impact of model structure on flow simulation and hydrological realism using three versions of a hydrological model called MORDOR: the historical lumped structure and a revisited formulation available in both lumped and semi-distributed structures. In particular, the main goal of this paper is to investigate the relative impact of model equations and spatial discretization on flow simulation, snowpack representation and evapotranspiration estimation. Comparison of the models is based on an extensive dataset composed of 50 catchments located in French mountainous regions. The evaluation framework is founded on a multi-criterion split-sample strategy. All models were calibrated using an automatic optimization method based on an efficient genetic algorithm. The evaluation framework is enriched by the assessment of snow and evapotranspiration modeling against in situ and satellite data. The results showed that the new model formulations perform significantly better than the initial one in terms of the various streamflow signatures, snow and evapotranspiration predictions. The semi-distributed approach provides better calibration-validation performance for the snow cover area, snow water equivalent and runoff simulation, especially for nival catchments.
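Multi-criterion evaluation of this kind rests on skill scores such as the Nash-Sutcliffe efficiency (NSE). A minimal sketch with made-up flows (the paper's criteria set and genetic-algorithm calibration machinery are far richer):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than
    predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

obs = np.array([1.0, 2.0, 4.0, 3.0, 5.0])   # made-up observed flows
sim = obs * 0.9 + 0.2                       # made-up simulated flows
calib_score = nse(obs, sim)
perfect = nse(obs, obs)
```

A split-sample strategy simply evaluates such scores on a calibration period and again on a held-out validation period, for each streamflow signature and each auxiliary variable (snow, evapotranspiration) of interest.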
NASA Astrophysics Data System (ADS)
Mohammed, Touseef Ahmed Faisal
Since 2000, renewable electricity installations in the United States (excluding hydropower) have more than tripled. Renewable electricity has grown at a compounded annual average of nearly 14% per year from 2000-2010. Wind, concentrated solar power (CSP) and solar photovoltaic (PV) are the fastest growing renewable energy sectors. In 2010 in the U.S., solar PV grew by over 71% and CSP grew by 18% from the previous year. Globally, renewable electricity installations more than quadrupled from 2000-2010, and solar PV generation grew by a factor of more than 28 between 2000 and 2010. The amount of CSP and solar PV installed on the distribution grid is increasing. These PV installations can push electrical current from the load centers back toward the generating stations, but the transmission and distribution grid has been designed for uni-directional flow of electrical energy from generating stations to load centers. This can cause voltage imbalances and stress on the switchgear of the electrical circuitry. With the continuous rise in PV installations, analysis of voltage profiles and penetration levels remains an active area of research. Standard distributed photovoltaic (PV) generators represented in simulation studies do not reflect the exact location and variability properties, such as the distance between interconnection points and substations, voltage regulators, solar irradiance and other environmental factors. Quasi-static simulations assist in hour- and day-ahead peak-load planning, as they provide a time-sequence analysis that supports generation allocation. Simulation models can be daily, hourly or yearly depending on the duty cycle and dynamics of the system. High penetration of PV into the power grid changes the voltage profile and power flow dynamically in the distribution circuits due to the inherent variability of PV. There are a number of modeling and simulation tools available for the study of such high-penetration PV scenarios.
This thesis will specifically utilize OpenDSS, an open-source Distribution System Simulator developed by the Electric Power Research Institute, to simulate the grid voltage profile with a large-scale PV system under quasi-static time series, considering variations of PV output in seconds and minutes and the average daily load variations. A 13-bus IEEE distribution feeder model is utilized, with distributed residential- and commercial-scale PV at different buses, for simulation studies. Time-series simulations are discussed for various modes of operation, considering dynamic PV penetration at different time periods in a day. In addition, this thesis demonstrates simulations taking into account the presence of moving clouds for solar forecasting studies.
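The quasi-static time-series idea, solving a steady-state network snapshot at each time step, can be illustrated without OpenDSS. The two-bus feeder below is deliberately crude and every number is invented; OpenDSS solves the full unbalanced network instead.

```python
import numpy as np

# Two-bus feeder sketch: voltage change at the PV bus ~ net injection * R / V0.
# The per-unit resistance, load shape and PV shape are all invented.
R, V0 = 0.05, 1.0
hours = np.arange(24)
load = 0.4 + 0.2 * np.sin((hours - 18) * np.pi / 12)       # evening-peaked load
pv = np.clip(np.sin((hours - 6) * np.pi / 12), 0.0, None)  # daytime PV output

net_injection = pv - load              # positive -> reverse power flow
v = V0 + net_injection * R / V0        # crude quasi-static voltage series
```

Even this caricature reproduces the qualitative concern: midday reverse flow raises the PV-bus voltage above its night-time value, and finer time steps would expose cloud-driven fluctuations.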
Collisionless Electrostatic Shock Modeling and Simulation
2016-10-21
Approved for public release; distribution unlimited. PA#16490. Dissipation controls the wave train: in under-damped shocks, dissipation is weak and ripples persist; in over-damped shocks, strong dissipation damps the ripples. Model verification includes comparison with a linearized solution for the evolution of the first ripple wavelength.
Modeling of mineral dust in the atmosphere: Sources, transport, and optical thickness
NASA Technical Reports Server (NTRS)
Tegen, Ina; Fung, Inez
1994-01-01
A global three-dimensional model of the atmospheric mineral dust cycle is developed for the study of its impact on the radiative balance of the atmosphere. The model includes four size classes of mineral dust, whose source distributions are based on the distributions of vegetation, soil texture and soil moisture. Uplift and deposition are parameterized using analyzed winds and rainfall statistics that resolve high-frequency events. Dust transport in the atmosphere is simulated with the tracer transport model of the Goddard Institute for Space Studies. The simulated seasonal variations of dust concentrations show generally reasonable agreement with the observed distributions, as do the size distributions at several observing sites. The discrepancies between the simulated and the observed dust concentrations point to regions of significant land surface modification. Monthly distributions of aerosol optical depth are calculated from the distribution of dust particle sizes. The maximum optical depth due to dust is 0.4-0.5 in the seasonal mean. The main uncertainties, about a factor of 3-5, in calculating optical thicknesses arise from the crude resolution of soil particle sizes, from insufficient constraint by the total dust loading in the atmosphere, and from our ignorance about adhesion, agglomeration, uplift, and size distributions of fine dust particles (less than 1 micrometer).
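At its simplest, converting size-resolved column loadings into an aerosol optical depth is a weighted sum over size classes, tau = sum_i M_i * k_i, with k_i a mass extinction efficiency per class. The four loadings and efficiencies below are invented, not the paper's values.

```python
import numpy as np

# Column mass loading per dust size class (g m^-2) and mass extinction
# efficiency (m^2 g^-1); all values are invented for illustration
mass_loading = np.array([0.05, 0.10, 0.08, 0.03])
k_ext = np.array([2.0, 1.0, 0.5, 0.25])

tau = float(np.sum(mass_loading * k_ext))   # aerosol optical depth
```

The factor 3-5 uncertainty quoted above enters mainly through the loadings and the per-class efficiencies, both of which depend on the crudely resolved particle-size distribution.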
NASA Astrophysics Data System (ADS)
Tsirkas, S. A.
2018-03-01
The present investigation is focused on the modelling of the temperature field in aluminium aircraft components welded by a CO2 laser. A three-dimensional finite element model has been developed to simulate the laser welding process and predict the temperature distribution in T-joint laser-welded plates with fillet material. The simulation of the laser beam welding process was performed using a nonlinear heat transfer analysis, based on a keyhole formation model analysis. The model employs the technique of element "birth and death" in order to simulate the weld fillet. Various phenomena associated with welding, such as temperature-dependent material properties and heat losses through convection and radiation, were accounted for in the model. The materials considered were 6056-T78 and 6013-T4 aluminium alloys, commonly used for aircraft components. The temperature distribution during the laser welding process has been calculated numerically and validated by experimental measurements at different locations of the welded structure. The numerical results are in good agreement with the experimental measurements.
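The thermal side of such a simulation can be caricatured with explicit 1-D transient conduction and a fixed heat source at the weld line. This is a toy stand-in for the 3-D nonlinear finite element model, with invented material and source values.

```python
import numpy as np

def heat_1d(n=51, alpha=1e-5, dx=1e-3, dt=0.02, steps=200, q=50.0):
    """Explicit 1-D conduction with a fixed heat source at the centre
    (toy stand-in for the 3-D FE weld model; all values invented)."""
    T = np.zeros(n)                      # temperature rise above ambient (K)
    r = alpha * dt / dx**2               # explicit scheme needs r <= 0.5
    assert r <= 0.5
    for _ in range(steps):
        T[1:-1] += r * (T[:-2] - 2.0 * T[1:-1] + T[2:])
        T[n // 2] += q * dt              # heat input at the weld line
    return T

T = heat_1d()
```

The real model adds the third dimension, temperature-dependent properties, convective and radiative losses, and element birth and death for the growing fillet, but the peaked, decaying temperature field around the weld line is the same basic picture.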
A sEMG model with experimentally based simulation parameters.
Wheeler, Katherine A; Shimada, Hiroshima; Kumar, Dinesh K; Arjunan, Sridhar P
2010-01-01
A differential, time-invariant, surface electromyogram (sEMG) model has been implemented. While it is based on existing EMG models, the novelty of this implementation is that it assigns more accurate distributions of variables to create realistic motor unit (MU) characteristics. Variables such as muscle fibre conduction velocity, jitter (the change in the interpulse interval between subsequent action potential firings) and motor unit size have been considered to follow normal distributions about an experimentally obtained mean. In addition, motor unit firing frequencies have been considered to have non-linear, type-based distributions that are in accordance with experimental results. Motor unit recruitment thresholds have been considered to be related to the MU type. The model has been used to simulate single-channel differential sEMG signals from voluntary, isometric contractions of the biceps brachii muscle. The model has been experimentally verified by conducting experiments on three subjects. Comparison between simulated signals and experimental recordings shows that the root mean square (RMS) increases linearly with force in both cases. The simulated signals also show similar values and rates of change of RMS to the experimental signals.
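The reported RMS-versus-force behaviour can be sketched with a toy signal model in which recruitment grows with force; the recruitment rule and the noise-like motor unit contributions below are invented simplifications, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(6)

def semg_rms(force, n_mu=100, n_samples=1000):
    """Toy sEMG: each recruited motor unit contributes zero-mean noise and
    recruitment grows with normalised force (a crude, invented rule)."""
    recruited = max(1, int(n_mu * force))
    signal = rng.standard_normal((recruited, n_samples)).sum(axis=0)
    return float(np.sqrt(np.mean(signal**2)))

rms_low = semg_rms(0.2)
rms_high = semg_rms(0.8)
```

In this toy, RMS grows monotonically (roughly with the square root of the recruited count) rather than linearly as reported; reproducing the linear relation requires the realistic MUAP shapes, firing-rate modulation and type-based recruitment the paper models.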
An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions
Li, Weixuan; Lin, Guang
2015-03-21
Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with a limited number of forward simulations.
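Self-normalized importance sampling with a Gaussian-mixture proposal can be sketched for a toy bimodal posterior. Here the GM proposal is fixed by hand rather than adaptively constructed, and the target is invented; the paper's algorithm additionally adapts the mixture and substitutes a polynomial chaos surrogate for the forward model.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Unnormalised bimodal posterior: equal-weight N(-3, 1) and N(3, 1)."""
    return np.logaddexp(-0.5 * (x + 3.0)**2, -0.5 * (x - 3.0)**2)

# Hand-built two-component Gaussian-mixture proposal (sigma = 1.5),
# deliberately wider than the target so the weights stay bounded
n = 200_000
comp = rng.integers(0, 2, n)
x = rng.normal(np.where(comp == 0, -3.0, 3.0), 1.5)

# Proposal log-density up to a constant (constants cancel after
# self-normalisation of the weights)
log_prop = np.logaddexp(-0.5 * ((x + 3.0) / 1.5)**2,
                        -0.5 * ((x - 3.0) / 1.5)**2)

log_w = log_target(x) - log_prop
log_w -= log_w.max()                  # numerical stability
w = np.exp(log_w)
w /= w.sum()

post_mean = float(np.sum(w * x))      # ~0 for this symmetric target
post_e_x2 = float(np.sum(w * x * x))  # ~10 = 9 (mode offset^2) + 1 (variance)
```

A single-Gaussian proposal centred on one mode would miss the other entirely, which is precisely the multimodality problem the adaptive GM construction is designed to avoid.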
The role of root distribution in eco-hydrological modeling in semi-arid regions
NASA Astrophysics Data System (ADS)
Sivandran, G.; Bras, R. L.
2010-12-01
In semi-arid regions, the rooting strategies employed by vegetation can be critical to its survival. Arid regions are characterized by high variability in the arrival of rainfall, and species found in these areas have adapted mechanisms to ensure the capture of this scarce resource. Niche separation, through rooting strategies, is one manner in which different species coexist. At present, land surface models prescribe rooting profiles as a function of only the plant functional type of interest, with no consideration of the soil texture or rainfall regime of the region being modeled. These models do not incorporate the ability of vegetation to dynamically alter its rooting strategies in response to transient changes in environmental forcings, and therefore tend to underestimate the resilience of many of these ecosystems. A coupled, dynamic vegetation and hydrologic model, tRIBS+VEGGIE, was used to explore the role of vertical root distribution on hydrologic fluxes. Point-scale simulations were carried out using two vertical root distribution schemes: (i) static, a temporally invariant root distribution; and (ii) dynamic, a temporally variable allocation of assimilated carbon at any depth within the root zone in order to minimize the soil moisture-induced stress on the vegetation. The simulations were forced with a stochastic climate generator calibrated to weather stations and rain gauges in the semi-arid Walnut Gulch Experimental Watershed in Arizona. For the static root distribution scheme, a series of simulations was carried out varying the shape of the rooting profile. The optimal distribution for the simulation was defined as the root distribution with the maximum mean transpiration over a 200-year period. This optimal distribution was determined for 5 soil textures and 2 plant functional types, and the results varied from case to case.
The dynamic rooting simulations allow vegetation the freedom to adjust the allocation of assimilated carbon to different rooting depths in response to changes in stress caused by the redistribution and uptake of soil moisture. The results obtained from these experiments elucidate the strong link between plant functional type, soil texture and climate and highlight the potential errors in the modeling of hydrologic fluxes from imposing a static root profile.
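Static vertical root profiles are commonly parameterised with the cumulative form Y(d) = 1 - beta^d (the Gale-Grigal profile), where smaller beta concentrates roots near the surface. A small sketch follows; the beta values and depths are illustrative, and this is not necessarily the exact scheme used in tRIBS+VEGGIE.

```python
import numpy as np

def root_fraction(depth_cm, beta):
    """Cumulative root fraction Y(d) = 1 - beta**d (Gale-Grigal form);
    beta closer to 1 places more roots at depth."""
    return 1.0 - beta**depth_cm

depths = np.array([10, 30, 50, 100])
shallow = root_fraction(depths, beta=0.92)   # roots concentrated near surface
deep = root_fraction(depths, beta=0.98)      # deeper profile
```

Varying a single shape parameter like beta is one way to run the "series of simulations varying the shape of the rooting profile" described above, with the optimum defined by maximum mean transpiration.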
NASA Astrophysics Data System (ADS)
Bonomi, Tullia; Cavallin, Angelo
1999-10-01
Within the framework of a Geographic Information System (GIS), the distributed three-dimensional groundwater model MODFLOW has been applied to evaluate the groundwater processes of the hydrogeological system in the Alverà mudslide (Cortina d'Ampezzo, Italy; a test site in the TESLEC Project of the European Union). The application of this model has permitted an analysis of the spatial distribution of the structure (DTM and landslide bottom) and of the mass transfer elements of the hydrogeological system. The field survey suggested zoning the area on the basis of recharge, groundwater fluctuation and the drainage system. For each zone, a hydraulic conductivity value has been assigned to simulate the different recharge and drainage responses. The effect of rainfall infiltration on the groundwater table, with different intensities for different time periods, has been simulated to reproduce the real conditions of the area. The applied model can simulate the positive fluctuations of the water table over the whole landslide, with a different response of the hydrogeological system in each zone. The simulated spatial water level distribution is in accordance with the real one, with very small differences between them. The application of distributed three-dimensional models within the framework of GIS is an approach which permits data to be continually updated, standardised and integrated.
A Petri Net model for distributed energy system
NASA Astrophysics Data System (ADS)
Konopko, Joanna
2015-12-01
Electrical networks need to evolve to become more intelligent, more flexible and less costly. The smart grid, the next generation of the power system, uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and an intelligent electricity infrastructure. In this paper, a complete model of a smart grid is proposed using Generalized Stochastic Petri Nets (GSPN). Simulation of the created model is also explored; it allows analysis of how closely the behavior of the model matches the usage of a real smart grid.
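A GSPN-style model can be caricatured with a two-place stochastic Petri net in which one token alternates between 'grid up' and 'grid down' via exponentially timed transitions. The failure and repair rates are invented, and a real smart-grid GSPN has many more places and transitions (plus immediate transitions), but the simulation mechanics are the same.

```python
import random

def gspn_run(t_end=100_000.0, rate_fail=0.01, rate_repair=0.1, seed=3):
    """Minimal two-place stochastic Petri net: a single token alternates
    between 'grid up' and 'grid down'; each timed transition fires after
    an exponentially distributed delay (rates invented)."""
    rng = random.Random(seed)
    t, up, time_up = 0.0, True, 0.0
    while t < t_end:
        delay = rng.expovariate(rate_fail if up else rate_repair)
        delay = min(delay, t_end - t)    # truncate the final interval
        if up:
            time_up += delay
        t += delay
        up = not up                      # the enabled transition fires
    return time_up / t_end

availability = gspn_run()   # long-run fraction of time in 'grid up'
```

With these rates the long-run availability should approach rate_repair / (rate_fail + rate_repair) ≈ 0.91, and comparing such simulated behaviour against observed grid usage is exactly the kind of closeness analysis the paper describes.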
Clark, Brian R.; Landon, Matthew K.; Kauffman, Leon J.; Hornberger, George Z.
2008-01-01
Contamination of public-supply wells has resulted in public-health threats and negative economic effects for communities that must treat contaminated water or find alternative water supplies. To investigate factors controlling vulnerability of public-supply wells to anthropogenic and natural contaminants using consistent and systematic data collected in a variety of principal aquifer settings in the United States, a study of Transport of Anthropogenic and Natural Contaminants to public-supply wells was begun in 2001 as part of the U.S. Geological Survey National Water-Quality Assessment Program. The area simulated by the ground-water flow model described in this report was selected for a study of processes influencing contaminant distribution and transport along the direction of ground-water flow towards a public-supply well in southeastern York, Nebraska. Ground-water flow is simulated for a 60-year period from September 1, 1944, to August 31, 2004. Steady-state conditions are simulated prior to September 1, 1944, and represent conditions prior to use of ground water for irrigation. Irrigation, municipal, and industrial wells were simulated using the Multi-Node Well package of the modular three-dimensional ground-water flow model code, MODFLOW-2000, which allows simulation of flow and solutes through wells that are simulated in multiple nodes or layers. Ground-water flow, age, and transport of selected tracers were simulated using the Ground-Water Transport process of MODFLOW-2000. Simulated ground-water age was compared to interpreted ground-water age in six monitoring wells in the unconfined aquifer. The tracer chlorofluorocarbon-11 was simulated directly using Ground-Water Transport for comparison with concentrations measured in six monitoring wells and one public supply well screened in the upper confined aquifer. 
Three alternative model simulations indicate that simulation results are highly sensitive to the distribution of multilayer well bores where leakage can occur, and that the calibrated model resulted in smaller differences than the alternative models between simulated and interpreted ages and measured tracer concentrations in most, but not all, wells. Results of the first alternative model indicate that the distribution of young water in the upper confined aquifer is substantially different when well-bore leakage at known abandoned wells and test holes is removed from the model. In the second alternative model, simulated age near the bottom of the unconfined aquifer was younger than interpreted ages, and simulated chlorofluorocarbon-11 concentrations in the upper confined aquifer were zero in five out of six wells, because the conventional Well Package fails to account for flow between model layers through well bores. The third alternative model produced differences between simulated and interpreted ground-water ages and measured chlorofluorocarbon-11 concentrations that were comparable to the calibrated model. However, simulated hydraulic heads deviated from measured hydraulic heads by a greater amount than for the calibrated model. Even so, because the third alternative model simulates steady-state flow, additional analysis was possible using steady-state particle tracking to assess the contributing recharge area of a public-supply well selected for analysis of factors contributing to well vulnerability. Results from particle-tracking software (MODPATH) using the third alternative model indicate that the contributing recharge area of the study public-supply well is a composite of elongated, seemingly isolated areas associated with wells that are screened in multiple aquifers. The simulated age distribution of particles at the study public-supply well indicates that all water younger than 58 years travels through well bores of wells screened in multiple aquifers.
The age distribution from the steady-state model using MODPATH estimates the youngest 7 percent of the water to have a flow-weighted mean age
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Todd M.
Monte Carlo simulations of phosphate tetrahedron connectivity distributions in alkali and alkaline earth phosphate glasses are reported. Using a discrete bond model, the distribution of next-nearest-neighbor connectivities between phosphate polyhedra for random, alternating, and clustering bonding scenarios was evaluated as a function of the relative bond energy difference. The simulated distributions are compared to experimentally observed connectivities reported from solid-state two-dimensional exchange and double-quantum NMR experiments on phosphate glasses. These Monte Carlo simulations demonstrate that the polyhedral connectivity is best described by a random distribution in lithium phosphate and calcium phosphate glasses.
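The discrete bond model described above can be caricatured in a few lines: treat each next-nearest-neighbour bond as an independent two-state draw whose like/unlike probability follows a Boltzmann weighting of the relative bond energy difference. This is a minimal illustrative sketch, not the authors' code; `delta_e`, `kT`, and the independent-bond assumption are simplifications introduced here.

```python
import math
import random

def simulate_connectivity(n_bonds, delta_e, kT=1.0, seed=0):
    # Fraction of like-like (clustering) next-nearest-neighbour bonds in a
    # binary mixture of phosphate tetrahedra.  delta_e is the energy of a
    # like-like bond minus that of an unlike (alternating) bond; delta_e = 0
    # recovers the purely random bonding scenario.
    rng = random.Random(seed)
    p_like = 1.0 / (1.0 + math.exp(delta_e / kT))  # Boltzmann weighting
    like = sum(rng.random() < p_like for _ in range(n_bonds))
    return like / n_bonds

frac_random = simulate_connectivity(100_000, 0.0)     # ~0.5: random bonding
frac_cluster = simulate_connectivity(100_000, -2.0)   # like-like favoured
frac_alternate = simulate_connectivity(100_000, 2.0)  # unlike favoured
```

Comparing such simulated fractions against NMR-derived connectivities is what lets the random scenario be distinguished from the alternating and clustering ones.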
NASA Technical Reports Server (NTRS)
Cen, Renyue
1994-01-01
The mass and velocity distributions in the outskirts (0.5-3.0/h Mpc) of simulated clusters of galaxies are examined for a suite of cosmogonic models (two Omega(sub 0) = 1 and two Omega(sub 0) = 0.2 models) utilizing large-scale particle-mesh (PM) simulations. Through a series of model computations, designed to isolate the different effects, we find that both Omega(sub 0) and P(sub k) (lambda less than or = 16/h Mpc) are important to the mass distributions in clusters of galaxies. There is a correlation between power, P(sub k), and the density profiles of massive clusters; more power leads to a stronger correlation between alpha and M(r less than 1.5/h Mpc), i.e., massive clusters are relatively extended and small-mass clusters relatively concentrated. A lower Omega(sub 0) universe tends to produce relatively concentrated massive clusters and relatively extended small-mass clusters compared to their counterparts in a higher Omega(sub 0) model with the same power. Models with little (initial) small-scale power, such as the hot dark matter (HDM) model, produce more extended mass distributions than the isothermal distribution for most of the mass clusters. But the cold dark matter (CDM) models show mass distributions of most of the clusters more concentrated than the isothermal distribution. X-ray and gravitational lensing observations are beginning to provide useful information on the mass distribution in and around clusters; some interesting constraints on Omega(sub 0) and/or the (initial) power of the density fluctuations on scales lambda less than or = 16/h Mpc (where linear extrapolation is invalid) can be obtained when larger observational data sets, such as the Sloan Digital Sky Survey, become available.
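The comparison against the isothermal distribution above amounts to measuring the log-log slope of the density profile in the outskirts. A sketch of such a check (illustrative only, not the paper's analysis pipeline; the radial grid and test profile are made up):

```python
import numpy as np

def profile_slope(r, m_enclosed):
    # Infer rho(r) from the enclosed mass M(<r) and fit its log-log slope;
    # a singular isothermal sphere (rho ~ r^-2) gives a slope of -2, so
    # slopes shallower than -2 indicate a more extended distribution and
    # steeper slopes a more concentrated one.
    rho = np.gradient(m_enclosed, r) / (4.0 * np.pi * r**2)
    slope, _ = np.polyfit(np.log(r), np.log(rho), 1)
    return slope

r = np.linspace(0.5, 3.0, 50)   # cluster outskirts, h^-1 Mpc
m_iso = 4.0 * np.pi * r         # M(<r) proportional to r for isothermal
alpha = profile_slope(r, m_iso)
```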
NASA Astrophysics Data System (ADS)
Satrio, Reza Indra; Subiyanto
2018-03-01
The growth of electric loads has a direct impact on power distribution systems, and voltage drop and power losses are among the most important concerns in distribution networks. This paper presents a modelling approach used to restructure the electrical network configuration, reduce voltage drop, reduce power losses, and add a new distribution transformer to enhance the reliability of the distribution system. The restructuring was aimed at analysing and investigating the electric loads of a distribution transformer. Measurements of actual voltage and current were performed twice for each consumer, once in the morning period and once in the night (peak-load) period. Design and simulation were conducted using the ETAP Power Station software. The simulation results and real measurements showed that the percentage of voltage drop and the total power losses did not comply with SPLN (Standard PLN) 72:1987. After a new distribution transformer was added and the electricity network configuration restructured, the simulation showed that the voltage drop could be reduced from 1.3 % - 31.3 % to 8.1 % - 9.6 % and the power losses from 646.7 watts to 233.29 watts. The results show that restructuring the electricity network configuration and adding a new distribution transformer can be applied as an effective method to reduce voltage drop and power losses.
A Global System for Transportation Simulation and Visualization in Emergency Evacuation Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Wei; Liu, Cheng; Thomas, Neil
2015-01-01
Simulation-based studies are frequently used in evacuation planning and decision-making processes. Given the complexity of transportation systems and limits on data availability, most evacuation simulation models focus on particular geographic areas. With the routine improvement of OpenStreetMap road networks and LandScan(TM) global population distribution data, we present WWEE, a uniform system for world-wide emergency evacuation simulations. WWEE uses a unified data structure for simulation inputs. It also integrates a super-node trip distribution model as the default simulation parameter to improve the system's computational performance. Two levels of visualization tools are implemented for evacuation performance analysis: link-based macroscopic visualization and vehicle-based microscopic visualization. For left-hand and right-hand traffic patterns in different countries, the authors propose a mirror technique to experiment with both scenarios without significantly changing the traffic simulation models. Ten cities in the US, Europe, the Middle East, and Asia are modeled for demonstration. Alongside default traffic simulation models for fast, easy-to-use evacuation estimation and visualization, WWEE retains the capability of interactive operation so that users can adopt customized traffic simulation models. For the first time, WWEE provides a unified platform for evacuation researchers worldwide to estimate and visualize the performance of their strategies for transportation systems under evacuation scenarios.
NASA Astrophysics Data System (ADS)
Ševeček, P.; Brož, M.; Nesvorný, D.; Enke, B.; Durda, D.; Walsh, K.; Richardson, D. C.
2017-11-01
We report on our study of asteroidal breakups, i.e. fragmentations of targets, subsequent gravitational reaccumulation and formation of small asteroid families. We focused on parent bodies with diameters Dpb = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code combined with an efficient N-body integrator. We assumed various projectile sizes, impact velocities and impact angles (125 runs in total). Resulting size-frequency distributions are significantly different from scaled-down simulations with Dpb = 100 km targets (Durda et al., 2007). We derive new parametric relations describing fragment distributions, suitable for Monte-Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions for N-body simulations of small asteroid families. Finally, we discuss a number of uncertainties related to SPH simulations.
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately reproduce the multi-year variability. However, underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in the traditional two-part MC-Gamma Distribution modelling structure, but with a new parameterization technique. We used the two parameters of a first-order MC process (the transition probabilities of wet-to-wet and dry-to-dry days) to simulate wet and dry days, and the two parameters of a Gamma distribution (the mean and standard deviation of wet-day rainfall) to simulate wet-day rainfall depths. We found that using deterministic Gamma parameter values results in underestimation of the multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multivariate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sample these two Gamma parameters from the multivariate normal distribution for each month of each year and use them to generate rainfall depths on wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation, but the MC parameters were modified using an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over a pre-specified past period (e.g. the last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet-day rainfall depths. 
In this study, we have compared the performance of our hierarchical MC model with that of the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we used ground-based data from 15 rain gauge stations around Australia, which cover a wide range of climate zones, including coastal, monsoonal, and arid climate characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will discuss further the relative merits of both models for hydrologic simulation in the presentation.
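The hierarchical two-part structure described above can be sketched in a few lines: a first-order Markov chain for occurrence, and Gamma-distributed wet-day depths whose (mean, standard deviation) pair is itself redrawn each month from a fitted multivariate normal. This is an illustrative toy, not the authors' implementation; all parameter values (`p_ww`, `p_dd`, the Gamma-parameter mean vector and covariance) are made up for demonstration.

```python
import numpy as np

def simulate_month(n_days, p_ww, p_dd, gamma_mean_sd, gamma_cov, rng):
    # Hierarchical step: draw this month's Gamma parameters (mean and
    # standard deviation of wet-day depth) from a fitted multivariate
    # normal, restoring multi-year variability in the depths.
    mu, sd = rng.multivariate_normal(gamma_mean_sd, gamma_cov)
    mu, sd = max(mu, 0.1), max(sd, 0.1)           # keep parameters physical
    shape, scale = (mu / sd) ** 2, sd ** 2 / mu   # Gamma via mean and std
    wet = rng.random() < (1.0 - p_dd)             # first day of the month
    depths = np.zeros(n_days)
    for day in range(n_days):
        if wet:
            depths[day] = rng.gamma(shape, scale)
        # first-order Markov chain for tomorrow's occurrence
        wet = rng.random() < (p_ww if wet else 1.0 - p_dd)
    return depths

rng = np.random.default_rng(42)
january = simulate_month(31, p_ww=0.6, p_dd=0.8,
                         gamma_mean_sd=[8.0, 10.0],
                         gamma_cov=[[4.0, 2.0], [2.0, 9.0]], rng=rng)
```

Holding `mu` and `sd` fixed instead of redrawing them is exactly the deterministic-parameter case the abstract reports as underestimating multi-year variability.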
NASA Astrophysics Data System (ADS)
Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan
2017-03-01
This paper presents a method for modeling and simulation of shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI), which is treated as a distributed force applied at the focal region of a HIFU transducer radiating in the nonlinear regime. The shear wave propagation is simulated by solving Navier's equation with the distributed nonlinear ARFI as the source of the shear wave. The Wigner-Ville Distribution (WVD), a time-frequency analysis method, is then used to detect the shear wave at different local points in the region of interest. The WVD yields an estimate of the shear wave's time of arrival, mean frequency, and local attenuation, which can be used to estimate the medium's shear modulus and shear viscosity using the Voigt model.
Skill of Ensemble Seasonal Probability Forecasts
NASA Astrophysics Data System (ADS)
Smith, Leonard A.; Binter, Roman; Du, Hailiang; Niehoerster, Falk
2010-05-01
In operational forecasting, the computational complexity of large simulation models is, ideally, justified by enhanced performance over simpler models. We will consider probability forecasts and contrast the skill of ENSEMBLES-based seasonal probability forecasts of interest to the finance sector (specifically, temperature forecasts for Nino 3.4 and the Atlantic Main Development Region (MDR)). The ENSEMBLES model simulations will be contrasted against forecasts from statistical models based on the observations (climatological distributions) and empirical dynamics based on the observations but conditioned on the current state (dynamical climatology). For some start dates, individual ENSEMBLES models yield significant skill even at a lead time of 14 months. The nature of this skill is discussed, and opportunities for application are noted. Questions surrounding the interpretation of probability forecasts based on these multi-model ensemble simulations are then considered; the distributions considered are formed by kernel dressing the ensemble and blending with the climatology. The sources of apparent (RMS) skill in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve Root-Mean-Square-Error scores, casting some doubt on the common justification for the claim that all models should be included in forming an operational probability forecast. It is argued that the rational response varies with lead time.
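Kernel dressing and climatology blending, as used above to form forecast distributions, can be sketched as follows. This is a generic illustration under assumed inputs (Gaussian kernels, a synthetic climatology, and arbitrary `sigma` and `alpha`), not the ENSEMBLES processing chain.

```python
import numpy as np

def kernel_pdf(samples, x, sigma):
    # Gaussian-kernel density of `samples` evaluated at the points x.
    s = np.asarray(samples, float)[:, None]
    k = np.exp(-0.5 * ((x - s) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return k.mean(axis=0)

def blended_forecast(ensemble, climatology, x, sigma, alpha):
    # Kernel-dress the ensemble, then blend with the climatological density:
    # p(x) = alpha * p_ensemble(x) + (1 - alpha) * p_climatology(x)
    return (alpha * kernel_pdf(ensemble, x, sigma)
            + (1.0 - alpha) * kernel_pdf(climatology, x, sigma))

x = np.linspace(-6.0, 6.0, 601)
clim = np.random.default_rng(1).normal(0.0, 1.0, 500)   # stand-in climatology
p = blended_forecast([0.5, 0.8, 1.1], clim, x, sigma=0.4, alpha=0.7)
```

As the lead time grows and ensemble skill fades, shrinking `alpha` toward zero recovers the climatological forecast, which is one way to frame the lead-time-dependent "rational response" noted above.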
Milliren, Carly E; Evans, Clare R; Richmond, Tracy K; Dunn, Erin C
2018-06-06
Recent advances in multilevel modeling allow for modeling non-hierarchical levels (e.g., youth in non-nested schools and neighborhoods) using cross-classified multilevel models (CCMM). Current practice is to cluster samples from one context (e.g., schools) and utilize the observations however they are distributed from the second context (e.g., neighborhoods). However, it is unknown whether an uneven distribution of sample size across these contexts leads to incorrect estimates of random effects in CCMMs. Using the school and neighborhood data structure in Add Health, we examined the effect of neighborhood sample size imbalance on the estimation of variance parameters in models predicting BMI. We differentially assigned students from a given school to neighborhoods within that school's catchment area using three scenarios of (im)balance. 1000 random datasets were simulated for each of five combinations of school- and neighborhood-level variance and imbalance scenarios, for a total of 15,000 simulated data sets. For each simulation, we calculated 95% CIs for the variance parameters to determine whether the true simulated variance fell within the interval. Across all simulations, the "true" school and neighborhood variance parameters were estimated 93-96% of the time. Only 5% of models failed to capture neighborhood variance; 6% failed to capture school variance. These results suggest that there is no systematic bias in the ability of CCMM to capture the true variance parameters regardless of the distribution of students across neighborhoods. Ongoing efforts to use CCMM are warranted and can proceed without concern for the sample imbalance across contexts. Copyright © 2018 Elsevier Ltd. All rights reserved.
Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...
2017-02-28
In this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors, with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations was conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and in the RTD mean and spread. It was found that for accurate RTD prediction, the Johnson and Jackson partial-slip solids boundary condition was required for all models, and that a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
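Once a simulation (or tracer experiment) yields an RTD curve E(t), the mean and spread compared above follow from its first two moments. A minimal sketch, with an ideal-CSTR exponential curve standing in for riser output (illustrative values, not the paper's data):

```python
import numpy as np

def trapezoid(y, x):
    # Simple trapezoidal integration over a 1D grid.
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def rtd_moments(t, e):
    # Mean residence time and spread (variance) of an RTD curve E(t),
    # after normalising so that the curve integrates to one.
    e = np.asarray(e, float)
    e = e / trapezoid(e, t)
    mean = trapezoid(t * e, t)            # first moment
    var = trapezoid((t - mean) ** 2 * e, t)  # second central moment
    return mean, var

t = np.linspace(0.0, 40.0, 4001)
tau = 4.0
e = np.exp(-t / tau) / tau    # ideal CSTR: mean tau, variance tau^2
mean, spread = rtd_moments(t, e)
```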
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct and when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and of the proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal, and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of the two discrete models. The new methods are implemented using R and OpenMx and are freely available.
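The core point, that a discrete likelihood fits L-shaped counts better than a Normal one, can be illustrated in a few lines. This is a hypothetical stand-in: a discretised exponential replaces simulated twin data, and a plain Poisson replaces the paper's multivariate discrete model.

```python
import math
import random

def ll_poisson(xs, lam):
    # Discrete log-likelihood: sum of log Poisson pmf terms.
    return sum(x * math.log(lam) - lam - math.lgamma(x + 1) for x in xs)

def ll_normal(xs, mu, sd):
    # Continuous log-likelihood of a Normal density evaluated at the counts.
    return sum(-0.5 * math.log(2.0 * math.pi * sd * sd)
               - (x - mu) ** 2 / (2.0 * sd * sd) for x in xs)

rng = random.Random(7)
# L-shaped counts: discretised exponential draws (mostly small values)
xs = [int(rng.expovariate(0.7)) for _ in range(500)]

mu = sum(xs) / len(xs)
sd = (sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
lp = ll_poisson(xs, mu)   # Poisson fitted by its mean
ln = ll_normal(xs, mu, sd)
```

On such data the discrete likelihood scores markedly higher, mirroring the poor recovery reported for the Normal model.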
Computer-generated forces in distributed interactive simulation
NASA Astrophysics Data System (ADS)
Petty, Mikel D.
1995-04-01
Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.
MAGIC: Model and Graphic Information Converter
NASA Technical Reports Server (NTRS)
Herbert, W. C.
2009-01-01
MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.
Risk Assessment in Relation to the Effect of Climate Change on Water Shortage in the Taichung Area
NASA Astrophysics Data System (ADS)
Hsiao, J.; Chang, L.; Ho, C.; Niu, M.
2010-12-01
Rapid economic development has intensified the worldwide greenhouse effect and induced global climate change. Global climate change has increased the variation in regional river flows between wet and dry seasons, which affects the management of regional water resources. Consequently, the influence of climate change has become an important issue in the management of regional water resources. In this study, the Monte Carlo simulation method was applied to risk analysis of water-supply shortage in the Taichung area. This study proposed a simulation model that integrated three models: a weather generator model, a surface runoff model, and a water distribution model. The proposed model was used to evaluate the efficiency of the current water supply system and the potential effectiveness of two additional plans for water supply: the “artificial lakes” plan and the “cross-basin water transport” plan. A first-order Markov Chain method and two probability distribution models, the exponential distribution and the normal distribution, were used in the weather generator model. In the surface runoff model, the Generalized Watershed Loading Function model (GWLF) was selected to simulate the relationship between rainfall and basin outflow. A system dynamics (SD) model was applied to the water distribution model. Results of the simulation indicated that climate change could increase the annual river flow in the Dachia River and Daan River basins. However, climate change could also increase the difference in river flow between wet and dry seasons. Simulation results showed that, in both the current system case and the additional-plan cases, water shortages for public and agricultural uses will mostly be worse under climate change than without it, except for public-use shortages in the current system case. 
With or without considering the effect of climate change, the additional plans for water supply, especially the “cross-basin water transport” plan, could significantly increase the supply of water for public use. The proposed simulation model and the results of this analysis could provide a valuable reference for decision-makers regarding risk analysis of regional water supply.
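The Monte Carlo chain described above, weather generator feeding runoff feeding a distribution model, can be caricatured with a single-reservoir balance. Everything here is a toy: the Markov occurrence probabilities, exponential rainfall depths, runoff fraction, demand, and capacity are invented for illustration, not Taichung figures.

```python
import numpy as np

def shortage_risk(n_years=200, demand=0.8, capacity=50.0, seed=0):
    # Monte Carlo shortage risk: a first-order Markov chain generates
    # wet/dry days, wet-day rainfall is exponential, a fixed fraction
    # becomes reservoir inflow, and risk is the fraction of days on
    # which the full demand cannot be supplied.
    rng = np.random.default_rng(seed)
    storage, wet = capacity / 2.0, False
    shortages, days = 0, n_years * 365
    for _ in range(days):
        wet = rng.random() < (0.7 if wet else 0.3)        # occurrence chain
        inflow = 0.5 * rng.exponential(2.0) if wet else 0.0  # runoff model
        storage = min(storage + inflow, capacity)         # spill at capacity
        supplied = min(storage, demand)                   # distribution step
        if supplied < demand:
            shortages += 1
        storage -= supplied
    return shortages / days

risk = shortage_risk()
```

Rerunning such a loop with climate-shifted rainfall parameters, or with added supply capacity standing in for an artificial lake, is the shape of the scenario comparison the study performs with its far more detailed GWLF and SD components.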
Modeling Best Management Practices (BMPs) with HSPF
The Hydrological Simulation Program-Fortran (HSPF) is a semi-distributed watershed model, which simulates hydrology and water quality processes at user-specified spatial and temporal scales. Although HSPF is a comprehensive and highly flexible model, a number of investigators not...
NASA Astrophysics Data System (ADS)
Shrestha, Rudra K.; Arora, Vivek K.; Melton, Joe R.; Sushama, Laxmi
2017-10-01
The performance of the competition module of the CLASS-CTEM (Canadian Land Surface Scheme and Canadian Terrestrial Ecosystem Model) modelling framework is assessed at 1° spatial resolution over North America by comparing the simulated geographical distribution of its plant functional types (PFTs) with two observation-based estimates. The model successfully reproduces the broad geographical distribution of trees, grasses and bare ground although limitations remain. In particular, compared to the two observation-based estimates, the simulated fractional vegetation coverage is lower in the arid southwest North American region and higher in the Arctic region. The lower-than-observed simulated vegetation coverage in the southwest region is attributed to lack of representation of shrubs in the model and plausible errors in the observation-based data sets. The observation-based data indicate vegetation fractional coverage of more than 60 % in this arid region, despite only 200-300 mm of precipitation that the region receives annually, and observation-based leaf area index (LAI) values in the region are lower than one. The higher-than-observed vegetation fractional coverage in the Arctic is likely due to the lack of representation of moss and lichen PFTs and also likely because of inadequate representation of permafrost in the model as a result of which the C3 grass PFT performs overly well in the region. The model generally reproduces the broad spatial distribution and the total area covered by the two primary tree PFTs (needleleaf evergreen trees, NDL-EVG; and broadleaf cold deciduous trees, BDL-DCD-CLD) reasonably well. The simulated fractional coverage of tree PFTs increases after the 1960s in response to the CO2 fertilization effect and climate warming. 
Differences between observed and simulated PFT coverages highlight model limitations and suggest that the inclusion of shrubs, and moss and lichen PFTs, and an adequate representation of permafrost will help improve model performance.
System analysis for the Huntsville Operation Support Center distributed computer system
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1986-01-01
A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers, such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model is described, using data sets provided by NASA. An analytical analysis of the ETHERNET LAN and of the video terminal (VT) distribution system is presented. An interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the ETHERNET LAN to be estimated, is also presented.
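The kind of data-flow estimate described for the VT network reduces to a back-of-the-envelope offered-load calculation. The sketch below is hypothetical: the terminal count, character rate, and overhead factor are invented for illustration, not HOSC figures.

```python
def lan_utilization(n_terminals, chars_per_sec, bits_per_char=10,
                    protocol_overhead=1.3, capacity_bps=10_000_000):
    # Aggregate offered load of the video terminals, inflated by a
    # protocol-overhead factor, as a fraction of Ethernet capacity.
    offered_bps = (n_terminals * chars_per_sec
                   * bits_per_char * protocol_overhead)
    return offered_bps / capacity_bps

u = lan_utilization(200, 960)   # 200 VTs at full 9600-baud output
```

Estimates like this bound the share of LAN capacity the terminal traffic can claim before queueing effects dominate, which is where the analytical LAN analysis takes over.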
NASA Astrophysics Data System (ADS)
Yamana, Teresa K.; Eltahir, Elfatih A. B.
2011-02-01
This paper describes the use of satellite-based estimates of rainfall to force the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS), a hydrology-based mechanistic model of malaria transmission. We first examined the temporal resolution of rainfall input required by HYDREMATS. Simulations conducted over Banizoumbou village in Niger showed that for reasonably accurate simulation of mosquito populations, the model requires rainfall data with at least 1 h resolution. We then investigated whether HYDREMATS could be effectively forced by satellite-based estimates of rainfall instead of ground-based observations. The Climate Prediction Center morphing technique (CMORPH) precipitation estimates distributed by the National Oceanic and Atmospheric Administration are available at a 30 min temporal resolution and 8 km spatial resolution. We compared mosquito populations simulated by HYDREMATS when the model is forced by adjusted CMORPH estimates and by ground observations. The results demonstrate that adjusted rainfall estimates from satellites can be used with a mechanistic model to accurately simulate the dynamics of mosquito populations.
Modeling of Antarctic Sea Ice in a General Circulation Model.
NASA Astrophysics Data System (ADS)
Wu, Xingren; Simmonds, Ian; Budd, W. F.
1997-04-01
A dynamic-thermodynamic sea ice model is developed and coupled with the Melbourne University general circulation model to simulate the seasonal cycle of the Antarctic sea ice distribution. The model is efficient, rapid to compute, and useful for a range of climate studies. The thermodynamic part of the sea ice model is similar to that developed by Parkinson and Washington; the dynamics contain a simplified ice rheology that resists compression. The thermodynamics is based on energy conservation at the top surface of the ice/snow, at the ice/water interface, and in the open water area to determine ice formation, accretion, and ablation. A lead parameterization is introduced with an effective partitioning scheme for freezing between and under the ice floes. The dynamic calculation determines the motion of the ice, which is forced with the atmospheric wind, taking account of ice resistance and rafting. The simulated sea ice distribution compares reasonably well with observations. The seasonal cycle of ice extent is well simulated in phase as well as in magnitude. Simulated sea ice thickness and concentration are also in good agreement with observations over most regions and serve to indicate the importance of advection and ocean drift in the determination of the sea ice distribution.
NASA Astrophysics Data System (ADS)
Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf
2014-05-01
When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties of the parameters that describe the processes of the model, from model structure inadequacy, as well as from uncertainties in the observations. Data for development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC for simulating crop yields and N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman-Rubin convergence criterion and parallel computing techniques, we let multiple Markov chains run independently in parallel, each performing a random walk to estimate the joint model parameter distribution. From this distribution we limit the parameter space, obtain probabilities of parameter values, and find the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of the simulation results and compare them with the measurement data.
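The machinery above, random-walk Metropolis-Hastings chains run in parallel and checked with the Gelman-Rubin statistic, can be sketched generically. The log-posterior below is a stand-in (a simple Gaussian), since the real one scores LandscapeDNDC output against observed yields and fluxes; all tuning values are illustrative.

```python
import numpy as np

def log_posterior(theta):
    # Stand-in log-posterior over two parameters; the real one would
    # run the model and score it against yield and N2O/CO2 observations.
    return -0.5 * np.sum((theta - 1.0) ** 2 / 0.25)

def mh_chain(n_steps, step_size, seed):
    # One random-walk Metropolis-Hastings chain.
    rng = np.random.default_rng(seed)
    theta = rng.normal(0.0, 1.0, 2)
    lp = log_posterior(theta)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        proposal = theta + rng.normal(0.0, step_size, 2)
        lp_prop = log_posterior(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

def gelman_rubin(chains):
    # Potential scale reduction factor for an (m, n, d) array of m chains.
    m, n = chains.shape[0], chains.shape[1]
    means = chains.mean(axis=1)                       # per-chain means
    within = chains.var(axis=1, ddof=1).mean(axis=0)  # W
    between = n * means.var(axis=0, ddof=1)           # B
    return np.sqrt(((n - 1) / n * within + between / n) / within)

# Four independent chains (burn-in discarded) in place of parallel workers
chains = np.stack([mh_chain(4000, 0.5, seed)[1000:] for seed in range(4)])
rhat = gelman_rubin(chains)
```

Values of `rhat` near 1 for every parameter are the convergence signal; the pooled post-burn-in samples then serve as the joint parameter distribution from which simulation uncertainty is propagated.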
NASA Astrophysics Data System (ADS)
Ševecek, Pavel; Broz, Miroslav; Nesvorny, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.
2016-10-01
Detailed models of asteroid collisions can yield important constraints on the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation, and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities, and impact angles. The rheology used in the physical model includes neither friction nor crushing; this allows for a direct comparison to the results of Durda et al. (2007). Resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, although they may be even more different for pre-shattered targets. We derive new parametric relations describing fragment distributions, suitable for Monte-Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to SPH simulations.
Wang, Qin; Wang, Xiang-Bin
2014-01-01
We present a model for the simulation of measurement-device-independent quantum key distribution (MDI-QKD) with phase-randomized general sources. It can be used to predict experimental observations of MDI-QKD with linear channel loss, simulating the corresponding values for the gains, the error rates in different bases, and the final key rates. Our model is applicable to MDI-QKD with an arbitrary probabilistic mixture of different photon states or using any coding scheme. It is therefore useful in characterizing and evaluating the performance of the MDI-QKD protocol, making it a valuable tool in studying quantum key distribution. PMID:24728000
An enhanced lumped element electrical model of a double barrier memristive device
NASA Astrophysics Data System (ADS)
Solan, Enver; Dirkmann, Sven; Hansen, Mirko; Schroeder, Dietmar; Kohlstedt, Hermann; Ziegler, Martin; Mussenbrock, Thomas; Ochs, Karlheinz
2017-05-01
The massively parallel approach of neuromorphic circuits leads to effective methods for solving complex problems. It has turned out that resistive switching devices with a continuous resistance range are potential candidates for such applications. These devices are memristive systems: nonlinear resistors with memory. They are fabricated in nanotechnology, and hence parameter spread during fabrication may hamper reproducible analyses. This issue makes simulation models of memristive devices worthwhile. Kinetic Monte-Carlo simulations based on a distributed model of the device can be used to understand the underlying physical and chemical phenomena. However, such simulations are very time-consuming and convenient neither for investigations of whole circuits nor for real-time applications, e.g. emulation purposes. Instead, a concentrated model of the device can be used for both fast simulations and real-time applications. We introduce an enhanced electrical model of a valence change mechanism (VCM) based double barrier memristive device (DBMD) with a continuous resistance range. This device consists of an ultra-thin memristive layer sandwiched between a tunnel barrier and a Schottky contact. The introduced model leads to very fast simulations using common circuit simulation tools while maintaining physically meaningful parameters. Kinetic Monte-Carlo simulations based on a distributed model, together with experimental data, have been utilized as references to verify the concentrated model.
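Why a concentrated model simulates so much faster than a kinetic Monte-Carlo one can be seen from a generic one-state memristive sketch: a single internal state variable drives the conductance, and each time step is one cheap update. The state equation, window, and parameter values below are illustrative inventions, not the DBMD model from the paper.

```python
import numpy as np

def simulate_memristor(v_in, dt, g_on=1e-3, g_off=1e-6, tau=0.05, k=20.0):
    # Lumped (concentrated) memristive sketch: one internal state x in
    # [0, 1] is driven by the applied voltage and relaxes with time
    # constant tau; conductance interpolates between g_off and g_on.
    x = 0.5
    i_out = np.empty(len(v_in))
    for n, v in enumerate(v_in):
        x += dt * (k * v - x / tau)      # state dynamics (explicit Euler)
        x = min(max(x, 0.0), 1.0)        # hard window on the state
        g = g_off + (g_on - g_off) * x   # conductance interpolation
        i_out[n] = g * v
    return i_out

t = np.arange(0.0, 0.2, 1e-4)
v = 0.5 * np.sin(2.0 * np.pi * 10.0 * t)   # 10 Hz sinusoidal drive
i = simulate_memristor(v, 1e-4)
```

Because the current is zero whenever the voltage is zero while the conductance lags the drive, a loop like this reproduces the pinched hysteresis characteristic of memristive systems, at a per-step cost trivially small compared with tracking individual ionic events.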
NASA Astrophysics Data System (ADS)
Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.
2016-12-01
Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. For that, it is important to identify the model parameters that can change spatial patterns before satellite-based hydrologic model calibration. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET); second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected because it allows the spatial distribution of key soil parameters to change through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison with the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function that employs remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of the 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas the streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow alone improves the streamflow simulations but does not reduce the spatial errors in AET. We will further examine the results of model calibration using only spatial objective functions measuring the association between observed and simulated AET maps, and another case combining spatial and streamflow metrics.
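As a rough sketch of the one-at-a-time screening idea used above (not the authors' actual mHM workflow), the following perturbs each parameter in turn around randomized base points and ranks parameters by the mean absolute change in a model output; the toy model and the parameter names "ksat" and "lai" are hypothetical stand-ins.

```python
import random

def oat_screening(model, base, deltas, trials=50, seed=0):
    """One-at-a-time screening: perturb each parameter around random base
    points and record the mean absolute change in the model output."""
    rng = random.Random(seed)
    effects = {name: 0.0 for name in base}
    for _ in range(trials):
        # Randomize the base point so effects are not tied to one location.
        point = {k: v * rng.uniform(0.8, 1.2) for k, v in base.items()}
        y0 = model(point)
        for name in base:
            bumped = dict(point)
            bumped[name] += deltas[name]
            effects[name] += abs(model(bumped) - y0) / trials
    return effects

# Toy stand-in for a hydrologic response, dominated by 'ksat'.
toy = lambda p: 3.0 * p["ksat"] + 0.1 * p["lai"]
eff = oat_screening(toy, {"ksat": 1.0, "lai": 2.0}, {"ksat": 0.1, "lai": 0.1})
```

Parameters whose effect measure is negligible can then be fixed, reducing the dimensionality of the subsequent calibration.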
NASA Astrophysics Data System (ADS)
Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Rahim Hematiyan, Mohammad; Koontz, Craig; Meigooni, Ali S.
2015-12-01
Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely, Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 newton compression force (43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental and simulated data showed agreement within 11.5% ± 5.9%.
NASA Astrophysics Data System (ADS)
Jiao, Cheng-Liang; Mineshige, Shin; Takeuchi, Shun; Ohsuga, Ken
2015-06-01
We apply our two-dimensional (2D), radially self-similar steady-state accretion flow model to the analysis of hydrodynamic simulation results of supercritical accretion flows. Self-similarity is checked, and the input parameters for the model calculation, such as the advective factor and heat capacity ratio, are obtained from time-averaged simulation data. Solutions of the model are then calculated and compared with the simulation results. We find that in the converged region of the simulation, excluding the part too close to the black hole, the radial distributions of azimuthal velocity v_φ, density ρ, and pressure p basically follow the self-similar assumptions, i.e., they are roughly proportional to r^{-0.5}, r^{-n}, and r^{-(n+1)}, respectively, where n ∼ 0.85 for a mass injection rate of 1000 L_E/c^2 and n ∼ 0.74 for 3000 L_E/c^2. The distributions of v_r and v_θ agree less well with self-similarity, possibly due to convective motions in the r-θ plane. The distributions of velocity, density, and pressure in the θ direction obtained by the steady model agree well with the simulation results within the calculation boundary of the steady model. Outward mass flux in the simulations is overall directed toward a polar angle of 0.8382 rad (∼48.0°) for 1000 L_E/c^2 and 0.7852 rad (∼43.4°) for 3000 L_E/c^2, and ∼94% of the mass inflow is driven away as outflow, while outward momentum and energy fluxes are focused around the polar axis. Parts of these fluxes lie in the region not calculated by the steady model, and special attention should be paid when the model is applied.
Numerical simulation of gas distribution in goaf under Y ventilation mode
NASA Astrophysics Data System (ADS)
Li, Shengzhou; Liu, Jun
2018-04-01
Taking Y-type ventilation of the working face as the research object, a diffusion equation is introduced to simulate the diffusion characteristics of gas, and the Navier-Stokes and Brinkman equations are used to simulate the gas flow in the working face and goaf; on this basis, a physical model of gas flow in the coal mining face was established. Using the numerical simulation software COMSOL Multiphysics, the gas distribution in the goaf under the Y ventilation mode is simulated, and the gas distribution at the working face, the upper corner, and the goaf is analyzed. The results show that the Y-type ventilation system can effectively mitigate the problems of gas accumulation and gas exceeding limits in the upper corner.
Impact of Land Cover Characterization and Properties on Snow Albedo in Climate Models
NASA Astrophysics Data System (ADS)
Wang, L.; Bartlett, P. A.; Chan, E.; Montesano, P.
2017-12-01
The simulation of winter albedo in boreal and northern environments has been a particular challenge for land surface modellers. Assessments of output from CMIP3 and CMIP5 climate models have revealed that many simulations are characterized by overestimation of albedo in the boreal forest. Recent studies suggest that inaccurate representation of vegetation distribution, improper simulation of leaf area index, and poor treatment of canopy-snow processes are the primary causes of albedo errors. While several land cover datasets are commonly used to derive plant functional types (PFTs) for use in climate models, new land cover and vegetation datasets with higher spatial resolution have become available in recent years. In this study, we compare the spatial distribution of the dominant PFTs and canopy cover fractions based on different land cover datasets, and present results from offline simulations of the latest version of the Canadian Land Surface Scheme (CLASS) over Northern Hemisphere land. We discuss the impact of land cover representation and surface properties on winter albedo simulations in climate models.
NASA Astrophysics Data System (ADS)
Henstridge, Martin C.; Batchelor-McAuley, Christopher; Gusmão, Rui; Compton, Richard G.
2011-11-01
Two simple models of electrode surface inhomogeneity based on Marcus-Hush theory are considered: a distribution in formal potentials and a distribution in electron tunnelling distances. Cyclic voltammetry simulated using these models is compared with that simulated using Marcus-Hush theory for a flat, uniform, and homogeneous electrode surface; the two models of surface inhomogeneity yield broadened peaks with decreased peak currents. An edge-plane pyrolytic graphite electrode is covalently modified with ferrocene via 'click' chemistry, and the resulting voltammetry is compared with each of the three previously considered models. The distribution of formal potentials is seen to fit the experimental data most closely.
Different modelling approaches to evaluate nitrogen transport and turnover at the watershed scale
NASA Astrophysics Data System (ADS)
Epelde, Ane Miren; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Garneau, Cyril; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-08-01
This study presents the simulation of hydrological processes and of nutrient transport and turnover using two integrated numerical models: the Soil and Water Assessment Tool (SWAT) (Arnold et al., 1998), an empirical, semi-distributed numerical model, and Modelo Hidrodinâmico (MOHID) (Neves, 1985), a physics-based, fully distributed numerical model. This work shows that both models reproduce water and nitrate export at the watershed scale satisfactorily on an annual and daily basis, with MOHID providing slightly better results. At the watershed scale, both SWAT and MOHID simulated the denitrification amount similarly and satisfactorily. However, as MOHID was the only model able to reproduce adequately the spatial variation of soil hydrological conditions and water table level fluctuations, it proved to be the only model capable of reproducing the spatial variation of nutrient cycling processes that depend on soil hydrological conditions, such as denitrification. This demonstrates the strength of fully distributed, physics-based models for simulating the spatial variability of nutrient cycling processes that depend on the hydrological conditions of the soils.
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems in which a large virtually-shared address space is mapped onto a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots, and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated into a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed by comparing three network models which operate at different levels of accuracy. The comparison and model validation are performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
Simulated CONUS Flash Flood Climatologies from Distributed Hydrologic Models
NASA Astrophysics Data System (ADS)
Flamig, Z.; Gourley, J. J.; Vergara, H. J.; Kirstetter, P. E.; Hong, Y.
2016-12-01
This study will describe a CONUS flash flood climatology created over the period from 2002 through 2011. The MRMS reanalysis precipitation dataset was used as forcing for the Ensemble Framework For Flash Flood Forecasting (EF5). This high-resolution 1-sq-km, 5-minute dataset is ideal for simulating flash floods with a distributed hydrologic model. EF5 features multiple water balance components, including SAC-SMA, CREST, and a hydrophobic model, all coupled with kinematic wave routing. The EF5/SAC-SMA and EF5/CREST water balance schemes were used to create dual flash flood climatologies based on the differing water balance principles. For the period from 2002 through 2011, the daily maximum streamflow, unit streamflow, and time of peak streamflow were stored along with the minimum soil moisture. These variables describe the states of the soils right before a flash flood event and the peak streamflow simulated during the event. The results will be shown, compared, and contrasted. The resulting model simulations will be verified on basins smaller than 1,000 sq km with USGS gauges to ensure the distributed hydrologic models are reliable. The results will also be compared spatially to Storm Data flash flood event observations to judge the degree of agreement between the simulated climatologies and observations.
Hybrid Communication Architectures for Distributed Smart Grid Applications
Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin; ...
2018-04-09
Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, whereas individual technologies have been applied in simulation networks, to our knowledge only limited attention has been paid to the development of a suite of hybrid communication simulation models for communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP addressing mechanisms in one simulation. To close this gap, we have developed a suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar photovoltaic (PV) communications system, including a single-trip latency of 300 ms, throughput of 9.6 Kbps, and a packet loss rate of 1%. In conclusion, the results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy the three performance metrics that are critical for distributed energy resource communications.
The redshift distribution of cosmological samples: a forward modeling approach
NASA Astrophysics Data System (ADS)
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina
2017-08-01
Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes, and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear-like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement these imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
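The core ABC idea in this forward-modeling approach can be sketched generically (this is not the authors' UFig/MCCL pipeline): draw parameters from a prior, forward-simulate a summary statistic, and keep draws whose simulation lands close to the data. The toy simulator and tolerance below are illustrative assumptions.

```python
import random
import statistics

def abc_rejection(observed_stat, simulate, prior_draw,
                  n_draws=5000, tol=0.1, seed=1):
    """Approximate Bayesian Computation by rejection: keep parameter draws
    whose simulated summary statistic lands within `tol` of the data."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_stat) < tol:
            accepted.append(theta)
    return accepted

# Toy example: infer the mean redshift of a sample from a noisy summary
# statistic; the simulator adds Gaussian observational scatter.
simulate = lambda mu, rng: mu + rng.gauss(0.0, 0.05)
prior = lambda rng: rng.uniform(0.0, 2.0)
posterior = abc_rejection(observed_stat=0.8, simulate=simulate,
                          prior_draw=prior)
mean_z = statistics.mean(posterior)   # concentrates near the true value
```

The accepted draws approximate the posterior; in the paper's setting, each "draw" is a full image simulation and the summary statistics are measured on simulated and real images identically.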
A Hybrid Demand Response Simulator Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-05-02
A hybrid demand response simulator (HDRS) is developed to test different control algorithms for centralized and distributed demand response (DR) programs in a small distribution power grid. The HDRS is designed to model a wide variety of DR services such as peak shaving, load shifting, arbitrage, spinning reserves, load following, regulation, and emergency load shedding. The HDRS does not model the dynamic behaviors of the loads; rather, it simulates the load scheduling and dispatch process. The load models include TCAs (thermostatically controlled appliances: water heaters, air conditioners, refrigerators, freezers, etc.) and non-TCAs (lighting, washers, dishwashers, etc.). Ambient temperature changes, thermal resistance, capacitance, and the unit control logic can be modeled for TCA loads. The use patterns of non-TCAs can be modeled by probability of use and probabilistic durations. Some communication network characteristics, such as delays and errors, can also be modeled. Most importantly, because the simulator is modular and greatly simplifies the thermal models for TCA loads, it can be used quickly and easily to test and validate different control algorithms in a simulated environment.
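As a sketch of the kind of simplified TCA thermal model described above (not the HDRS implementation itself), the following simulates a water heater as a first-order thermal circuit with a hysteresis thermostat; all parameter values are illustrative assumptions.

```python
def simulate_water_heater(hours=24, dt_s=60.0, t_amb=20.0,
                          setpoint=55.0, deadband=2.0,
                          r_th=0.1, c_th=2e5, p_heat=4500.0):
    """First-order thermal model of a thermostatically controlled water
    heater: dT/dt = (t_amb - T) / (r_th * c_th) + on * p_heat / c_th,
    with the heating element switched by a hysteresis thermostat.
    """
    temp, on = setpoint, False
    trace = []
    for _ in range(int(hours * 3600 / dt_s)):
        # Unit control logic: switch with hysteresis around the setpoint.
        if temp <= setpoint - deadband:
            on = True
        elif temp >= setpoint + deadband:
            on = False
        dT = (t_amb - temp) / (r_th * c_th) + (p_heat / c_th if on else 0.0)
        temp += dT * dt_s
        trace.append((temp, on))
    return trace

trace = simulate_water_heater()   # (temperature, element state) per minute
```

A DR controller can then be tested by overriding the `on` decision (e.g., deferring heating during a peak window) and observing the temperature excursion.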
Numerical simulation study on the distribution law of smoke flow velocity in horizontal tunnel fire
NASA Astrophysics Data System (ADS)
Liu, Yejiao; Tian, Zhichao; Xue, Junhua; Wang, Wencai
2018-02-01
According to fluid similarity theory, a simulation experiment system for mine tunnel fires is established. The grid division of the experimental model roadway is carried out with GAMBIT software. By setting the boundary and initial conditions of the smoke flow during the fire period in FLUENT, and using the RNG k-ε two-equation turbulence model, the energy equation, and the SIMPLE algorithm, a steady-state numerical simulation of the smoke flow velocity in the mine tunnel is performed to obtain the distribution law of smoke flow velocity in the tunnel during a fire.
How required reserve ratio affects distribution and velocity of money
NASA Astrophysics Data System (ADS)
Xi, Ning; Ding, Ning; Wang, Yougui
2005-11-01
In this paper, the dependence of the wealth distribution and the velocity of money on the required reserve ratio is examined based on a random transfer model of money and computer simulations. A fractional reserve banking system is introduced into the model, where money creation can be achieved by bank loans and the monetary aggregate is determined by the monetary base and the required reserve ratio. It is shown that monetary wealth follows an asymmetric Laplace distribution and the latency time of money follows an exponential distribution. The expressions for the monetary wealth distribution and for the velocity of money in terms of the required reserve ratio are presented and are in good agreement with simulation results.
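The basic random-transfer dynamics can be sketched as follows. This simplified version omits the paper's bank-loan mechanism and only adds the textbook money-multiplier bound relating the monetary aggregate to the base and the required reserve ratio; all sizes and counts are illustrative.

```python
import random

def random_transfer(n_agents=500, rounds=200000, m0=100, seed=2):
    """Basic random-transfer money model: at each step a randomly chosen
    payer gives one unit to a randomly chosen payee, if the payer can pay."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(rounds):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if money[i] > 0:
            money[i] -= 1
            money[j] += 1
    return money

def monetary_aggregate(base, required_reserve_ratio):
    """Money-multiplier upper bound: M = base / ratio."""
    return base / required_reserve_ratio

money = random_transfer()
total = sum(money)   # pure transfers conserve the monetary base
```

With bank loans added, agents' monetary wealth is no longer bounded by the base; the stationary distribution then depends on the reserve ratio through the multiplier, which is the dependence the paper derives analytically.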
Simulation model of stratified thermal energy storage tank using finite difference method
NASA Astrophysics Data System (ADS)
Waluyo, Joko
2016-06-01
A stratified TES tank is normally used in cogeneration plants. Stratified TES tanks are simple, low cost, and equal or superior in thermal performance. The advantage of a TES tank is that it enables shifting of energy usage from off-peak to on-peak demand periods. To increase energy utilization in a stratified TES tank, it is necessary to build a simulation model capable of simulating the charging phenomenon in the tank precisely. This paper aims to develop a novel model addressing this problem. The model incorporates a chiller into the charging of the stratified TES tank in a closed system. It is one-dimensional and accounts for heat transfer, covering the main factors that degrade the temperature distribution, namely conduction through the tank wall, conduction between cool and warm water, the mixing effect of the initial flow during charging, and heat loss to the surroundings. The simulation model is developed with the finite difference method, utilizing buffer concept theory, and solved explicitly. Validation is carried out using observed data obtained from an operating stratified TES tank in a cogeneration plant. The temperature distribution of the model is capable of representing the S-curve pattern as well as simulating the decreased charging temperature after reaching the full condition. The coefficients of determination between the observed data and the model are higher than 0.88, meaning that the model is capable of simulating the charging phenomenon in the stratified TES tank. The model not only generates temperature distributions but can also be enhanced to represent transient conditions during charging. It can thus be applied to address the temperature limitation that occurs when charging the stratified TES tank with an absorption chiller. Further, the stratified TES tank can be charged with the cooling energy of an absorption chiller that utilizes waste heat from the gas turbine of the cogeneration plant.
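A minimal sketch of an explicit finite-difference model of tank charging follows, assuming plug flow with upwind advection plus axial conduction; it omits the paper's wall-conduction, mixing, and heat-loss terms, and all parameter values are illustrative.

```python
def charge_tank(n_nodes=50, height=2.0, dt=1.0, steps=3600,
                t_init=12.0, t_charge=6.0, velocity=1e-4, alpha=1.4e-7):
    """Explicit 1-D finite-difference model of TES-tank charging:
    cold water enters at node 0 (upwind advection) and axial conduction
    smears the thermocline. Returns the final temperature profile."""
    dz = height / n_nodes
    temp = [t_init] * n_nodes
    for _ in range(steps):
        new = temp[:]
        for k in range(n_nodes):
            upstream = t_charge if k == 0 else temp[k - 1]
            adv = velocity * (upstream - temp[k]) / dz
            left = temp[k - 1] if k > 0 else temp[k]
            right = temp[k + 1] if k < n_nodes - 1 else temp[k]
            cond = alpha * (left - 2.0 * temp[k] + right) / dz**2
            new[k] = temp[k] + dt * (adv + cond)
        temp = new
    return temp

profile = charge_tank()   # S-shaped thermocline between 6 °C and 12 °C
```

The explicit scheme is stable here because both the advective Courant number (v·dt/dz) and the diffusion number (α·dt/dz²) are far below their limits for these parameters.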
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NPs) from the Galactic disk using Markov chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement; the advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log-likelihood obtained from comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics; the advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present, assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs, along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program, and the NASA Astrophysics Theory and Fundamental Program.
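The parameter-space exploration described above can be sketched with a generic random-walk Metropolis-Hastings sampler (not the authors' code); the one-parameter Gaussian log-likelihood below is a toy stand-in for the likelihood obtained from comparing simulated and detected pulsar distributions.

```python
import math
import random

def metropolis_hastings(log_like, start, step=0.5, n_steps=20000, seed=3):
    """Random-walk Metropolis-Hastings over a single model parameter."""
    rng = random.Random(seed)
    x, lx = start, log_like(start)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        lp = log_like(prop)
        if math.log(rng.random()) < lp - lx:   # accept with prob e^(lp-lx)
            x, lx = prop, lp
        chain.append(x)
    return chain

# Toy likelihood: the simulated population matches the data best at 1.5.
chain = metropolis_hastings(lambda mu: -0.5 * ((mu - 1.5) / 0.3) ** 2,
                            start=0.0)
post_mean = sum(chain[5000:]) / len(chain[5000:])   # burn-in discarded
```

The density of chain samples in a region of parameter space then directly yields the confidence region the second method provides.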
NASA Astrophysics Data System (ADS)
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be generated. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin hypercube sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
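The two ingredients of the unconditional scheme can be sketched as follows (a minimal illustration, not the study's implementation): stratified standard-normal LHS draws, and a lower-triangular factor of the covariance matrix (the "LU decomposition" role) that imposes the spatial correlation. The 2-node covariance with correlation 0.8 is an illustrative assumption.

```python
import random
from statistics import NormalDist

def cholesky(cov):
    """Lower-triangular L with L Lᵀ = cov, used to correlate samples."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = ((cov[i][i] - s) ** 0.5 if i == j
                       else (cov[i][j] - s) / L[j][j])
    return L

def lhs_normal(n_samples, n_vars, rng):
    """Latin hypercube draws from the standard normal: one point per
    equal-probability stratum, shuffled independently per variable."""
    cols = []
    for _ in range(n_vars):
        probs = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(probs)
        cols.append([NormalDist().inv_cdf(p) for p in probs])
    return [[cols[v][s] for v in range(n_vars)] for s in range(n_samples)]

# Correlate two grid nodes with correlation 0.8.
rng = random.Random(4)
L = cholesky([[1.0, 0.8], [0.8, 1.0]])
field = [[sum(L[i][k] * z[k] for k in range(2)) for i in range(2)]
         for z in lhs_normal(1000, 2, rng)]
```

Because each marginal is stratified, the sample statistics converge with far fewer realizations than plain Monte Carlo, which is the efficiency gain the study reports.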
Cabaraban, Maria Theresa I; Kroll, Charles N; Hirabayashi, Satoshi; Nowak, David J
2013-05-01
A distributed adaptation of i-Tree Eco was used to simulate dry deposition in an urban area. This investigation focused on the effects of varying temperature, LAI, and NO2 concentration inputs on estimated NO2 dry deposition to trees in Baltimore, MD. A coupled modeling system is described, wherein WRF provided temperature and LAI fields, and CMAQ provided NO2 concentrations. A base case simulation was conducted using built-in distributed i-Tree Eco tools, and simulations using different inputs were compared against this base case. Differences in land cover classification and tree cover between the distributed i-Tree Eco and WRF resulted in changes in estimated LAI, which in turn resulted in variations in simulated NO2 dry deposition. Estimated NO2 removal decreased when CMAQ-derived concentration was applied to the distributed i-Tree Eco simulation. Discrepancies in temperature inputs did little to affect estimates of NO2 removal by dry deposition to trees in Baltimore.
NASA Astrophysics Data System (ADS)
Riecken, Mark; Lessmann, Kurt; Schillero, David
2016-05-01
The Data Distribution Service (DDS) standard was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact, and which distributed simulation domains will be most affected? DDS shares many of the goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential, we predict a large base of technology will be built around this distributed data paradigm, much of which could directly benefit the distributed M&S community. In this paper, we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things and its data communications mechanisms, such as DDS, share properties in common with distributed modeling and simulation (M&S) and its protocols, such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework, based on the sensor use case, for how the two communities of practice (CoPs) can benefit from one another and achieve greater capability in practical distributed computing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toyosada, M.; Niwa, T.
1995-12-31
In this paper, Newman's calculation model is modified to account for the previously neglected change of the stress distribution ahead of a crack and to retain elastic-plastic material along the crack surface, preserving compatibility with the Dugdale model. In addition to the above treatment, the authors introduce plastic shrinkage at the instant new crack surfaces are generated, due to the release of internal force at the yield-stress level during the unloading process. Moreover, the model is extended to arbitrary stress distribution fields. Using the model, the RPG load is simulated for a center-notched specimen under constant-amplitude loading with various stress ratios and a decreased maximum load while keeping the minimum load constant.
A Petri Net model for distributed energy system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konopko, Joanna
2015-12-31
Electrical networks need to evolve to become more intelligent, more flexible and less costly. The smart grid, the next generation of the power network, uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and an intelligent electricity infrastructure. In this paper, a complete model of a smart grid is proposed using Generalized Stochastic Petri Nets (GSPN). Simulation of the created model is also explored; it allows an analysis of how closely the behavior of the model matches the usage of a real smart grid.
Nucleation and growth in one dimension. I. The generalized Kolmogorov-Johnson-Mehl-Avrami model
NASA Astrophysics Data System (ADS)
Jun, Suckjoon; Zhang, Haiyang; Bechhoefer, John
2005-01-01
Motivated by a recent application of the Kolmogorov-Johnson-Mehl-Avrami (KJMA) model to the study of DNA replication, we consider the one-dimensional (1D) version of this model. We generalize previous work to the case where the nucleation rate is an arbitrary function I(t) and obtain analytical results for the time-dependent distributions of various quantities (such as the island distribution). We also present improved computer simulation algorithms to study the 1D KJMA model. The analytical results and simulations are in excellent agreement.
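The 1D KJMA dynamics described above (nucleation at a rate I(t) plus constant-speed island growth) can be illustrated with a minimal lattice Monte Carlo. The lattice size, rate, and step count below are invented for illustration, not parameters from the paper; replacing the constant rate with any function of t gives the generalized setting solved analytically there.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10_000        # ring of lattice sites (hypothetical discretization)
I0 = 1e-4         # nucleation probability per untransformed site per step
steps = 120       # growth speed: one site per island edge per step

state = np.zeros(L, dtype=bool)   # False = untransformed phase
untransformed = []

for t in range(steps):
    # Nucleation: only untransformed sites may nucleate; substitute
    # any I(t) here for the time-dependent case.
    state |= (~state) & (rng.random(L) < I0)
    # Growth: dilate every island by one site on each side (periodic ring).
    state |= np.roll(state, 1) | np.roll(state, -1)
    untransformed.append(1.0 - state.mean())

# For constant I, the KJMA extended-volume argument predicts the
# untransformed fraction to decay roughly as exp(-I v t^2).
```

For constant nucleation the simulated decay should track the classic Avrami form; time-dependent rates reproduce the island-size statistics the paper derives.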
Grid Integrated Distributed PV (GridPV) Version 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reno, Matthew J.; Coogan, Kyle
2014-12-01
This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
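As a rough illustration of the Inverse Transform Sampling idea in its basic univariate form (not the extended multi-variate version the report develops), one can invert an empirical CDF by interpolation. The gamma-distributed stand-in data below are a made-up placeholder for measured disturbance values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for one variate of the measured disturbance:
# the gamma parameters are invented for illustration only.
data = rng.gamma(shape=2.0, scale=3.0, size=5_000)

def inverse_transform_sample(empirical, n, rng):
    """Draw n samples by inverting the empirical CDF: sort the data,
    assign plotting-position probabilities, and map uniform variates
    through linear interpolation of the quantile function."""
    q = np.sort(empirical)
    p = (np.arange(q.size) + 0.5) / q.size
    return np.interp(rng.random(n), p, q)

samples = inverse_transform_sample(data, 20_000, rng)
# samples should reproduce the empirical quantiles of `data` closely
```

Because the interpolated quantile function is bounded by the observed extremes, every generated sample falls inside the range of the empirical data, which is one reason extensions are needed for tail behavior.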
Analysis of Road Network Pattern Considering Population Distribution and Central Business District
Zhao, Fangxia; Sun, Huijun; Wu, Jianjun; Gao, Ziyou; Liu, Ronghui
2016-01-01
This paper proposes a road network growing model with the consideration of population distribution and central business district (CBD) attraction. In the model, the relative neighborhood graph (RNG) is introduced as the connection mechanism to capture the characteristics of road network topology. A simulation experiment is set up to illustrate the effects of population distribution and CBD attraction on the characteristics of the road network. Moreover, several topological attributes of the road network are evaluated using coverage, circuitness, treeness and total length in the experiment. Finally, the suggested model is verified in simulations of the China and Beijing highway networks.
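The relative neighborhood graph used as the connection mechanism above has a simple brute-force construction: two points are connected unless a third point is closer to both. The sketch below, with hypothetical random demand nodes, is a generic O(n^3) illustration, not the paper's implementation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
points = rng.random((30, 2))   # hypothetical demand-node locations

def rng_edges(pts):
    """Relative neighborhood graph: p and q are connected unless some
    third point r satisfies max(d(p,r), d(q,r)) < d(p,q)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    n = len(pts)
    edges = []
    for p, q in combinations(range(n), 2):
        blocked = any(max(d[p, r], d[q, r]) < d[p, q]
                      for r in range(n) if r != p and r != q)
        if not blocked:
            edges.append((p, q))
    return edges

edges = rng_edges(points)
```

The RNG contains the Euclidean minimum spanning tree and is a subgraph of the Delaunay triangulation, so the resulting network is connected yet sparse and planar, which is why it yields plausible road topologies.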
NASA Astrophysics Data System (ADS)
Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars
2013-08-01
It has been widely claimed that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. Thus, it is necessary to introduce a framework to measure to what extent power system operation will be changed by various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlation between DER deployment and distribution grid operation. Furthermore, to validate the framework, the authors describe reference models for different categories of DERs with their unique characteristics, comprising distributed generation, active demand and electric vehicles. Subsequently, quantitative analysis is made on the basis of current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed in two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment brings the possibility to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.
Modeling and Simulation of Quenching and Tempering Process in steels
NASA Astrophysics Data System (ADS)
Deng, Xiaohu; Ju, Dongying
Quenching and tempering (Q&T) is a combined heat treatment process used to achieve maximum toughness and ductility at a specified hardness and strength. It is important to develop a mathematical model of the quenching and tempering process that satisfies mechanical-property requirements at low cost. This paper presents a modified model to predict structural evolution and hardness distribution during the quenching and tempering of steels. The model takes into account tempering parameters, carbon content, and isothermal and non-isothermal transformations. Moreover, precipitation of transition carbides, decomposition of retained austenite and precipitation of cementite can each be simulated. Hardness distributions of the quenched and tempered workpiece are predicted by an experimental regression equation. To validate the model, it is employed to predict the tempering of 80MnCr5 steel. The predicted precipitation dynamics of transition carbides and cementite are consistent with previous experimental and simulated results from the literature. The model is then implemented within the framework of the developed simulation code COSMAP to simulate microstructure, stress and distortion in the heat-treated component, and applied to simulate the Q&T process of J55 steel. The calculated results show good agreement with the experimental ones, indicating that the model is effective for simulating the Q&T process of steels.
EFFECTS OF ELECTROOSMOSIS ON SOIL TEMPERATURE AND HYDRAULIC HEAD: II. NUMERICAL SIMULATION
A numerical model to simulate the distributions of voltage, soil temperature, and hydraulic head during the field test of electroosmosis was developed. The two-dimensional governing equations for the distributions of voltage, soil temperature, and hydraulic head within a cylindri...
NASA Technical Reports Server (NTRS)
Kurzeja, R. J.; Haggard, K. V.; Grose, W. L.
1981-01-01
Three experiments have been performed using a three-dimensional, spectral quasi-geostrophic model in order to investigate the sensitivity of ozone transport to tropospheric orographic and thermal effects and to the zonal wind distribution. In the first experiment, the ozone distribution averaged over the last 30 days of a 60 day transport simulation was determined; in the second experiment, the transport simulation was repeated, but nonzonal orographic and thermal forcing was omitted; and in the final experiment, the simulation was conducted with the intensity and position of the stratospheric jets altered by addition of a Newtonian cooling term to the zonal-mean diabatic heating rate. Results of the three experiments are summarized by comparing the zonal-mean ozone distribution, the amplitude of eddy geopotential height, the zonal winds, and zonal-mean diabatic heating.
Modeling complexity in engineered infrastructure system: Water distribution network as an example
NASA Astrophysics Data System (ADS)
Zeng, Fang; Li, Xiang; Li, Ke
2017-02-01
The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of the demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed following the combination of local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs on some structural properties. Comparison with different modeling approaches indicates that a realistic demand-node distribution and the co-evolution of demand nodes and the network are important for simulating real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement of efficiency by engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
Finite Element Aircraft Simulation of Turbulence
NASA Technical Reports Server (NTRS)
McFarland, R. E.
1997-01-01
A turbulence model has been developed for realtime aircraft simulation that accommodates stochastic turbulence and distributed discrete gusts as a function of the terrain. This model is applicable to conventional aircraft, V/STOL aircraft, and disc rotor model helicopter simulations. Vehicle angular activity in response to turbulence is computed from geometrical and temporal relationships rather than by using the conventional continuum approximations that assume uniform gust immersion and low frequency responses. By using techniques similar to those recently developed for blade-element rotor models, the angular-rate filters of conventional turbulence models are not required. The model produces rotational rates as well as air mass translational velocities in response to both stochastic and deterministic disturbances, where the discrete gusts and turbulence magnitudes may be correlated with significant terrain features or ship models. Assuming isotropy, a two-dimensional vertical turbulence field is created. A novel Gaussian interpolation technique is used to distribute vertical turbulence on the wing span or lateral rotor disc, and this distribution is used to compute roll responses. Air mass velocities are applied at significant centers of pressure in the computation of the aircraft's pitch and roll responses.
NASA Astrophysics Data System (ADS)
Condon, L. E.; Maxwell, R. M.; Kollet, S. J.; Maher, K.; Haggerty, R.; Forrester, M. M.
2016-12-01
Although previous studies have demonstrated fractal residence time distributions in small watersheds, analyzing residence time scaling over large spatial areas is difficult with existing observational methods. For this study we use a fully integrated groundwater-surface water simulation combined with Lagrangian particle tracking to evaluate connections between residence time distributions and watershed characteristics such as geology, topography and climate. Our simulation spans more than six million square kilometers of the continental US, encompassing a broad range of watershed sizes and physiographic settings. Simulated results demonstrate power law residence time distributions with peak ages ranging from 1.5 to 10.5 years. These ranges agree well with previous observational work and demonstrate the feasibility of using integrated models to simulate residence times. Comparing behavior between eight major watersheds, we show spatial variability in both the peak and the variance of the residence time distributions that can be related to model inputs. Peak age is well correlated with basin-averaged hydraulic conductivity, and the semi-variance corresponds to aridity. While power law age distributions have previously been attributed to fractal topography, these results illustrate the importance of subsurface characteristics and macroclimate as additional controls on groundwater configuration and residence times.
Transverse Momentum Distributions of Electron in Simulated QED Model
NASA Astrophysics Data System (ADS)
Kaur, Navdeep; Dahiya, Harleen
2018-05-01
In the present work, we have studied the transverse momentum distributions (TMDs) of the electron in a simulated QED model. We have used the overlap representation of light-front wave functions, where the spin-1/2 relativistic composite system consists of a spin-1/2 fermion and a spin-1 vector boson. The results have been obtained for T-even TMDs in the transverse momentum plane for a fixed value of the longitudinal momentum fraction x.
Zhang, Baihua; Li, Jianhua; Yue, Yong; Qian, Wei
2017-01-01
Using the computational fluid dynamics (CFD) method, the feasibility of simulating transient airflow in a CT-based airway tree with more than 100 outlets over a whole respiratory period is studied, and the influence of truncation of terminal bronchi on CFD characteristics is investigated. After an airway model with 122 outlets is extracted from CT images, the transient airflow is simulated. Spatial and temporal variations of flow velocity, wall pressure, and wall shear stress are presented; the flow pattern and lobar distribution of air are obtained as well. All results are compared with those of a truncated model with 22 outlets. It is found that the flow pattern shows lobar heterogeneity: near-wall air in the trachea is inhaled into the upper lobe while the center flow enters the other lobes, and the lobar distribution of air is significantly correlated with the outlet area ratio. Truncation decreases airflow to the right and left upper lobes and increases the deviation of airflow distributions between inspiration and expiration. Simulating transient airflow in an airway tree model with 122 bronchi using CFD is feasible. The model with more terminal bronchi decreases the difference between the lobar distributions at inspiration and at expiration.
Wealth distribution on complex networks
NASA Astrophysics Data System (ADS)
Ichinomiya, Takashi
2012-12-01
We study the wealth distribution of the Bouchaud-Mézard model on complex networks. It is known from numerical simulations that this distribution depends on the topology of the network; however, no one has succeeded in explaining it. Using "adiabatic" and "independent" assumptions along with the central-limit theorem, we derive equations that determine the probability distribution function. The results are compared to those of simulations for various networks. We find good agreement between our theory and the simulations, except for the case of Watts-Strogatz networks with a low rewiring rate, due to the breakdown of the independence assumption.
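A minimal Euler-Maruyama discretization of the mean-field (complete-graph) limit of the Bouchaud-Mézard dynamics can illustrate the model being analyzed. The parameter values below are arbitrary, and this sketch omits the non-trivial network topologies the paper studies.

```python
import numpy as np

rng = np.random.default_rng(7)

N, steps, dt = 200, 4000, 0.01
J, sigma = 0.1, 0.3     # exchange strength and noise amplitude (invented)

# Euler-Maruyama step of dW_i = sigma W_i dB_i + J (<W> - W_i) dt,
# the mean-field (complete-graph) Bouchaud-Mezard model: multiplicative
# investment noise plus linear exchange toward the average wealth.
W = np.ones(N)
for _ in range(steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(N)
    W = W + W * noise + J * (W.mean() - W) * dt
    W = np.maximum(W, 1e-12)       # guard against a rare negative step

W_norm = W / W.mean()              # normalized wealth x_i = W_i / <W>
# Mean-field theory predicts a stationary distribution with a
# power-law tail whose exponent is controlled by J / sigma^2.
```

On a general graph the exchange term would couple each node only to its neighbors, which is where the topology dependence analyzed in the paper enters.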
NASA Astrophysics Data System (ADS)
Girard, L.; Weiss, J.; Molines, J. M.; Barnier, B.; Bouillon, S.
2009-08-01
Sea ice drift and deformation from models are evaluated on the basis of statistical and scaling properties. These properties are derived from two observation data sets: the RADARSAT Geophysical Processor System (RGPS) and buoy trajectories from the International Arctic Buoy Program (IABP). Two simulations obtained with the Louvain-la-Neuve Ice Model (LIM) coupled to a high-resolution ocean model and a simulation obtained with the Los Alamos Sea Ice Model (CICE) were analyzed. Model ice drift compares well with observations in terms of large-scale velocity field and distributions of velocity fluctuations, although a significant bias in the mean ice speed is noted. On the other hand, the statistical properties of ice deformation are not well simulated by the models: (1) The distributions of strain rates are incorrect: RGPS distributions of strain rates are power law tailed, i.e., exhibit "wild randomness," whereas model distributions remain in the Gaussian attraction basin, i.e., exhibit "mild randomness." (2) The models are unable to reproduce the spatial and temporal correlations of the deformation fields: In the observations, ice deformation follows spatial and temporal scaling laws that express the heterogeneity and the intermittency of deformation. These relations do not appear in simulated ice deformation. Mean deformation in models is almost scale independent. The statistical properties of ice deformation are a signature of the ice mechanical behavior. The present work therefore suggests that the mechanical framework currently used by models is inappropriate. A different modeling framework based on elastic interactions could improve the representation of the statistical and scaling properties of ice deformation.
Glyph-based analysis of multimodal directional distributions in vector field ensembles
NASA Astrophysics Data System (ADS)
Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger
2015-04-01
Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing direction, spread and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
Simplified energy-balance model for pragmatic multi-dimensional device simulation
NASA Astrophysics Data System (ADS)
Chang, Duckhyun; Fossum, Jerry G.
1997-11-01
To pragmatically account for non-local carrier heating and hot-carrier effects such as velocity overshoot and impact ionization in multi-dimensional numerical device simulation, a new simplified energy-balance (SEB) model is developed and implemented in FLOODS[16] as a pragmatic option. In the SEB model, the energy-relaxation length is estimated from a pre-process drift-diffusion simulation using the carrier-velocity distribution predicted throughout the device domain, and is used without change in a subsequent simpler hydrodynamic (SHD) simulation. The new SEB model was verified by comparison of two-dimensional SHD and full HD DC simulations of a submicron MOSFET. The SHD simulations yield detailed distributions of carrier temperature, carrier velocity, and impact-ionization rate, which agree well with the full HD simulation results obtained with FLOODS. The most noteworthy feature of the new SEB/SHD model is its computational efficiency, which results from reduced Newton iteration counts caused by the enhanced linearity. Relative to full HD, SHD simulation times can be shorter by as much as an order of magnitude since larger voltage steps for DC sweeps and larger time steps for transient simulations can be used. The improved computational efficiency can enable pragmatic three-dimensional SHD device simulation as well, for which the SEB implementation would be straightforward as it is in FLOODS or any robust HD simulator.
NASA Astrophysics Data System (ADS)
Kuras, P. K.; Weiler, M.; Alila, Y.; Spittlehouse, D.; Winkler, R.
2006-12-01
Hydrologic models have been increasingly used in forest hydrology to overcome the limitations of paired watershed experiments, where vegetative recovery and natural variability obscure the inferences and conclusions that can be drawn from such studies. Models, however, are also plagued by uncertainty stemming from a limited understanding of hydrological processes in forested catchments, and parameter equifinality is a common concern. This has created the necessity to improve our understanding of how hydrological systems work, through the development of hydrological measures, analyses and models that address the question: are we getting the right answers for the right reasons? Hence, physically-based, spatially-distributed hydrologic models should be validated with high-quality experimental data describing multiple concurrent internal catchment processes under a range of hydrologic regimes. The distributed hydrology soil vegetation model (DHSVM) frequently used in forest management applications is an example of a process-based model used to address the aforementioned circumstances, and this study takes a novel approach by collectively examining the ability of a pre-calibrated model application to realistically simulate outlet flows along with the spatial-temporal variation of internal catchment processes including: continuous groundwater dynamics at 9 locations, stream and road network flow at 67 locations for six individual days throughout the freshet, and pre-melt season snow distribution. Model efficiency was improved over prior evaluations due to continuous efforts in improving the quality of meteorological data in the watershed. Road and stream network flows were very well simulated for a range of hydrological conditions, and the spatial distribution of the pre-melt season snowpack was in general agreement with observed values.
The model was effective in simulating the spatial variability of subsurface flow generation, except at locations where strong stream-groundwater interactions existed, as the model is not capable of simulating such processes and subsurface flows always drain to the stream network. The model has proven overall to be quite capable in realistically simulating internal catchment processes in the watershed, which creates more confidence in future model applications exploring the effects of various forest management scenarios on the watershed's hydrological processes.
A THREE-DIMENSIONAL MODEL ASSESSMENT OF THE GLOBAL DISTRIBUTION OF HEXACHLOROBENZENE
The distributions of persistent organic pollutants (POPs) in the global environment have been studied typically with box/fugacity models with simplified treatments of atmospheric transport processes [1]. Such models are incapable of simulating the complex three-dimensional mechanis...
Secretarial Administration: Project In/Vest: Insurance Simulation Insures Learning
ERIC Educational Resources Information Center
Geier, Charlene
1978-01-01
Describes a simulated model office to replicate various insurance occupations set up in Greenfield High School, Wisconsin. Local insurance agents and students from other disciplines, such as distributive education, are involved in the simulation. The training is applicable to other business office positions, as it models not only an insurance…
SIMULATING SUB-DECADAL CHANNEL MORPHOLOGIC CHANGE IN EPHEMERAL STREAM NETWORKS
A distributed watershed model was modified to simulate cumulative channel morphologic change from multiple runoff events in ephemeral stream networks. The model incorporates the general design of the event-based Kinematic Runoff and Erosion Model (KINEROS), which describes t...
The Distribution of Snow Black Carbon observed in the Arctic and Compared to the GISS-PUCCINI Model
NASA Technical Reports Server (NTRS)
Dou, T.; Xiao, C.; Shindell, D. T.; Liu, J.; Eleftheriadis, K.; Ming, J.; Qin, D.
2012-01-01
In this study, we evaluate the ability of the latest NASA GISS composition-climate model, GISS-E2-PUCCINI, to simulate the spatial distribution of snow BC (sBC) in the Arctic relative to present-day observations. Radiative forcing due to BC deposition onto Arctic snow and sea ice is also estimated. Two sets of model simulations are analyzed, where meteorology is linearly relaxed towards National Centers for Environmental Prediction (NCEP) and towards NASA Modern-Era Retrospective analysis for Research and Applications (MERRA) reanalyses. Results indicate that the modeled concentrations of sBC are comparable with present-day observations in and around the Arctic Ocean, except for apparent underestimation at a few sites in the Russian Arctic. That said, the model has some biases in its simulated spatial distribution of BC deposition to the Arctic. The simulations from the two model runs are roughly equal, indicating that discrepancies between model and observations come from other sources. Underestimation of biomass burning emissions in Northern Eurasia may be the main cause of the low biases in the Russian Arctic. Comparisons of modeled aerosol BC (aBC) with long-term surface observations at Barrow, Alert, Zeppelin and Nord stations show significant underestimation of winter and spring concentrations in the Arctic (most significant in Alaska), although the simulated seasonality of aBC has been greatly improved relative to earlier model versions. This is consistent with simulated biases in vertical profiles of aBC, with underestimation in the lower and middle troposphere but overestimation in the upper troposphere and lower stratosphere, suggesting that the wet removal processes in the current model may be too weak or that vertical transport is too rapid, although the simulated BC lifetime seems reasonable. The combination of observations and modeling provides a comprehensive distribution of sBC over the Arctic.
On the basis of this distribution, we estimate the decrease in snow and sea ice albedo and the resulting radiative forcing. We suggest that the albedo reduction due to BC deposition presents significant space-time variations, with the highest mean reductions of 1.25% in the Russian Arctic, which are much larger than those in other Arctic regions (0.39% to 0.64%). The averaged value over the Arctic north of 66°N is 0.4-0.6% during spring, leading to regional surface radiative forcings of 0.7, 1.1 and 1.0 W m⁻² in spring 2007, 2008 and 2009, respectively.
A compositional reservoir simulator on distributed memory parallel computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rame, M.; Delshad, M.
1995-12-31
This paper presents the application of distributed memory parallel computers to field-scale reservoir simulations using a parallel version of UTCHEM, The University of Texas Chemical Flooding Simulator. The model is a general purpose, highly vectorized chemical compositional simulator that can simulate a wide range of displacement processes at both field and laboratory scales. The original simulator was modified to run on both distributed memory parallel machines (Intel iPSC/860 and Delta, Connection Machine 5, Kendall Square 1 and 2, and CRAY T3D) and a cluster of workstations. A domain decomposition approach has been taken towards parallelization of the code. A portion of the discrete reservoir model is assigned to each processor by a set-up routine that attempts a data layout as even as possible from the load-balance standpoint. Each of these subdomains is extended so that data can be shared between adjacent processors for stencil computation. The added routines that make parallel execution possible are written in a modular fashion that makes porting to new parallel platforms straightforward. Results of the distributed memory computing performance of the parallel simulator are presented for field-scale applications such as tracer flood and polymer flood. A comparison of the wall-clock times for the same problems on a vector supercomputer is also presented.
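The domain-decomposition idea described above (assign each processor a subdomain, extend it with shared cells, then apply the stencil) can be illustrated serially: split a 1-D grid into blocks with one ghost cell per side, exchange halos, and check the stitched result against a monolithic update. This is a pedagogical sketch of the general pattern, not UTCHEM's scheme; the diffusion stencil and block counts are invented.

```python
import numpy as np

# Split a 1-D "reservoir" grid among P pseudo-processors.
P, n_per = 4, 25
field = np.linspace(0.0, 1.0, P * n_per)
blocks = [field[i*n_per:(i+1)*n_per].copy() for i in range(P)]

def halo_exchange(blocks):
    """Pad each block with one ghost cell per side: interior ghosts come
    from the neighboring block; boundary ghosts replicate the edge value
    (a zero-flux boundary condition)."""
    padded = []
    for i, b in enumerate(blocks):
        left = blocks[i-1][-1] if i > 0 else b[0]
        right = blocks[i+1][0] if i < len(blocks) - 1 else b[-1]
        padded.append(np.concatenate(([left], b, [right])))
    return padded

for _ in range(10):   # a few explicit diffusion steps on the subdomains
    padded = halo_exchange(blocks)
    blocks = [p[1:-1] + 0.25*(p[:-2] - 2*p[1:-1] + p[2:]) for p in padded]

stitched = np.concatenate(blocks)   # identical to a monolithic update
```

Because each ghost cell holds exactly the neighbor's edge value before every step, the decomposed update is bit-for-bit equivalent to running the same stencil on the undivided grid, which is the property a real message-passing implementation must preserve.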
NASA Astrophysics Data System (ADS)
Kuznetsova, M. M.; Liu, Y. H.; Rastaetter, L.; Pembroke, A. D.; Chen, L. J.; Hesse, M.; Glocer, A.; Komar, C. M.; Dorelli, J.; Roytershteyn, V.
2016-12-01
The presentation will provide an overview of new tools, services and models implemented at the Community Coordinated Modeling Center (CCMC) to facilitate MMS dayside results analysis. We will provide updates on the implementation of Particle-in-Cell (PIC) simulations at the CCMC and opportunities for on-line visualization and analysis of results of PIC simulations of asymmetric magnetic reconnection for different guide fields and boundary conditions. Fields, plasma parameters, particle distribution moments, as well as particle distribution functions calculated in selected regions of the vicinity of reconnection sites, can be analyzed through the web-based interactive visualization system. In addition, there are options to request distribution functions in user-selected regions of interest and to fly through simulated magnetic reconnection configurations and a map of distributions to facilitate comparisons with observations. A broad collection of global magnetosphere models hosted at the CCMC provides an opportunity to put MMS observations and local PIC simulations into global context. We recently implemented the RECON-X post-processing tool (Glocer et al., 2016), which allows users to determine the location of the separator surface around closed field lines and between open field lines and solar wind field lines. The tool also finds the separatrix line where the two surfaces touch and the positions of magnetic nulls. The surfaces and the separatrix line can be visualized relative to satellite positions in the dayside magnetosphere using an interactive HTML-5 visualization for each time step processed. To validate global magnetosphere models' capability to simulate locations of dayside magnetosphere boundaries, we will analyze the proximity of MMS to simulated separatrix locations for a set of MMS diffusion region crossing events.
Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T
2014-09-15
Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were greatly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions.
Simulation modeling for the health care manager.
Kennedy, Michael H
2009-01-01
This article addresses the use of simulation software to solve administrative problems faced by health care managers. Spreadsheet add-ins, process simulation software, and discrete event simulation software are available at a range of costs and complexity. All use the Monte Carlo method to realistically integrate probability distributions into models of the health care environment. Problems typically addressed by health care simulation modeling are facility planning, resource allocation, staffing, patient flow and wait time, routing and transportation, supply chain management, and process improvement.
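The Monte Carlo logic behind such tools can be sketched with a minimal single-server patient-flow model (all parameters here are hypothetical illustrations, not tied to any of the software categories named in the abstract): exponential interarrival and service times are sampled, and the average patient wait is estimated.

```python
import random

def simulate_clinic(n_patients, mean_interarrival, mean_service, seed=0):
    """Monte Carlo model of a single-server clinic: exponential
    interarrival and service times; returns the average wait between a
    patient's arrival and the start of service."""
    rng = random.Random(seed)
    arrival = 0.0       # arrival time of the current patient
    server_free = 0.0   # time at which the server next becomes idle
    total_wait = 0.0
    for _ in range(n_patients):
        arrival += rng.expovariate(1.0 / mean_interarrival)
        start = max(arrival, server_free)
        total_wait += start - arrival
        server_free = start + rng.expovariate(1.0 / mean_service)
    return total_wait / n_patients

# heavily loaded clinic (utilization 0.8): waits grow quickly
avg_wait = simulate_clinic(10_000, mean_interarrival=10.0, mean_service=8.0)
```

Commercial discrete-event packages add resources, routing and animation on top of this same sampling core.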
WATER QUALITY MODELING AND SAMPLING STUDY IN A DISTRIBUTION SYSTEM
A variety of computer-based models have been developed and used by the water industry to assess the movement and fate of contaminants within the distribution system. Such models include: dynamic and steady state hydraulic models which simulate the flow quantity, flow direction, and...
Liu, Yupeng; Yu, Deyong; Xun, Bin; Sun, Yun; Hao, Ruifang
2014-01-01
Climate changes may have immediate implications for forest productivity and may produce dramatic shifts in tree species distributions in the future. Quantifying these implications is significant for both scientists and managers. Cunninghamia lanceolata is an important coniferous timber species due to its fast growth and wide distribution in China. This paper proposes a methodology for assessing the distribution and productivity of C. lanceolata against a background of climate change. First, we simulated the potential distributions and establishment probabilities of C. lanceolata based on a species distribution model. Second, a process-based model, the PnET-II model, was calibrated and its parameterization of water balance improved. Finally, the improved PnET-II model was used to simulate the net primary productivity (NPP) of C. lanceolata. The simulated NPP and potential distribution were combined to produce an integrated indicator, the estimated total NPP, which serves to comprehensively characterize the productivity of the forest under climate change. The results of the analysis showed that (1) the distribution of C. lanceolata will increase in central China, but the mean probability of establishment will decrease in the 2050s; (2) the PnET-II model was improved, calibrated, and successfully validated for the simulation of the NPP of C. lanceolata in China; and (3) all scenarios predicted a reduction in total NPP in the 2050s, with a markedly lower reduction under the A2 scenario than under the B2 scenario. The changes in NPP suggested that forest productivity will show a large decrease in southern China and a mild increase in central China. All of these findings could improve our understanding of the impact of climate change on forest ecosystem structure and function and could provide a basis for policy-makers to apply adaptive measures and overcome the unfavorable influences of climate change.
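The combination step behind the integrated indicator can be sketched as a probability-weighted sum over grid cells. The exact weighting used in the study is not stated in the abstract, so this form (and the numbers in the example) are assumptions for illustration:

```python
def estimated_total_npp(establishment_prob, npp, cell_area):
    """Integrated indicator sketched from the abstract: weight each grid
    cell's simulated NPP (e.g. g C m^-2 yr^-1) by its species-distribution-model
    establishment probability and sum over the cell area. The exact
    combination used in the paper is assumed, not quoted."""
    return sum(p * n * cell_area for p, n in zip(establishment_prob, npp))

# two hypothetical cells: a marginal one (P=0.5) and a core one (P=1.0)
total = estimated_total_npp([0.5, 1.0], [2.0, 3.0], cell_area=10.0)
```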
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and on temporal factors such as the season of the year, the day of the week or the time of day. Deterministic radial distribution load flow studies take the load as constant, but load varies continually with a high degree of uncertainty, so there is a need to model probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power from the mean and standard deviation of the load and solving a deterministic radial load flow for each sample. The probabilistic solution is then reconstructed from the deterministic results obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profiles and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
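The sample-then-solve procedure described above can be sketched on a single-branch feeder. The feeder impedance, load statistics and power factor below are illustrative values (not from the paper), and a simple fixed-point iteration stands in for the paper's radial load-flow solver:

```python
import math
import random

def deterministic_load_flow(P, Q, R=0.05, X=0.04, V_s=1.0, tol=1e-10):
    """Receiving-end voltage of a one-branch radial feeder (per unit):
    fixed-point iteration on |V| from |V_s|^2 = (V + a/V)^2 + (b/V)^2,
    where a = R*P + X*Q and b = X*P - R*Q."""
    a = R * P + X * Q
    b = X * P - R * Q
    V = V_s
    for _ in range(200):
        V_new = math.sqrt(max(V_s**2 - 2.0 * a - (a * a + b * b) / (V * V), 0.0))
        if abs(V_new - V) < tol:
            break
        V = V_new
    loss = (P * P + Q * Q) / (V_new * V_new) * R   # I^2 R line loss
    return V_new, loss

def probabilistic_load_flow(n, P_mu=0.6, P_sigma=0.06, pf=0.9, seed=1):
    """Monte Carlo wrapper: sample active power from N(P_mu, P_sigma),
    derive reactive power from a fixed power factor, and collect the
    statistics of the receiving-end voltage."""
    rng = random.Random(seed)
    tan_phi = math.tan(math.acos(pf))
    volts = []
    for _ in range(n):
        P = max(rng.gauss(P_mu, P_sigma), 0.0)   # clip nonphysical draws
        V, _ = deterministic_load_flow(P, P * tan_phi)
        volts.append(V)
    mean = sum(volts) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in volts) / n)
    return mean, std

mean_v, std_v = probabilistic_load_flow(5000)
```

The returned mean and standard deviation of voltage are the kind of probabilistic output the paper compares against the deterministic solution.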
Measurement with microscopic MRI and simulation of flow in different aneurysm models.
Edelhoff, Daniel; Walczak, Lars; Frank, Frauke; Heil, Marvin; Schmitz, Inge; Weichert, Frank; Suter, Dieter
2015-10-01
The impact and the development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension. A better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid-exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively using high spatial resolution. Magnetic resonance flow imaging was used to visualize fluid-exchange in two different models produced with a 3D printer. One model of an aneurysm was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time of flight magnetic resonance imaging. The whole experiment was simulated using fast graphics processing unit-based numerical simulations. The obtained simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin-lattice relaxation. The results of both presented methods compared well for the used aneurysm models and the chosen flow distributions. The results from the fluid-exchange analysis showed comparable characteristics concerning measurement and simulation. Similar symmetry behavior was observed. Based on these results, the amount of fluid-exchange was calculated. Depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. The result of the numerical simulations coincides well with the experimentally determined velocity field. The rate of fluid-exchange between vessel and aneurysm was well-predicted. Hence, the results obtained by simulation could be validated by the experiment. 
The observed deviations can be caused by the noise in the measurement and by the limited resolution of the simulation. The resulting differences are small enough to allow reliable predictions of the flow distribution in vessels with stents and for pulsed blood flow.
NASA Astrophysics Data System (ADS)
Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah
2014-11-01
A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. Graphical comparison revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
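Sampling from the mixed exponential intensity distribution is straightforward: draw the component first, then the intensity. The mixing weight and component means below are hypothetical, not the fitted Damansara basin parameters:

```python
import random

def mixed_exponential(rng, p=0.7, mu1=1.0, mu2=8.0):
    """One rain-cell intensity from a two-component mixed exponential:
    with probability p the cell belongs to the light-rain population
    (mean mu1), otherwise to the heavy-rain population (mean mu2).
    All three parameters are illustrative assumptions."""
    mu = mu1 if rng.random() < p else mu2
    return rng.expovariate(1.0 / mu)

rng = random.Random(42)
sample = [mixed_exponential(rng) for _ in range(200_000)]
mean = sum(sample) / len(sample)   # theoretical mean: 0.7*1 + 0.3*8 = 3.1
```

The heavy component gives the mixture a fatter tail than a single exponential of the same mean, which is one reason mixed exponentials can represent convective rain-cell intensities well.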
NASA Astrophysics Data System (ADS)
Wang, Huihui; Sukhomlinov, Vladimir S.; Kaganovich, Igor D.; Mustafaev, Alexander S.
2017-02-01
Using the Monte Carlo collision method, we have performed simulations of ion velocity distribution functions (IVDF) taking into account both elastic collisions and charge exchange collisions of ions with atoms in uniform electric fields for argon and helium background gases. The simulation results are verified by comparison with experimental data on ion mobilities and ion transverse diffusion coefficients in argon and helium. The recently published experimental data for the first seven coefficients of the Legendre polynomial expansion of the ion energy and angular distribution functions are used to validate simulation results for IVDF. Good agreement between measured and simulated IVDFs shows that the developed simulation model can be used for accurate calculations of IVDFs.
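A stripped-down version of the Monte Carlo collision idea can be sketched with charge exchange only: ions accelerate freely in a uniform field and, after exponentially distributed free times, swap velocity with a cold background atom (taken here as at rest). This is a generic sketch with assumed units (a = qE/m = 1, mean free time tau = 2), not the paper's argon/helium model, which also includes elastic collisions:

```python
import random

def mcc_ion_drift(n_ions, E_over_m=1.0, tau=2.0, t_end=200.0, seed=3):
    """Minimal Monte Carlo collision sketch: each ion free-falls in the
    field with acceleration a = E_over_m between charge-exchange events
    (exponential free times, mean tau); a collision restarts the ion
    from rest (cold gas). Returns the mean velocity sampled at t_end;
    for t_end >> tau the expected drift is a*tau."""
    rng = random.Random(seed)
    a = E_over_m
    v_sum = 0.0
    for _ in range(n_ions):
        t = 0.0
        t_last = 0.0                  # time of the last collision
        while True:
            dt = rng.expovariate(1.0 / tau)
            if t + dt >= t_end:
                break
            t += dt
            t_last = t                # charge exchange: restart from rest
        v_sum += a * (t_end - t_last) # velocity gained since last collision
    return v_sum / n_ions

drift = mcc_ion_drift(20_000)
```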
NASA Astrophysics Data System (ADS)
Clarke, Peter; Varghese, Philip; Goldstein, David
2018-01-01
A discrete velocity method is developed for gas mixtures of diatomic molecules with both rotational and vibrational energy states. A fully quantized model is described, and rotation-translation and vibration-translation energy exchanges are simulated using a Larsen-Borgnakke exchange model. Elastic and inelastic molecular interactions are modeled during every simulated collision to help produce smooth internal energy distributions. The method is verified by comparing simulations of homogeneous relaxation by our discrete velocity method to numerical solutions of the Jeans and Landau-Teller equations, and to direct simulation Monte Carlo. We compute the structure of a 1D shock using this method, and determine how the rotational energy distribution varies with spatial location in the shock and with position in velocity space.
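The Landau-Teller equation used here as a verification benchmark is a linear relaxation law, so a numerical solution is easy to check against the analytic one. The sketch below (illustrative nondimensional values, not the paper's gas parameters) integrates it with forward Euler:

```python
import math

def landau_teller_relax(E0, E_eq, tau, t_end, dt):
    """Forward-Euler integration of the Landau-Teller relaxation
    equation dE/dt = (E_eq - E)/tau; the analytic solution is
    E(t) = E_eq + (E0 - E_eq) * exp(-t/tau)."""
    E = E0
    for _ in range(round(t_end / dt)):
        E += dt * (E_eq - E) / tau
    return E

E_num = landau_teller_relax(E0=0.0, E_eq=1.0, tau=5.0, t_end=10.0, dt=0.01)
E_exact = 1.0 - math.exp(-10.0 / 5.0)
```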
NASA Astrophysics Data System (ADS)
Liu, Gang; Zhao, Rong; Liu, Jiping; Zhang, Qingpu
2007-06-01
The Lancang River Basin is narrow, and its hydrological and meteorological conditions are highly variable. Rainfall, evaporation, glacial meltwater and groundwater all contribute to runoff, and the mix of these replenishment sources changes notably with the season across different areas of the basin. The characteristics of different kinds of distributed models and conceptual hydrological models are analyzed. A semi-distributed hydrological model relating monthly runoff to rainfall, temperature and soil type was built for Changdu County based on Visual Basic and ArcObject. The model adopts the discretization approach of distributed hydrological models while retaining the principles of conceptual models. The Changdu sub-catchment is divided into regular cells, and the hydrological and meteorological information, land use classes and slopes extracted from 1:250000 digital elevation models are distributed to each cell. Rather than representing the physical rainfall-runoff process explicitly, the model uses the conceptual component to simulate each cell's total contribution to the runoff of the area, while the effects of evapotranspiration losses and groundwater are taken into account. The spatial distribution characteristics of the monthly runoff in the area are simulated and analyzed with only a few parameters.
NASA Astrophysics Data System (ADS)
Hansen, Kenneth C.; Altwegg, Kathrin; Bieler, Andre; Berthelier, Jean-Jacques; Calmonte, Ursina; Combi, Michael R.; De Keyser, Johan; Fiethe, Björn; Fougere, Nicolas; Fuselier, Stephen; Gombosi, T. I.; Hässig, Myrtha; Huang, Zhenguang; Le Roy, Léna; Rubin, Martin; Tenishev, Valeriy; Toth, Gabor; Tzou, Chia-Yu; ROSINA Team
2016-10-01
We have previously used results from the AMPS DSMC (Adaptive Mesh Particle Simulator Direct Simulation Monte Carlo) model to create an empirical model of the near comet water (H2O) coma of comet 67P/Churyumov-Gerasimenko. In this work we create additional empirical models for the coma distributions of CO2 and CO. The AMPS simulations are based on ROSINA DFMS (Rosetta Orbiter Spectrometer for Ion and Neutral Analysis, Double Focusing Mass Spectrometer) data taken over the entire timespan of the Rosetta mission. The empirical model is created using AMPS DSMC results which are extracted from simulations at a range of radial distances, rotation phases and heliocentric distances. The simulation results are then averaged over a comet rotation and fitted to an empirical model distribution. Model coefficients are then fitted to piecewise-linear functions of heliocentric distance. The final product is an empirical model of the coma distribution which is a function of heliocentric distance, radial distance, and sun-fixed longitude and latitude angles. The model clearly mimics the behavior of water shifting production from North to South across the inbound equinox while the CO2 production is always in the South. The empirical model can be used to de-trend the spacecraft motion from the ROSINA COPS and DFMS data. The ROSINA instrument measures the neutral coma density at a single point and the measured value is influenced by the location of the spacecraft relative to the comet and the comet-sun line. Using the empirical coma model we can correct for the position of the spacecraft and compute a total production rate based on single point measurements. In this presentation we will report the coma production rates as a function of heliocentric distance for the entire Rosetta mission. This work was supported by contracts JPL#1266313 and JPL#1266314 from the US Rosetta Project and NASA grant NNX14AG84G from the Planetary Atmospheres Program.
Stratospheric temperatures and tracer transport in a nudged 4-year middle atmosphere GCM simulation
NASA Astrophysics Data System (ADS)
van Aalst, M. K.; Lelieveld, J.; Steil, B.; Brühl, C.; Jöckel, P.; Giorgetta, M. A.; Roelofs, G.-J.
2005-02-01
We have performed a 4-year simulation with the Middle Atmosphere General Circulation Model MAECHAM5/MESSy, while slightly nudging the model's meteorology in the free troposphere (below 113 hPa) towards ECMWF analyses. We show that the nudging technique, which leaves the middle atmosphere almost entirely free, enables comparisons with synoptic observations. The model successfully reproduces many specific features of the interannual variability, including details of the Antarctic vortex structure. In the Arctic, the model captures general features of the interannual variability, but falls short in reproducing the timing of sudden stratospheric warmings. A detailed comparison of the nudged model simulations with ECMWF data shows that the model simulates realistic stratospheric temperature distributions and variabilities, including the temperature minima in the Antarctic vortex. Some small (a few K) model biases were also identified, including a summer cold bias at both poles, and a general cold bias in the lower stratosphere, most pronounced in midlatitudes. A comparison of tracer distributions with HALOE observations shows that the model successfully reproduces specific aspects of the instantaneous circulation. The main tracer transport deficiencies occur in the polar lowermost stratosphere. These are related to the tropopause altitude as well as the tracer advection scheme and model resolution. The additional nudging of equatorial zonal winds, forcing the quasi-biennial oscillation, significantly improves stratospheric temperatures and tracer distributions.
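Nudging (Newtonian relaxation) amounts to adding a relaxation term toward the analyses to each model tendency. The one-line sketch below shows the generic technique with assumed values; it is not the MAECHAM5/MESSy implementation, which applies spectrally selective relaxation to specific fields:

```python
def nudged_step(x, x_ana, dt, tau, tendency):
    """One explicit step of Newtonian relaxation ('nudging'): the free
    model tendency is augmented by a term relaxing the model state
    toward the analysis value x_ana on a timescale tau."""
    return x + dt * (tendency(x) + (x_ana - x) / tau)

# With zero free tendency the state relaxes exponentially toward the analysis.
x = 0.0
for _ in range(1000):
    x = nudged_step(x, x_ana=1.0, dt=0.01, tau=1.0, tendency=lambda s: 0.0)
```

A long relaxation timescale gives the "slight" nudging described above: the analyses steer the large scales without overwhelming the model's own dynamics.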
NASA Astrophysics Data System (ADS)
Steenbakkers, Rudi J. A.; Tzoumanekas, Christos; Li, Ying; Liu, Wing Kam; Kröger, Martin; Schieber, Jay D.
2014-01-01
We present a method to map the full equilibrium distribution of the primitive-path (PP) length, obtained from multi-chain simulations of polymer melts, onto a single-chain mean-field ‘target’ model. Most previous works used the Doi-Edwards tube model as a target. However, the average number of monomers per PP segment, obtained from multi-chain PP networks, has consistently shown a discrepancy of a factor of two with respect to tube-model estimates. Part of the problem is that the tube model neglects fluctuations in the lengths of PP segments, the number of entanglements per chain and the distribution of monomers among PP segments, while all these fluctuations are observed in multi-chain simulations. Here we use a recently proposed slip-link model, which includes fluctuations in all these variables as well as in the spatial positions of the entanglements. This turns out to be essential to obtain qualitative and quantitative agreement with the equilibrium PP-length distribution obtained from multi-chain simulations. By fitting this distribution, we are able to determine two of the three parameters of the model, which govern its equilibrium properties. This mapping is executed for four different linear polymers and for different molecular weights. The two parameters are found to depend on chemistry, but not on molecular weight. The model predicts a constant plateau modulus minus a correction inversely proportional to molecular weight. The value for well-entangled chains, with the parameters determined ab initio, lies in the range of experimental data for the materials investigated.
NASA Technical Reports Server (NTRS)
Convery, P. D.; Schriver, D.; Ashour-Abdalla, M.; Richard, R. L.
2002-01-01
Nongyrotropic plasma distribution functions can be formed in regions of space where guiding center motion breaks down as a result of strongly curved and weak ambient magnetic fields. Such are the conditions near the current sheet in the Earth's middle and distant magnetotail, where observations of nongyrotropic ion distributions have been made. Here a systematic parameter study of nongyrotropic proton distributions using electromagnetic hybrid simulations is made. We model the observed nongyrotropic distributions by removing a number of arc length segments from a cold ring distribution and find significant differences with the results of simulations that initially have a gyrotropic ring distribution. Model nongyrotropic distributions with initially small perpendicular thermalization produce growing fluctuations that diffuse the ions into a stable Maxwellian-like distribution within a few proton gyroperiods. The growing waves produced by nongyrotropic distributions are similar to the electromagnetic proton cyclotron waves produced by a gyrotropic proton ring distribution in that they propagate parallel to the background magnetic field and occur at frequencies on the order of the proton gyrofrequency. The maximum energy of the fluctuating magnetic field increases as the initial proton distribution is made more nongyrotropic, that is, more highly bunched in perpendicular velocity space. This increase can be as much as twice the energy produced in the gyrotropic case.
PRMS-IV, the precipitation-runoff modeling system, version 4
Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.
2015-01-01
Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.
USDA-ARS?s Scientific Manuscript database
Accurately predicting phenology in crop simulation models is critical for correctly simulating crop production. While extensive work in modeling phenology has focused on the temperature response function (resulting in robust phenology models), limited work on quantifying the phenological responses t...
NASA Astrophysics Data System (ADS)
Henneberg, Olga; Ament, Felix; Grützun, Verena
2018-05-01
Soil moisture amount and distribution control evapotranspiration and thus impact the occurrence of convective precipitation. Many recent model studies demonstrate that changes in initial soil moisture content result in modified convective precipitation. However, to quantify the resulting precipitation changes, the chaotic behavior of the atmospheric system needs to be considered. Slight changes in the simulation setup, such as the chosen model domain, also result in modifications to the simulated precipitation field. This causes an uncertainty due to stochastic variability, which can be large compared to effects caused by soil moisture variations. By shifting the model domain, we estimate the uncertainty of the model results. Our novel uncertainty estimate includes 10 simulations with shifted model boundaries and is compared to the effects on precipitation caused by variations in soil moisture amount and local distribution. With this approach, the influence of soil moisture amount and distribution on convective precipitation is quantified. Deviations in simulated precipitation can only be attributed to soil moisture impacts if the systematic effects of soil moisture modifications are larger than the inherent simulation uncertainty at the convection-resolving scale. We performed seven experiments with modified soil moisture amount or distribution to address the effect of soil moisture on precipitation. Each of the experiments consists of 10 ensemble members using the deep convection-resolving COSMO model with a grid spacing of 2.8 km. Only in experiments with very strong modification in soil moisture do precipitation changes exceed the model spread in amplitude, location or structure. These changes are caused by a 50 % soil moisture increase in either the whole or part of the model domain or by drying the whole model domain. Both increasing and decreasing soil moisture predominantly result in reduced precipitation rates.
Replacing the soil moisture with realistic fields from different days has an insignificant influence on precipitation. The findings of this study underline the need for uncertainty estimates in soil moisture studies based on convection-resolving models.
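The attribution criterion described above can be formalized as a simple spread test: an experiment's effect counts only if the shift in ensemble means exceeds the stochastic spread of the shifted-domain reference ensemble. The threshold factor k and the single-number comparison are simplifying assumptions of this sketch (the study also compares location and structure, not just amplitude):

```python
import statistics

def significant_change(reference_members, experiment_members, k=1.0):
    """Return True if the change in ensemble-mean precipitation between
    a soil-moisture experiment and the reference ensemble exceeds k
    times the reference ensemble's standard deviation. k is an assumed
    threshold, not a value from the study."""
    shift = abs(statistics.mean(experiment_members)
                - statistics.mean(reference_members))
    return shift > k * statistics.stdev(reference_members)
```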
Pore-scale modeling of saturated permeabilities in random sphere packings.
Pan, C; Hilpert, M; Miller, C T
2001-12-01
We use two pore-scale approaches, lattice-Boltzmann (LB) and pore-network modeling, to simulate single-phase flow in simulated sphere packings that vary in porosity and sphere-size distribution. For both modeling approaches, we determine the size of the representative elementary volume with respect to the permeability. Permeabilities obtained by LB modeling agree well with Rumpf and Gupte's experiments in sphere packings for small Reynolds numbers. The LB simulations agree well with the empirical Ergun equation for intermediate but not for small Reynolds numbers. We suggest a modified form of Ergun's equation to describe both low and intermediate Reynolds number flows. The pore-network simulations agree well with predictions from the effective-medium approximation but underestimate the permeability due to the simplified representation of the porous media. Based on LB simulations in packings with log-normal sphere-size distributions, we suggest a permeability relation with respect to the porosity, as well as the mean and standard deviation of the sphere diameter.
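The Ergun equation referenced above, and its low-Reynolds (Blake-Kozeny) permeability limit, can be written out directly. The fluid properties in the defaults are illustrative (water-like), and this is the standard textbook form rather than the modified form the authors propose:

```python
def ergun_pressure_gradient(u, d, eps, mu=1.0e-3, rho=1000.0):
    """Ergun equation: pressure gradient (Pa/m) in a bed of spheres of
    diameter d (m) and porosity eps at superficial velocity u (m/s),
    with fluid viscosity mu (Pa s) and density rho (kg/m^3)."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d)
    return viscous + inertial

def kozeny_permeability(d, eps):
    """Low-Reynolds limit of Ergun's equation (Blake-Kozeny):
    k = eps^3 d^2 / (150 (1 - eps)^2), so that dP/dx -> mu*u/k."""
    return eps ** 3 * d ** 2 / (150.0 * (1.0 - eps) ** 2)
```

At small u the inertial term is negligible and the Ergun gradient reduces to the Darcy form mu*u/k, which is the regime where the LB permeabilities are compared with experiment.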
Evaluation of column-averaged methane in models and TCCON with a focus on the stratosphere
NASA Astrophysics Data System (ADS)
Ostler, Andreas; Sussmann, Ralf; Patra, Prabir K.; Houweling, Sander; De Bruine, Marko; Stiller, Gabriele P.; Haenel, Florian J.; Plieninger, Johannes; Bousquet, Philippe; Yin, Yi; Saunois, Marielle; Walker, Kaley A.; Deutscher, Nicholas M.; Griffith, David W. T.; Blumenstock, Thomas; Hase, Frank; Warneke, Thorsten; Wang, Zhiting; Kivi, Rigel; Robinson, John
2016-09-01
The distribution of methane (CH4) in the stratosphere can be a major driver of spatial variability in the dry-air column-averaged CH4 mixing ratio (XCH4), which is being measured increasingly for the assessment of CH4 surface emissions. Chemistry-transport models (CTMs) therefore need to simulate the tropospheric and stratospheric fractional columns of XCH4 accurately for estimating surface emissions from XCH4. Simulations from three CTMs are tested against XCH4 observations from the Total Carbon Column Network (TCCON). We analyze how the model-TCCON agreement in XCH4 depends on the model representation of stratospheric CH4 distributions. Model equivalents of TCCON XCH4 are computed with stratospheric CH4 fields from both the model simulations and from satellite-based CH4 distributions from MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) and MIPAS CH4 fields adjusted to ACE-FTS (Atmospheric Chemistry Experiment Fourier Transform Spectrometer) observations. Using MIPAS-based stratospheric CH4 fields in place of model simulations improves the model-TCCON XCH4 agreement for all models. For the Atmospheric Chemistry Transport Model (ACTM) the average XCH4 bias is significantly reduced from 38.1 to 13.7 ppb, whereas small improvements are found for the models TM5 (Transport Model, version 5; from 8.7 to 4.3 ppb) and LMDz (Laboratoire de Météorologie Dynamique model with zooming capability; from 6.8 to 4.3 ppb). Replacing model simulations with MIPAS stratospheric CH4 fields adjusted to ACE-FTS reduces the average XCH4 bias for ACTM (3.3 ppb), but increases the average XCH4 bias for TM5 (10.8 ppb) and LMDz (20.0 ppb). These findings imply that model errors in simulating stratospheric CH4 contribute to model biases. Current satellite instruments cannot definitively measure stratospheric CH4 to sufficient accuracy to eliminate these biases. 
Applying transport diagnostics to the models indicates that model-to-model differences in the simulation of stratospheric transport, notably the age of stratospheric air, can largely explain the inter-model spread in stratospheric CH4 and, hence, its contribution to XCH4. Therefore, it would be worthwhile to analyze how individual model components (e.g., physical parameterization, meteorological data sets, model horizontal/vertical resolution) impact the simulation of stratospheric CH4 and XCH4.
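The leverage that the stratospheric CH4 field has on XCH4 can be illustrated with a pressure-weighted column average. The two-layer profile in the example is hypothetical (not TCCON data), chosen only to show how a CH4-poor stratosphere pulls the column average down:

```python
def column_average(vmr, p_levels):
    """Pressure-weighted column average of a layer-mean mixing-ratio
    profile; vmr[i] belongs to the layer bounded by p_levels[i] and
    p_levels[i+1] (Pa, surface first). Dry-air and averaging-kernel
    corrections used for real TCCON comparisons are omitted."""
    num = den = 0.0
    for i, x in enumerate(vmr):
        dp = p_levels[i] - p_levels[i + 1]
        num += x * dp
        den += dp
    return num / den

# troposphere at 1900 ppb (1000-200 hPa), stratosphere at 1000 ppb (200-0 hPa)
xch4 = column_average([1900.0, 1000.0], [1000e2, 200e2, 0.0])  # -> 1720.0 ppb
```

A bias of a few tens of ppb in the stratospheric layer thus maps directly into the ~10-40 ppb XCH4 biases quoted above.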
Hevesi, Joseph A.; Flint, Alan L.; Flint, Lorraine E.
2003-01-01
This report presents the development and application of the distributed-parameter watershed model, INFILv3, for estimating the temporal and spatial distribution of net infiltration and potential recharge in the Death Valley region, Nevada and California. The estimates of net infiltration quantify the downward drainage of water across the lower boundary of the root zone and are used to indicate potential recharge under variable climate conditions and drainage basin characteristics. Spatial variability in recharge in the Death Valley region likely is high owing to large differences in precipitation, potential evapotranspiration, bedrock permeability, soil thickness, vegetation characteristics, and contributions to recharge along active stream channels. The quantity and spatial distribution of recharge representing the effects of variable climatic conditions and drainage basin characteristics on recharge are needed to reduce uncertainty in modeling ground-water flow. The U.S. Geological Survey, in cooperation with the Department of Energy, developed a regional saturated-zone ground-water flow model of the Death Valley regional ground-water flow system to help evaluate the current hydrogeologic system and the potential effects of natural or human-induced changes. Although previous estimates of recharge have been made for most areas of the Death Valley region, including the area defined by the boundary of the Death Valley regional ground-water flow system, the uncertainty of these estimates is high, and the spatial and temporal variability of the recharge in these basins has not been quantified. To estimate the magnitude and distribution of potential recharge in response to variable climate and spatially varying drainage basin characteristics, the INFILv3 model uses a daily water-balance model of the root zone with a primarily deterministic representation of the processes controlling net infiltration and potential recharge. 
The daily water balance includes precipitation (as either rain or snow), snow accumulation, sublimation, snowmelt, infiltration into the root zone, evapotranspiration, drainage, water content change throughout the root-zone profile (represented as a 6-layered system), runoff (defined as excess rainfall and snowmelt) and surface water run-on (defined as runoff that is routed downstream), and net infiltration (simulated as drainage from the bottom root-zone layer). Potential evapotranspiration is simulated using an hourly solar radiation model to simulate daily net radiation, and daily evapotranspiration is simulated as an empirical function of root zone water content and potential evapotranspiration. The model uses daily climate records of precipitation and air temperature from a regionally distributed network of 132 climate stations and a spatially distributed representation of drainage basin characteristics defined by topography, geology, soils, and vegetation to simulate daily net infiltration at all locations, including stream channels with intermittent streamflow in response to runoff from rain and snowmelt. The temporal distribution of daily, monthly, and annual net infiltration can be used to evaluate the potential effect of future climatic conditions on potential recharge. The INFILv3 model inputs representing drainage basin characteristics were developed using a geographic information system (GIS) to define a set of spatially distributed input parameters uniquely assigned to each grid cell of the INFILv3 model grid. The model grid, which was defined by a digital elevation model (DEM) of the Death Valley region, consists of 1,252,418 model grid cells with a uniform grid cell dimension of 278.5 meters in the north-south and east-west directions. The elevation values from the DEM were used with monthly regression models developed from the daily climate data to estimate the spatial distribution of daily precipitation and air temperature. 
The elevation values were also used to simulate atmosp
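The core bucket logic of a daily root-zone water balance can be sketched in a few lines. This is a deliberately toy single-layer version with assumed parameters and a simple linear ET scaling; the INFILv3 scheme described above uses six root-zone layers, snow, runoff routing and a radiation-based potential ET:

```python
def daily_net_infiltration(precip, pet, capacity=100.0, storage=50.0):
    """Toy single-layer daily water balance (units mm): precipitation
    fills root-zone storage, actual ET scales potential ET by relative
    storage, and water above capacity drains as net infiltration."""
    net_inf = []
    for p, e in zip(precip, pet):
        storage += p
        storage -= min(e * storage / capacity, storage)  # actual ET
        drain = max(storage - capacity, 0.0)  # drainage below root zone
        storage -= drain
        net_inf.append(drain)
    return net_inf

# a 120 mm storm followed by a dry day with 5 mm potential ET
net = daily_net_infiltration([120.0, 0.0], [0.0, 5.0])
```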
Saeedi, Mostafa; Vahidi, Omid; Goodarzi, Vahabodin; Saeb, Mohammad Reza; Izadi, Leila; Mozafari, Masoud
2017-11-01
Distribution patterns/performance of magnetic nanoparticles (MNPs) were visualized by computer simulation and experimental validation on agarose gel tissue-mimicking phantom (AGTMP) models. The geometry of a complex three-dimensional mathematical phantom model of a cancer tumor was examined by tomography imaging. The capability of the mathematical model to predict distribution patterns/performance in the AGTMP model was assessed. The temperature profile vs. hyperthermia duration was obtained by solving bio-heat equations for four different MNPs distribution patterns and correlated with cell death rate. The outcomes indicated that the bio-heat model was able to predict the temperature profile throughout the tissue model with reasonable precision, and can be applied to complex tissue geometries. The simulation results on the cancer tumor model shed light on the effectiveness of the studied parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lei, Mingfeng; Lin, Dayong; Liu, Jianwen; Shi, Chenghua; Ma, Jianjun; Yang, Weichao; Yu, Xiaoniu
2018-03-01
For the purpose of investigating lining concrete durability, this study derives a modified chloride diffusion model for concrete based on the odd continuation of boundary conditions and the Fourier transform. To achieve this, the linear stress distribution on a sectional structure is considered, and detailed procedures and methods are presented for model verification and parametric analysis. Simulation results show that the chloride diffusion model can reflect the effects of the linear stress distribution of the sectional structure on chloride diffusivity with reliable accuracy. Drawing on the natural environmental characteristics of practical engineering structures, reference value ranges of the model parameters are provided. Furthermore, the chloride diffusion model is extended to account for the multi-factor coupling of linear stress distribution, chloride concentration, and diffusion time. Comparison between model simulations and typical current research results shows that the presented model provides a more comprehensive treatment with greater universality.
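For orientation, the unmodified starting point for such models is the error-function solution of Fick's second law, C(x,t) = Cs*(1 - erf(x / (2*sqrt(D*t)))). The sketch below uses a hypothetical linear stress factor (1 + k*s) as a stand-in for a stress-dependent diffusivity; it does not reproduce the paper's actual modified model:

```python
import math

# Classic erf solution of Fick's second law for chloride ingress, with the
# diffusivity scaled by an assumed linear stress factor. Cs (surface chloride),
# D0 (reference diffusivity, m^2/s), and k are illustrative values only.

def chloride_concentration(x, t, Cs=0.5, D0=5e-12, stress_ratio=0.0, k=0.3):
    D = D0 * (1.0 + k * stress_ratio)   # hypothetical stress-modified diffusivity
    return Cs * (1.0 - math.erf(x / (2.0 * math.sqrt(D * t))))

t = 50 * 365.25 * 86400                 # 50 years, in seconds
c_free = chloride_concentration(0.05, t)                   # unstressed section
c_tens = chloride_concentration(0.05, t, stress_ratio=1.0)  # fully stressed
```

With these assumed numbers, tensile stress raises the diffusivity and hence the chloride concentration reached at 50 mm depth.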
Abbott, Lauren J; Stevens, Mark J
2015-12-28
A coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil-globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
Modeling of Antarctic sea ice in a general circulation model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Xingren; Budd, W.F.; Simmonds, I.
1997-04-01
A dynamic-thermodynamic sea ice model is developed and coupled with the Melbourne University general circulation model to simulate the seasonal cycle of the Antarctic sea ice distributions. The model is efficient, rapid to compute, and useful for a range of climate studies. The thermodynamic part of the sea ice model is similar to that developed by Parkinson and Washington; the dynamics contain a simplified ice rheology that resists compression. The thermodynamics is based on energy conservation at the top surface of the ice/snow, the ice/water interface, and the open water area to determine the ice formation, accretion, and ablation. A lead parameterization is introduced with an effective partitioning scheme for freezing between and under the ice floes. The dynamic calculation determines the motion of ice, which is forced with the atmospheric wind, taking account of ice resistance and rafting. The simulated sea ice distribution compares reasonably well with observations. The seasonal cycle of ice extent is well simulated in phase as well as in magnitude. Simulated sea ice thickness and concentration are also in good agreement with observations over most regions and serve to indicate the importance of advection and ocean drift in the determination of the sea ice distribution. 64 refs., 15 figs., 2 tabs.
Halford, Keith J.; Plume, Russell W.
2011-01-01
Assessing hydrologic effects of developing groundwater supplies in Snake Valley required numerical, groundwater-flow models to estimate the timing and magnitude of capture from streams, springs, wetlands, and phreatophytes. Estimating general water-table decline also required groundwater simulation. The hydraulic conductivity of basin fill and transmissivity of basement-rock distributions in Spring and Snake Valleys were refined by calibrating a steady state, three-dimensional, MODFLOW model of the carbonate-rock province to predevelopment conditions. Hydraulic properties and boundary conditions were defined primarily from the Regional Aquifer-System Analysis (RASA) model except in Spring and Snake Valleys. This locally refined model was referred to as the Great Basin National Park calibration (GBNP-C) model. Groundwater discharges from phreatophyte areas and springs in Spring and Snake Valleys were simulated as specified discharges in the GBNP-C model. These discharges equaled mapped rates and measured discharges, respectively. Recharge, hydraulic conductivity, and transmissivity were distributed throughout Spring and Snake Valleys with pilot points and interpolated to model cells with kriging in geologically similar areas. Transmissivity of the basement rocks was estimated because thickness is correlated poorly with transmissivity. Transmissivity estimates were constrained by aquifer-test results in basin-fill and carbonate-rock aquifers. Recharge, hydraulic conductivity, and transmissivity distributions of the GBNP-C model were estimated by minimizing a weighted composite, sum-of-squares objective function that included measurement and Tikhonov regularization observations. Tikhonov regularization observations were equations that defined preferred relations between the pilot points. 
Measured water levels, water levels that were simulated with RASA, depth-to-water beneath distributed groundwater and spring discharges, land-surface altitudes, spring discharge at Fish Springs, and changes in discharge on selected creek reaches were measurement observations. The effects of uncertain distributed groundwater-discharge estimates in Spring and Snake Valleys on transmissivity estimates were bounded with alternative models. Annual distributed groundwater discharges from Spring and Snake Valleys in the alternative models totaled 151,000 and 227,000 acre-feet, respectively and represented 20 percent differences from the 187,000 acre-feet per year that discharges from the GBNP-C model. Transmissivity estimates in the basin fill between Baker and Big Springs changed less than 50 percent between the two alternative models. Potential effects of pumping from Snake Valley were estimated with the Great Basin National Park predictive (GBNP-P) model, which is a transient groundwater-flow model. The hydraulic conductivity of basin fill and transmissivity of basement rock were the GBNP-C model distributions. Specific yields were defined from aquifer tests. Captures of distributed groundwater and spring discharges were simulated in the GBNP-P model using a combination of well and drain packages in MODFLOW. Simulated groundwater captures could not exceed measured groundwater-discharge rates. Four groundwater-development scenarios were investigated where total annual withdrawals ranged from 10,000 to 50,000 acre-feet during a 200-year pumping period. Four additional scenarios also were simulated that added the effects of existing pumping in Snake Valley. Potential groundwater pumping locations were limited to nine proposed points of diversion. Results are presented as maps of groundwater capture and drawdown, time series of drawdowns and discharges from selected wells, and time series of discharge reductions from selected springs and control volumes. 
Simulated drawdown propagation was attenuated where groundwater discharge could be captured. General patterns of groundwater capture and water-table declines were similar for all scenarios. Simulated drawdowns greater than 1 ft propagated outside of Spring and Snake Valleys after 200 years of pumping in all scenarios.
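The calibration objective described for the GBNP-C model, a weighted sum-of-squares misfit plus Tikhonov regularization terms expressing preferred relations between pilot points, can be sketched on a toy linear problem. The forward model, noise level, regularization weight, and smoothness penalty below are all illustrative assumptions, not the MODFLOW/pilot-point setup itself:

```python
import numpy as np

# Toy Tikhonov-regularized calibration: minimize
#   ||G p - obs||^2 + mu * ||R p||^2
# where R encodes a preferred relation (here: smoothness) between
# neighboring "pilot point" parameters. All values are illustrative.

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 10))          # linearized forward model (toy)
p_true = np.linspace(1.0, 2.0, 10)     # smoothly varying "true" parameters
obs = G @ p_true + rng.normal(scale=0.05, size=20)   # noisy observations

R = np.diff(np.eye(10), axis=0)        # first differences between neighbors
mu = 1.0                               # regularization weight

# Solve the normal equations of the combined objective.
A = G.T @ G + mu * R.T @ R
p_est = np.linalg.solve(A, G.T @ obs)
```

Because the true parameter field is smooth, the regularization pulls the estimate toward it rather than away, and the recovered values track `p_true` closely.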
NASA Astrophysics Data System (ADS)
Deeb, R.; Kulasegaram, S.; Karihaloo, B. L.
2014-12-01
In part I of this two-part paper, a three-dimensional Lagrangian smooth particle hydrodynamics method has been used to model the flow of self-compacting concrete (SCC) with or without short steel fibres in the slump cone test. The constitutive behaviour of this non-Newtonian viscous fluid is described by a Bingham-type model. The 3D simulation of SCC without fibres is focused on the distribution of large aggregates (larger than or equal to 8 mm) during the flow. The simulation of self-compacting high- and ultra-high-performance concrete containing short steel fibres is focused on the distribution of fibres and their orientation during the flow. The simulation results show that the fibres and/or heavier aggregates do not precipitate but remain homogeneously distributed in the mix throughout the flow.
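The Bingham-type constitutive law mentioned above can be written down directly: below the yield stress the material does not flow; above it, stress grows linearly with shear rate. The regularized (Papanastasiou-style) form below is one common way to implement it in a simulation; the yield stress, plastic viscosity, and regularization parameter are illustrative values, not those used for SCC in the paper:

```python
import math

# Regularized Bingham model (Papanastasiou form). The exp term smooths the
# rigid/yielded transition so the law is usable in a flow solver.
# tau_y (Pa), mu (Pa.s), and m (s) are illustrative assumptions.

def bingham_stress(shear_rate, tau_y=200.0, mu=50.0, m=1000.0):
    if shear_rate == 0.0:
        return 0.0
    return mu * shear_rate + tau_y * (1.0 - math.exp(-m * shear_rate))
```

At shear rates well above 1/m the exponential term vanishes and the classic Bingham line tau = tau_y + mu*gamma_dot is recovered.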
NASA Astrophysics Data System (ADS)
Mitchell, M. F.; Goodrich, D. C.; Gochis, D. J.; Lahmers, T. M.
2017-12-01
In semi-arid environments with complex terrain, redistribution of moisture occurs through runoff, stream infiltration, and regional groundwater flow. In semi-arid regions, stream infiltration has been shown to account for 10-40% of total recharge in high-runoff years. These processes can significantly alter land-atmosphere interactions through changes in sensible and latent heat release. However, their overall impact is still unclear because historical model simulations generally used a coarse grid resolution in which these smaller-scale processes were either parameterized or not accounted for. To improve our understanding of the importance of stream infiltration and our ability to represent it in a coupled land-atmosphere model, this study focuses on the Walnut Gulch Experimental Watershed (WGEW) and Long-Term Agro-ecosystem Research (LTAR) site surrounding the city of Tombstone, AZ. High-resolution surface precipitation, meteorological forcing, and distributed runoff measurements have been collected in WGEW since the 1960s. These data will be used as input for WRF-Hydro, a spatially distributed hydrological model that uses the NOAH-MP land surface model. Recently, we have added an infiltration loss scheme to WRF-Hydro. We will evaluate the ability of WRF-Hydro to account for stream infiltration by comparing model simulations with in-situ observations. More specifically, because model performance has been shown to depend on the grid resolution used, we will present WRF-Hydro simulations obtained at different grid resolutions (10-1000 m).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngirmang, Gregory K., E-mail: ngirmang.1@osu.edu; Orban, Chris; Feister, Scott
We present 3D Particle-in-Cell (PIC) modeling of an ultra-intense laser experiment by the Extreme Light group at the Air Force Research Laboratory using the Large Scale Plasma (LSP) PIC code. This is the first time PIC simulations have been performed in 3D for this experiment which involves an ultra-intense, short-pulse (30 fs) laser interacting with a water jet target at normal incidence. The laser-energy-to-ejected-electron-energy conversion efficiency observed in 2D(3v) simulations were comparable to the conversion efficiencies seen in the 3D simulations, but the angular distribution of ejected electrons in the 2D(3v) simulations displayed interesting differences with the 3D simulations' angular distribution; the observed differences between the 2D(3v) and 3D simulations were more noticeable for the simulations with higher intensity laser pulses. An analytic plane-wave model is discussed which provides some explanation for the angular distribution and energies of ejected electrons in the 2D(3v) simulations. We also performed a 3D simulation with circularly polarized light and found a significantly higher conversion efficiency and peak electron energy, which is promising for future experiments.
NASA Astrophysics Data System (ADS)
Mertens, Christopher; Moyers, Michael; Walker, Steven; Tweed, John
Recent developments in NASA's High Charge and Energy Transport (HZETRN) code have included lateral broadening of primary ion beams due to small-angle multiple Coulomb scattering, and coupling of the ion-nuclear scattering interactions with energy loss and straggling. The new version of HZETRN based on Green function methods, GRNTRN, is suitable for modeling transport with both space environment and laboratory boundary conditions. Multiple scattering processes are a necessary extension to GRNTRN in order to accurately model ion beam experiments, to simulate the physical and biological-effective radiation dose, and to develop new methods and strategies for light ion radiation therapy. In this paper we compare GRNTRN simulations of proton lateral scattering distributions with beam measurements taken at Loma Linda Medical University. The simulated and measured lateral proton distributions will be compared for a 250 MeV proton beam on aluminum, polyethylene, polystyrene, bone, iron, and lead target materials.
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-03-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. The Markov chain model with a transition probability matrix was adopted to reconstruct structures of hydrofacies for deriving spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastic simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling.
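The two named relations are compact enough to sketch directly. The shape factor in the Kozeny-Carman form and the Archie parameters a and m below are typical textbook values, not those calibrated for the Chaobai alluvial fan:

```python
# Sketch of the two relations named in the abstract. Constants are common
# textbook assumptions: C is a Kozeny-Carman shape factor for d10-based
# estimates, and a = 1, m = 2 are typical Archie parameters for sands.

def kozeny_carman(d10, porosity, g=9.81, nu=1.0e-6, C=8.3e-3):
    """Hydraulic conductivity K (m/s) from effective grain size d10 (m)."""
    return C * (g / nu) * (porosity**3 / (1.0 - porosity) ** 2) * d10**2

def archie_porosity(R_t, R_w, a=1.0, m=2.0):
    """Porosity from formation resistivity R_t and pore-water resistivity R_w."""
    return (a * R_w / R_t) ** (1.0 / m)
```

For a fine sand (d10 = 0.2 mm, porosity 0.35) the Kozeny-Carman estimate lands in the 1e-4 m/s range, which is the expected order of magnitude for such material.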
Xing, Chao; Elston, Robert C
2006-07-01
The multipoint lod score and mod score methods have been advocated for their superior power in detecting linkage. However, little has been done to determine the distribution of multipoint lod scores or to examine the properties of mod scores. In this paper we study the distribution of multipoint lod scores both analytically and by simulation. We also study by simulation the distribution of maximum multipoint lod scores when maximized over different penetrance models. The multipoint lod score is approximately normally distributed with mean and variance that depend on marker informativity, marker density, specified genetic model, number of pedigrees, pedigree structure, and pattern of affection status. When the multipoint lod scores are maximized over a set of assumed penetrance models, an excess of false positive indications of linkage appears under dominant analysis models with low penetrances and under recessive analysis models with high penetrances. Therefore, caution should be taken in interpreting results when employing multipoint lod score and mod score approaches, in particular when inferring the level of linkage significance and the mode of inheritance of a trait.
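As a much simpler stand-in for the multipoint statistic, the distribution of a two-point lod score under the null hypothesis of no linkage can be simulated directly. The sample sizes, replicate count, and evaluation theta below are arbitrary illustrative choices:

```python
import math
import random

# Toy lod-score simulation: n informative meioses with true recombination
# fraction 0.5 (no linkage), evaluated at an assumed theta = 0.1.
# This is far simpler than the paper's multipoint setting; it only
# illustrates the simulation approach.

def lod(k, n, theta):
    """Two-point lod score for k recombinants out of n meioses."""
    return (k * math.log10(theta) + (n - k) * math.log10(1.0 - theta)
            - n * math.log10(0.5))

random.seed(1)
n = 100
scores = []
for _ in range(2000):
    k = sum(random.random() < 0.5 for _ in range(n))   # null: theta = 0.5
    scores.append(lod(k, n, 0.1))
mean_lod = sum(scores) / len(scores)
```

Under the null, evaluating at a wrong (small) theta gives strongly negative lod scores on average, and the spread of `scores` illustrates the sampling distribution one would tabulate.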
Simulation of Electromigration Based on Resistor Networks
NASA Astrophysics Data System (ADS)
Patrinos, Anthony John
A two dimensional computer simulation of electromigration based on resistor networks was designed and implemented. The model utilizes a realistic grain structure generated by the Monte Carlo method and takes specific account of the local effects through which electromigration damage progresses. The dynamic evolution of the simulated thin film is governed by the local current and temperature distributions. The current distribution is calculated by superimposing a two dimensional electrical network on the lattice whose nodes correspond to the particles in the lattice and the branches to interparticle bonds. Current is assumed to flow from site to site via nearest neighbor bonds. The current distribution problem is solved by applying Kirchhoff's rules on the resulting electrical network. The calculation of the temperature distribution in the lattice proceeds by discretizing the partial differential equation for heat conduction, with appropriate material parameters chosen for the lattice and its defects. SEReNe (for Simulation of Electromigration using Resistor Networks) was tested by applying it to common situations arising in experiments with real films with satisfactory results. Specifically, the model successfully reproduces the expected grain size, line width and bamboo effects, the lognormal failure time distribution and the relationship between current density exponent and current density. It has also been modified to simulate temperature ramp experiments but with mixed, in this case, results.
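The Kirchhoff computation at the heart of such a resistor-network model can be sketched on a toy network: assemble the nodal conductance (Laplacian) matrix, ground one node, inject current at another, and solve G*v = i. The 2x2 grid of unit resistors below is a stand-in for the simulated grain lattice:

```python
import numpy as np

# Nodal analysis on a tiny resistor network: a square of four 1-ohm bonds,
# current injected at node 0, node 3 grounded. The grid size and resistances
# are toy values standing in for the film's grain structure.

edges = [(0, 1), (2, 3), (0, 2), (1, 3)]     # 2x2 grid, 1-ohm bonds
n = 4
G = np.zeros((n, n))
for a, b in edges:                           # assemble conductance Laplacian
    G[a, a] += 1.0
    G[b, b] += 1.0
    G[a, b] -= 1.0
    G[b, a] -= 1.0

i = np.zeros(n)
i[0] = 1.0                                   # inject 1 A at node 0
free = slice(0, n - 1)                       # node 3 grounded (v = 0)
v = np.zeros(n)
v[free] = np.linalg.solve(G[free, free], i[free])
branch_current_01 = v[0] - v[1]              # current through bond (0, 1), in A
```

By symmetry the two 2-ohm paths split the injected ampere equally, so each bond leaving node 0 carries 0.5 A and the effective resistance seen at node 0 is 1 ohm.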
Design Tool for Planning Permanganate Injection Systems
2010-08-01
Report excerpt (project ER-0625). Acronyms: Chemical Spill 10; CSTR, continuously stirred tank reactor; CT, contact time; EDB, ethylene dibromide; ESTCP, Environmental Security Technology… Section 6.2, "Simulating Oxidant Distribution Using a Series of CSTRs," subsection 6.2.1 "Model Development," describes the transport and consumption of permanganate.
Impervious surface is known to negatively affect catchment hydrology through both its extent and spatial distribution. In this study, we empirically quantify via model simulations the impacts of different configurations of impervious surface on watershed response to rainfall. An ...
The Role of Simulation Approaches in Statistics
ERIC Educational Resources Information Center
Wood, Michael
2005-01-01
This article explores the uses of a simulation model (the two bucket story)--implemented by a stand-alone computer program, or an Excel workbook (both on the web)--that can be used for deriving bootstrap confidence intervals, and simulating various probability distributions. The strengths of the model are its generality, the fact that it provides…
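The bootstrap procedure that the article's "two bucket" model implements can be sketched in a few lines: resample the data with replacement many times, recompute the statistic, and take percentiles of the resampled values. The sample data, replicate count, and seed below are arbitrary illustrative choices:

```python
import random
import statistics

# Percentile bootstrap confidence interval for an arbitrary statistic.
# n_boot and alpha are conventional defaults, not values from the article.

def bootstrap_ci(data, stat=statistics.mean, n_boot=5000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci([2, 4, 4, 5, 7, 9, 11, 12])
```

The returned interval brackets the sample mean (6.75 here); swapping in `statistics.median` or any other callable for `stat` bootstraps that statistic instead.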
Effects of ignition location models on the burn patterns of simulated wildfires
Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.
2011-01-01
Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.
Evaluation of column-averaged methane in models and TCCON with a focus on the stratosphere
Ostler, Andreas; Sussmann, Ralf; Patra, Prabir K.; ...
2016-09-28
The distribution of methane (CH4) in the stratosphere can be a major driver of spatial variability in the dry-air column-averaged CH4 mixing ratio (XCH4), which is being measured increasingly for the assessment of CH4 surface emissions. Chemistry-transport models (CTMs) therefore need to simulate the tropospheric and stratospheric fractional columns of XCH4 accurately for estimating surface emissions from XCH4. Simulations from three CTMs are tested against XCH4 observations from the Total Carbon Column Network (TCCON). We analyze how the model–TCCON agreement in XCH4 depends on the model representation of stratospheric CH4 distributions. Model equivalents of TCCON XCH4 are computed with stratospheric CH4 fields from both the model simulations and from satellite-based CH4 distributions from MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) and MIPAS CH4 fields adjusted to ACE-FTS (Atmospheric Chemistry Experiment Fourier Transform Spectrometer) observations. Using MIPAS-based stratospheric CH4 fields in place of model simulations improves the model–TCCON XCH4 agreement for all models. For the Atmospheric Chemistry Transport Model (ACTM) the average XCH4 bias is significantly reduced from 38.1 to 13.7 ppb, whereas small improvements are found for the models TM5 (Transport Model, version 5; from 8.7 to 4.3 ppb) and LMDz (Laboratoire de Météorologie Dynamique model with zooming capability; from 6.8 to 4.3 ppb). Replacing model simulations with MIPAS stratospheric CH4 fields adjusted to ACE-FTS reduces the average XCH4 bias for ACTM (3.3 ppb), but increases the average XCH4 bias for TM5 (10.8 ppb) and LMDz (20.0 ppb). These findings imply that model errors in simulating stratospheric CH4 contribute to model biases. Current satellite instruments cannot definitively measure stratospheric CH4 to sufficient accuracy to eliminate these biases.
Applying transport diagnostics to the models indicates that model-to-model differences in the simulation of stratospheric transport, notably the age of stratospheric air, can largely explain the inter-model spread in stratospheric CH4 and, hence, its contribution to XCH4. Furthermore, it would be worthwhile to analyze how individual model components (e.g., physical parameterization, meteorological data sets, model horizontal/vertical resolution) impact the simulation of stratospheric CH4 and XCH4.
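Why the stratospheric partial column matters follows from the definition of a column average: it is a pressure-weighted mean over the profile, so a bias confined to stratospheric layers shifts XCH4 in proportion to those layers' share of the dry-air column. The layer pressures and mixing ratios below are illustrative numbers, not TCCON or model values:

```python
import numpy as np

# Toy pressure-weighted column average. Layer mass is approximated by the
# pressure thickness of each layer (a crude dry-air proxy); bounds and
# mixing ratios are illustrative assumptions.

p_bounds = np.array([1000., 800., 600., 400., 200., 100., 10.])  # hPa levels
ch4 = np.array([1900., 1890., 1880., 1850., 1600., 1000.])       # ppb per layer

def xch4(p_bounds, ch4):
    w = -np.diff(p_bounds)            # layer pressure thickness as weight
    return (w * ch4).sum() / w.sum()

col = xch4(p_bounds, ch4)
ch4_biased = ch4.copy()
ch4_biased[-2:] += 100.0              # +100 ppb error in the top two layers only
bias = xch4(p_bounds, ch4_biased) - col
```

Here the two "stratospheric" layers carry 190/990 of the column weight, so a 100 ppb stratospheric error maps to roughly a 19 ppb XCH4 bias, the same order as the model biases quoted above.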
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soltani, M; Sefidgar, M; Bazmara, H
2015-06-15
Purpose: In this study, a mathematical model is utilized to simulate FDG distribution in tumor tissue. In contrast to conventional compartmental modeling, tracer distributions across space and time are directly linked together (i.e. moving beyond ordinary differential equations (ODEs) to utilizing partial differential equations (PDEs) coupling space and time). The diffusion and convection transport mechanisms are both incorporated to model tracer distribution. We aimed to investigate the contributions of these two mechanisms on FDG distribution for various tumor geometries obtained from PET/CT images. Methods: FDG transport was simulated via a spatiotemporal distribution model (SDM). The model is based on a 5K compartmental model. We model the fact that tracer concentration in the second compartment (extracellular space) is modulated via convection and diffusion. Data from n=45 patients with pancreatic tumors as imaged using clinical FDG PET/CT imaging were analyzed, and geometrical information from the tumors including size, shape, and aspect ratios were classified. Tumors with varying shapes and sizes were assessed in order to investigate the effects of convection and diffusion mechanisms on FDG transport. Numerical methods simulating interstitial flow and solute transport in tissue were utilized. Results: We have shown the convection mechanism to depend on the shape and size of tumors whereas diffusion mechanism is seen to exhibit low dependency on shape and size. Results show that concentration distribution of FDG is relatively similar for the considered tumors; and that the diffusion mechanism of FDG transport significantly dominates the convection mechanism. The Peclet number which shows the ratio of convection to diffusion rates was shown to be of the order of 10⁻³ for all considered tumors.
Conclusion: We have demonstrated that even though convection leads to varying tracer distribution profiles depending on tumor shape and size, the domination of the diffusion phenomenon prevents these factors from modulating FDG distribution.
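The Peclet comparison reduces to a one-line formula, Pe = v*L/D, the ratio of convective to diffusive transport rates. The velocity, length scale, and diffusivity below are order-of-magnitude assumptions chosen only to illustrate a diffusion-dominated (Pe << 1) regime, not the study's fitted values:

```python
# Peclet number: ratio of convective to diffusive transport rates.
# v (interstitial fluid velocity, m/s), L (length scale, m), and
# D (tracer diffusivity, m^2/s) below are illustrative assumptions.

def peclet(v, L, D):
    return v * L / D

Pe = peclet(v=1e-10, L=1e-3, D=1e-10)   # -> Pe of order 1e-3: diffusion dominates
```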
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can degrade antenna performance, depending also on the environmental conditions and antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool; this simulation provides guidance for hardware implementation. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic, or soft, failures are modeled as a modified Gaussian distribution. The resulting amplitude characteristics then determine the array excitation coefficients; the phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half-power beamwidth, mainbeam phase errors, sidelobe levels, and beam pointing errors were studied as functions of amplifier and phase shifter variations. General specifications for amplifier and phase shifter tolerances in various architecture configurations for C band and S band were determined.
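The failure modeling described above can be sketched as a pattern calculation: Gaussian amplitude spread for soft failures, zeroed elements for hard failures, uniformly distributed phase error, then the array factor. The element count, error levels, spacing, and failure rate are illustrative assumptions, not DISTAR's settings:

```python
import numpy as np

# Array factor of a uniform linear array with randomized element errors.
# All error magnitudes and the 5% hard-failure rate are illustrative.

rng = np.random.default_rng(42)
n = 32
amp = 1.0 + rng.normal(scale=0.1, size=n)    # soft failures: Gaussian amplitude
amp[rng.random(n) < 0.05] = 0.0              # hard failures: no power out
phase_err = rng.uniform(-0.1, 0.1, size=n)   # phase-shifter error (radians)

d = 0.5                                      # element spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
elems = np.arange(n)

weights = amp * np.exp(1j * phase_err)                        # excitation
steer = np.exp(1j * 2 * np.pi * d * np.outer(np.sin(theta), elems))
af = np.abs(steer @ weights)                                  # pattern magnitude
peak_gain_db = 20 * np.log10(af.max() / n)   # loss relative to an ideal array
```

Repeating the draw many times yields the statistics of gain loss, beam-pointing error, and sidelobe degradation that such a study tabulates against component tolerances.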
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased multinomial and correct classification probability estimates when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
NASA Technical Reports Server (NTRS)
Wrotniak, J. A.; Yodh, G. B.
1985-01-01
The x-y controversy is studied by introducing models with as many features (except for the x and y distributions) in common as possible, to avoid extrapolation problems; only primary energies of 500 TeV are considered. To prove the point, Monte Carlo simulations are performed of EAS generated by 500 TeV vertical primary protons. Four different nuclear interaction models were used. Two of them are described elsewhere. The other two are: (1) Model M-Y00, with inclusive x and y distributions behaving in a scaling way; and (2) Model M-F00, exactly equivalent to the above at and below ISR energies (1 TeV in the lab), then gradually changing to reproduce the rapidity distributions at 155 TeV as given by SPS proton-antiproton data. This was achieved by a gradual decrease in the scale unit of the x distributions of produced secondaries as the interaction energy increases. Other modifications to the M-Y00 model were also made.
An Urban Diffusion Simulation Model for Carbon Monoxide
ERIC Educational Resources Information Center
Johnson, W. B.; And Others
1973-01-01
A relatively simple Gaussian-type diffusion simulation model for calculating urban carbon (CO) concentrations as a function of local meteorology and the distribution of traffic is described. The model can be used in two ways: in the synoptic mode and in the climatological mode. (Author/BL)
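A minimal version of such a Gaussian-type relation is the ground-level concentration downwind of a continuous ground source, C = Q / (pi * u * sigma_y * sigma_z) * exp(-y^2 / (2*sigma_y^2)). The emission rate, wind speed, and dispersion widths below are illustrative values, not the model's calibrated inputs:

```python
import math

# Ground-level Gaussian plume for a continuous ground source (with ground
# reflection folded into the prefactor). Q (g/s), u (m/s), sigma_y and
# sigma_z (m) are illustrative assumptions.

def plume_concentration(Q, u, y, sigma_y, sigma_z):
    return Q / (math.pi * u * sigma_y * sigma_z) * math.exp(
        -y**2 / (2.0 * sigma_y**2)
    )

c_center = plume_concentration(Q=10.0, u=3.0, y=0.0, sigma_y=50.0, sigma_z=25.0)
c_offset = plume_concentration(Q=10.0, u=3.0, y=100.0, sigma_y=50.0, sigma_z=25.0)
```

An urban CO model of this type sums such contributions over the traffic-distributed sources, with sigma_y and sigma_z growing with downwind distance and stability class.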
Evaluation of the whole body physiologically based pharmacokinetic (WB-PBPK) modeling of drugs.
Munir, Anum; Azam, Shumaila; Fazal, Sahar; Bhatti, A I
2018-08-14
Physiologically based pharmacokinetic (PBPK) modeling is a supporting tool in drug discovery and development. Simulations produced by these models help to save time and aid in examining the effects of different variables on the pharmacokinetics of drugs. For this purpose, Sheila and Peters suggested a PBPK model capable of performing simulations to study a given drug's absorption. There is a need to extend this model to the whole body, entailing all the other processes (distribution, metabolism, and elimination) besides absorption. The aim of this study is to construct a WB-PBPK model by integrating absorption, distribution, metabolism, and elimination processes with the existing PBPK model. Absorption, distribution, metabolism, and elimination models are designed, integrated with the PBPK model, and validated. For validation purposes, clinical records of a few drugs were collected from the literature. The developed WB-PBPK model is affirmed by comparing the simulations produced by the model against the collected clinical data. It is proposed that the WB-PBPK model may be used in the pharmaceutical industry to create pharmacokinetic profiles of drug candidates for better outcomes, as it is an advanced PBPK model and creates comprehensive PK profiles for drug ADME as concentration-time plots. Copyright © 2018 Elsevier Ltd. All rights reserved.
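The kind of mass-balance ODE a WB-PBPK model chains together per organ can be sketched with a single absorption (gut) compartment feeding a central compartment with first-order elimination. The rate constants below are illustrative assumptions, not fitted values; a real whole-body model has one such balance per tissue, linked by blood flows:

```python
# Minimal one-compartment-with-absorption PK sketch, solved by forward Euler.
# ka (1/h) and ke (1/h) are illustrative rate constants; dose in arbitrary units.

def simulate_pk(dose, ka=1.0, ke=0.2, dt=0.01, t_end=24.0):
    """Returns (times, central-compartment amounts) over t_end hours."""
    gut, central = dose, 0.0
    t, times, amounts = 0.0, [0.0], [0.0]
    while t < t_end:
        absorbed = ka * gut * dt             # first-order absorption from gut
        gut -= absorbed
        central += absorbed - ke * central * dt   # first-order elimination
        t += dt
        times.append(t)
        amounts.append(central)
    return times, amounts

times, amounts = simulate_pk(dose=100.0)
```

The trajectory rises to a peak near t = ln(ka/ke)/(ka-ke) (about 2 h with these constants) and then decays, i.e. the familiar concentration-time profile the abstract refers to.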
NASA Astrophysics Data System (ADS)
Guan, Fada
The Monte Carlo method has been successfully applied to simulating particle transport problems. Most Monte Carlo simulation tools are static: they can only perform simulations for problems with fixed physics and geometry settings. Proton therapy, however, is a dynamic treatment technique in clinical application. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled. One important application of Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplicity, a mathematical model of a human body is usually used as the target, but then only the average dose over a whole organ or tissue can be obtained, rather than the accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the medical images of a patient from CT scanning into a patient voxel geometry. When this voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 run and to analyze and plot the data afterwards. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for a prostate cancer patient treated with proton therapy.
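The image-to-voxel conversion described above amounts to binning each voxel's Hounsfield unit (HU) value into a material class the transport code can use. A minimal sketch of that idea in Python (the thresholds and material names are illustrative assumptions, not the MATLAB routine used in the study; production conversions use scanner-specific calibration curves):

```python
import numpy as np

# Hypothetical HU bin edges separating material classes.
HU_EDGES = [-900, -100, 100, 300]
MATERIALS = ["air", "lung", "soft_tissue", "bone_low", "bone_high"]

def hu_to_materials(hu_volume):
    """Map a CT volume in Hounsfield units to a material index per voxel."""
    return np.digitize(hu_volume, HU_EDGES)

hu = np.array([[-1000, -500], [0, 800]])   # toy 2x2 "slice"
idx = hu_to_materials(hu)                  # material index per voxel
names = np.vectorize(MATERIALS.__getitem__)(idx)
```

Each index would then be associated with a material definition (density, composition) in the transport code's geometry.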
Investigation on the radial micro-motion about piston of axial piston pump
NASA Astrophysics Data System (ADS)
Xu, Bing; Zhang, Junhui; Yang, Huayong; Zhang, Bin
2013-03-01
The limit working parameters and service life of an axial piston pump are determined by the load-carrying ability and lubrication characteristics of its key friction pairs. The design and optimization of these friction pairs are therefore a central and difficult problem in research on axial piston pumps. In traditional research on the piston/cylinder pair, the assembly relationship of piston and cylinder bore is simplified into an ideal cylindrical pair, which cannot be used to analyze in detail the influence of the radial micro-motion of the piston on the distribution of oil-film thickness and pressure. In this paper, based on the lubrication theory of the oil film, a numerical simulation model is built that takes into consideration the influences of roughness, elastic deformation of the piston, and the pressure-viscosity effect. With the simulation model, the dynamic characteristics of the radial micro-motion and pressure distribution are analyzed, and the relationships between radial micro-motion and carrying ability, lubrication condition, and abrasion are discussed. Furthermore, a model pump for measuring the pressure distribution of the oil film between piston and cylinder bore is designed. The comparison of simulated and experimental pressure distributions shows that the simulation model has high accuracy. The experimental and simulation results demonstrate that, due to the radial micro-motion, the pressure distribution has peak values much higher than the boundary pressure in the piston chamber, and that abrasion of the piston takes place mainly at the end close to the piston ball. In addition, improving the manufactured roundness and straightness of piston and cylinder bore helps to improve the carrying ability of the piston/cylinder pair. The proposed research provides references for designing the piston/cylinder pair and helps to prolong the service life of axial piston pumps.
Scherer, Michael D; McGlumphy, Edwin A; Seghi, Robert R; Campagni, Wayne V
2013-01-01
The purpose of this investigation was to evaluate the effects of the number and distribution of implants on in vitro dislodging forces applied to a simulated implant-supported overdenture, and to examine differences between several attachment systems. An experiment was undertaken utilizing a model simulating a mandibular edentulous ridge with dental implants in positions approximating tooth positions in the natural dentition. A cobalt-chromium cast testing framework was used to measure the peak load required to disconnect an attachment. Four types of commercially available attachments were used in various positions on the model in sequence to evaluate the retention and stability of overdentures based on implant number and distribution: (1) ERA, (2) O-Ring, (3) Locator, and (4) Ball. For each group, 10 measurements were made of peak dislodging forces. Means were calculated and differences among the systems, directions, and groups were identified using a repeated measures analysis of variance (α = .05). The interactions between attachment system, direction of force, and implant number and distribution were statistically significant. Vertical dislodging forces of the simulated overdenture prosthesis increased with additional widely spaced implants. Oblique dislodging forces increased with additional widely spaced implants except in the two-implant model with all attachments, and in the four-implant groups with Locator attachments. Anteroposterior dislodging forces increased with additional widely spaced implants except in the four-implant groups with Ball and Locator attachments. Ball attachments showed the highest levels of retention and stability, followed by Locator, O-Ring, and ERA.
Within the limitations of this study, retention and stability of an implant overdenture prosthesis are significantly affected by implant number, implant distribution, and abutment type.
Modeling field-scale cosolvent flooding for DNAPL source zone remediation
NASA Astrophysics Data System (ADS)
Liang, Hailian; Falta, Ronald W.
2008-02-01
A three-dimensional, compositional, multiphase flow simulator was used to model a field-scale test of DNAPL removal by cosolvent flooding. The DNAPL at this site was tetrachloroethylene (PCE), and the flooding solution was an ethanol/water mixture, with up to 95% ethanol. The numerical model, UTCHEM, accounts for the equilibrium phase behavior and multiphase flow of a ternary ethanol-PCE-water system. Simulations of enhanced cosolvent flooding using a kinetic interphase mass transfer approach show that when a very high concentration of alcohol is injected, the DNAPL/water/alcohol mixture forms a single phase and local mass transfer limitations become irrelevant. The field simulations were carried out in three steps. In the first step, a simple uncalibrated layered model is developed. This model is capable of roughly reproducing the production well concentrations of alcohol, but not of PCE. A more refined (but uncalibrated) permeability model is able to accurately simulate the breakthrough concentrations of injected alcohol from the production wells, but is unable to accurately predict the PCE removal. The final model uses a calibration of the initial PCE distribution to obtain good matches with the PCE effluent curves from the extraction wells. It is evident that the effectiveness of DNAPL source zone remediation is mainly affected by the spatial heterogeneity of the porous media and the variable (and unknown) DNAPL distribution. The inherent uncertainty in the DNAPL distribution at real field sites means that some form of calibration of the initial contaminant distribution will almost always be required to match contaminant effluent breakthrough curves.
NASA Astrophysics Data System (ADS)
Keilbach, D.; Drews, C.; Taut, A.; Wimmer-Schweingruber, R. F.
2016-12-01
Recent studies of the inflow direction of the local interstellar medium from PUI density distributions have shown that the extrema of the longitudinal distribution of PUI velocities (with respect to the solar wind speed) can be attributed to the radial velocity of the interstellar neutral seed population and are symmetric around the inflow direction of the local interstellar medium. This work aims to model pickup ion injection rates from photoionization (the main process of interstellar PUI production) throughout the heliosphere. To that end, a seed population of interstellar neutrals is injected into a model heliosphere at a distance of 60 AU from the Sun, where each particle's initial speed is drawn from a Maxwellian distribution at a temperature of 1 eV with an inflow speed of 22 km/s. The density of the interstellar neutrals is then integrated over the model heliosphere, while the movement of the neutrals is simulated using timestep methods. To model the focusing of the interstellar neutral trajectories by the Sun's gravitational potential, the model heliosphere contains a central gravitational potential. Each neutral test particle can be ionized via photoionization with a per-timestep probability inversely proportional to the square of the neutral's distance to the Sun. By tracking the ionization rate as a function of location, PUI injection rates have been determined. Using these simulations, the density distributions of different species of interstellar neutrals have been calculated. In addition, location-dependent injection rates of different species of PUIs have been calculated, which show an increased rate of PUI production in the focusing-cone region (e.g., for He+ PUIs), but also in the crescent region (e.g., for O+ PUIs). Furthermore, the longitudinal distribution of the neutrals' velocity at 1 AU is calculated from the simulation results in order to estimate the PUI cut-off as a function of ecliptic longitude.
Figure: Simulated He neutral density (left) and simulated He PUI production rates from photoionization (right). The sun is located at 0 AU at both x-and y-axes.
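The per-timestep ionization step described above can be sketched as follows; the rate constant at 1 AU is an assumed placeholder, not a value from the study:

```python
import numpy as np

RATE_1AU = 1.0e-7   # assumed photoionization rate at 1 AU [1/s]

def ionization_probability(r_au, dt):
    """Per-timestep ionization probability for a neutral at r_au (AU).

    The rate scales with the solar UV flux, i.e. inversely with the
    squared distance to the Sun. For rate * dt << 1 the exact survival
    expression 1 - exp(-rate * dt) reduces to rate * dt.
    """
    rate = RATE_1AU / r_au**2
    return 1.0 - np.exp(-rate * dt)
```

In a particle loop, each neutral would be tested against this probability every timestep and, if ionized, tallied into the location-dependent PUI injection rate.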
Eddy, Sean R.
2008-01-01
Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
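Under the conjecture above, converting a bit score into an E-value reduces to evaluating the Gumbel tail with λ = log 2. A minimal sketch (the location parameter μ still has to be fitted or tabulated per model; function names are illustrative):

```python
import math

LAMBDA = math.log(2)   # conjectured constant for probabilistic bit scores

def gumbel_pvalue(score, mu):
    """P(S > score) for a Gumbel distribution with lambda = log 2."""
    return 1.0 - math.exp(-math.exp(-LAMBDA * (score - mu)))

def evalue(score, mu, n_comparisons):
    """Expected number of hits at or above `score` in a database search."""
    return n_comparisons * gumbel_pvalue(score, mu)
```

For high scores the tail is well approximated by exp(-λ(score - μ)), i.e. each extra bit of score halves the expected number of chance hits.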
NASA Astrophysics Data System (ADS)
Ulfah, S.; Awalludin, S. A.; Wahidin
2018-01-01
The advection-diffusion model is one of the mathematical models that can be used to understand the distribution of air pollutants in the atmosphere. This study uses the time-dependent 2D advection-diffusion model to simulate the distribution of air pollution, in order to find out whether pollutants are more concentrated at ground level or near the source of emission under particular atmospheric conditions, namely stable, unstable, and neutral conditions. Wind profile, eddy diffusivity, and temperature are considered as model parameters. The model is solved using an explicit finite difference method and visualized by a computer program developed in the Lazarus programming environment. The results show that the atmospheric condition alone is not conclusive in determining the level of pollutant concentration, as each model parameter has its own effect under each atmospheric condition.
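A single explicit finite-difference update of the 2D advection-diffusion equation can be sketched as follows (uniform wind and diffusivities, first-order upwind advection, fixed boundaries; these discretization choices are illustrative assumptions, not necessarily those of the study):

```python
import numpy as np

def step(c, u, v, kx, ky, dx, dy, dt):
    """One explicit finite-difference step of 2D advection-diffusion.

    c: concentration field; u, v: wind components (assumed >= 0 so a
    simple upwind difference applies); kx, ky: eddy diffusivities.
    Boundary values are held fixed (Dirichlet).
    """
    cn = c.copy()
    adv_x = -u * (c[1:-1, 1:-1] - c[1:-1, :-2]) / dx        # upwind in x
    adv_y = -v * (c[1:-1, 1:-1] - c[:-2, 1:-1]) / dy        # upwind in y
    dif_x = kx * (c[1:-1, 2:] - 2 * c[1:-1, 1:-1] + c[1:-1, :-2]) / dx**2
    dif_y = ky * (c[2:, 1:-1] - 2 * c[1:-1, 1:-1] + c[:-2, 1:-1]) / dy**2
    cn[1:-1, 1:-1] += dt * (adv_x + adv_y + dif_x + dif_y)
    return cn
```

Explicit schemes like this are only stable for sufficiently small dt (roughly dt <= dx**2 / (2 * kx) for the diffusive part), which constrains the timestep choice in practice.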
Synthetic Survey of the Kepler Field
NASA Astrophysics Data System (ADS)
Wells, Mark; Prša, Andrej
2018-01-01
In the era of large-scale surveys, including LSST and Gaia, binary population studies will flourish due to the large influx of data. In addition to probing binary populations as a function of galactic latitude, under-sampled groups such as low-mass binaries will be observed at an unprecedented rate. To prepare for these missions, binary population simulations need to be carried out at high fidelity. These simulations will enable the creation of simulated data and, through comparison with real data, will allow the underlying binary parameter distributions to be explored. In order for the simulations to be considered robust, they should reproduce observed distributions accurately. To this end we have developed a simulator which takes input models and creates a synthetic population of eclipsing binaries. Starting from a galactic single-star model, implemented using Galaxia, a code by Sharma et al. (2011), and applying observed multiplicity, mass-ratio, period, and eccentricity distributions, as reported by Raghavan et al. (2010), Duchêne & Kraus (2013), and Moe & Di Stefano (2017), we are able to generate synthetic binary surveys corresponding to any survey cadence. In order to calibrate our input models we compare the results of our synthesized eclipsing binary survey to the Kepler Eclipsing Binary Catalog.
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, the uncertainties associated with temporal rainfall and model parameters must be understood. Estimating temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall in poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. Reducing rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAM(ZS) algorithm. A likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures produced the most realistic temporal rainfall distributions, and all of these rainfall estimates simulated streamflow better than a traditional calibration approach. The choice of wavelet is shown to have a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions.
The range and variance of rainfall time series able to simulate streamflow better than a traditional calibration approach is a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
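The dimensionality reduction at the heart of this approach, keeping only low-order approximation coefficients of the rainfall series, can be illustrated with a hand-rolled Haar DWT (the study does not specify the Haar wavelet; it is the simplest choice and is assumed here purely for illustration):

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficients; len(x) must be even.
    """
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: scaled local sums
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: scaled local differences
    return a, d

def reduce_rainfall(series, levels):
    """Keep only approximation coefficients after `levels` decompositions,
    shrinking the number of unknowns to estimate by a factor of 2**levels."""
    a = np.asarray(series, dtype=float)
    for _ in range(levels):
        a, _ = haar_dwt(a)
    return a
```

In the inversion, the MCMC sampler would then estimate these few approximation coefficients (plus model parameters) instead of every rainfall ordinate, and the rainfall series would be reconstructed by the inverse transform with the discarded detail coefficients set to zero.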
NASA Astrophysics Data System (ADS)
César Mansur Filho, Júlio; Dickman, Ronald
2011-05-01
We study symmetric sleepy random walkers, a model exhibiting an absorbing-state phase transition in the conserved directed percolation (CDP) universality class. Unlike most examples of this class studied previously, this model possesses a continuously variable control parameter, facilitating analysis of critical properties. We study the model using two complementary approaches: analysis of the numerically exact quasistationary (QS) probability distribution on rings of up to 22 sites, and Monte Carlo simulation of systems of up to 32 000 sites. The resulting estimates for the critical exponent β and related critical exponents are reported.
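The quasistationary distribution referred to above can be computed for small absorbing Markov chains by an iteration that re-injects the probability flowing into the absorbing state. A minimal sketch (the example transition matrix in the test is illustrative, not the sleepy-walker dynamics):

```python
import numpy as np

def quasistationary(P, transient, n_iter=2000):
    """Iterate toward the quasistationary distribution of an absorbing
    Markov chain: evolve the distribution restricted to the transient
    states, then renormalize, which re-injects the probability lost to
    the absorbing state proportionally to the current distribution.

    P: full transition matrix (rows sum to 1); transient: index list.
    """
    Q = P[np.ix_(transient, transient)]       # dynamics among transient states
    p = np.full(len(transient), 1.0 / len(transient))
    for _ in range(n_iter):
        p = p @ Q
        p /= p.sum()                          # renormalize (re-injection step)
    return p
```

The fixed point is the left eigenvector of the transient sub-matrix belonging to its largest eigenvalue, i.e. the distribution conditioned on survival.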
Vogstad, A R; Moxley, R A; Erickson, G E; Klopfenstein, T J; Smith, D R
2014-06-01
Pens of cattle with high Escherichia coli O157:H7 (STEC O157) prevalence at harvest may present a greater risk to food safety than pens of lower prevalence. Vaccination of live cattle against STEC O157 has been proposed as an approach to reduce STEC O157 prevalence in live cattle. Our objective was to create a stochastic simulation model to evaluate the effectiveness of pre-harvest interventions. We used the model to compare STEC O157 prevalence distributions for summer- and winter-fed cattle to summer-fed cattle immunized with a type III secreted protein (TTSP) vaccine. Model inputs were an estimate of vaccine efficacy, observed frequency distributions for number of animals within a pen, and pen-level faecal shedding prevalence for summer and winter. Uncertainty about vaccine efficacy was simulated using a log-normal distribution (mean = 58%, SE = 0.14). Model outputs were distributions of STEC O157 faecal pen prevalence of summer-fed cattle unvaccinated and vaccinated, and winter-fed cattle unvaccinated. The simulation was performed 5000 times. Summer faecal prevalence ranged from 0% to 80% (average = 30%). Thirty-six per cent of summer-fed pens had STEC O157 prevalence >40%. Winter faecal prevalence ranged from 0% to 60% (average = 10%). Seven per cent of winter-fed pens had STEC O157 prevalence >40%. Faecal prevalence for summer-fed pens vaccinated with a 58% efficacious vaccine product ranged from 0% to 52% (average = 13%). Less than one per cent of vaccinated pens had STEC O157 prevalence >40%. In this simulation, vaccination mitigated the risk of STEC O157 faecal shedding to levels comparable to winter, with the major effects being reduced average shedding prevalence, reduced variability in prevalence distribution, and a reduction in the occurrence of the highest prevalence pens. Food safety decision-makers may find this modelling approach useful for evaluating the value of pre-harvest interventions. © 2013 Blackwell Verlag GmbH.
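The structure of such a stochastic simulation, drawing an uncertain vaccine efficacy each iteration and applying it to a sampled pen prevalence, can be sketched as follows. This is a toy sketch, not the authors' model: a clipped normal stands in for their log-normal uncertainty distribution, and the unvaccinated prevalence sampler is an assumed uniform placeholder rather than their observed frequency distributions:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pens(n_iter, base_prev, eff_mean=0.58, eff_se=0.14):
    """Monte Carlo sketch of pen-level prevalence under vaccination.

    base_prev: callable sampling an unvaccinated pen prevalence.
    Vaccine efficacy is drawn per iteration from a normal distribution
    clipped to [0, 1] (standing in for the paper's log-normal model).
    """
    out = np.empty(n_iter)
    for i in range(n_iter):
        eff = np.clip(rng.normal(eff_mean, eff_se), 0.0, 1.0)
        out[i] = base_prev() * (1.0 - eff)    # shedding reduced by efficacy
    return out

# Toy unvaccinated summer prevalence: uniform on [0, 0.8] (assumed).
prev = simulate_pens(5000, lambda: rng.uniform(0.0, 0.8))
```

Output distributions like `prev` can then be summarized as in the paper, e.g. by the mean prevalence and the fraction of pens exceeding a threshold such as 40%.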
Shao, J Y; Shu, C; Huang, H B; Chew, Y T
2014-03-01
A free-energy-based phase-field lattice Boltzmann method is proposed in this work to simulate multiphase flows with density contrast. The present method is to improve the Zheng-Shu-Chew (ZSC) model [Zheng, Shu, and Chew, J. Comput. Phys. 218, 353 (2006)] for correct consideration of density contrast in the momentum equation. The original ZSC model uses the particle distribution function in the lattice Boltzmann equation (LBE) for the mean density and momentum, which cannot properly consider the effect of local density variation in the momentum equation. To correctly consider it, the particle distribution function in the LBE must be for the local density and momentum. However, when the LBE of such distribution function is solved, it will encounter a severe numerical instability. To overcome this difficulty, a transformation, which is similar to the one used in the Lee-Lin (LL) model [Lee and Lin, J. Comput. Phys. 206, 16 (2005)] is introduced in this work to change the particle distribution function for the local density and momentum into that for the mean density and momentum. As a result, the present model still uses the particle distribution function for the mean density and momentum, and in the meantime, considers the effect of local density variation in the LBE as a forcing term. Numerical examples demonstrate that both the present model and the LL model can correctly simulate multiphase flows with density contrast, and the present model has an obvious improvement over the ZSC model in terms of solution accuracy. In terms of computational time, the present model is less efficient than the ZSC model, but is much more efficient than the LL model.
Money-center structures in dynamic banking systems
NASA Astrophysics Data System (ADS)
Li, Shouwei; Zhang, Minghui
2016-10-01
In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.
A spatially distributed energy balance snowmelt model for application in mountain basins
Marks, D.; Domingo, J.; Susong, D.; Link, T.; Garen, D.
1999-01-01
Snowmelt is the principal source of soil moisture, ground-water recharge, and stream-flow in mountainous regions of the western US, Canada, and other similar regions of the world. Information on the timing, magnitude, and contributing area of melt under variable or changing climate conditions is required for successful water and resource management. A coupled energy and mass-balance model, ISNOBAL, is used to simulate the development and melting of the seasonal snowcover in several mountain basins in California, Idaho, and Utah. Simulations are done over basins varying from 1 to 2500 km2, with simulation periods varying from a few days for the smallest basin, Emerald Lake watershed in California, to multiple snow seasons for the Park City area in Utah. The model is driven by topographically corrected estimates of radiation, temperature, humidity, wind, and precipitation. Simulation results in all basins closely match independently measured snow water equivalent, snow depth, or runoff during both the development and depletion of the snowcover. Spatially distributed estimates of snow deposition and melt allow us to better understand the interaction between topographic structure, climate, and moisture availability in mountain basins of the western US. Application of topographically distributed models such as this will lead to improved water resource and watershed management.
Suzuki, Yuma; Shimizu, Tetsuhide; Yang, Ming
2017-01-01
Quantitative evaluation of biomolecule transport with multi-physics at the nano/micro scale is needed in order to optimize the design of microfluidic devices for biomolecule detection with high sensitivity and rapid diagnosis. This paper investigates the effectiveness of computational simulation, using a numerical model of biomolecule transport with multi-physics near a microchannel surface, for the development of biomolecule-detection devices. Biomolecule transport under fluid drag force, electric double layer (EDL) force, and van der Waals force was modeled with the Newtonian equation of motion. The validity of the model was verified by comparing the influence of ionic strength and flow velocity on the biomolecule distribution near the surface with experimental results from previous studies. The influence of the acting forces on the distribution near the surface was then investigated by simulation. With all acting forces combined, the trend of the distribution with ionic strength and flow velocity was in agreement with the experimental results. Furthermore, the EDL force dominated the distribution near the surface compared with the fluid drag force, except in the case of high velocity and low ionic strength. The knowledge gained from the simulation may be useful for the design of biomolecule-detection devices, and the simulation can be expected to serve as a design tool for high detection sensitivity and rapid diagnosis in the future.
Numerical simulation of failure behavior of granular debris flows based on flume model tests.
Zhou, Jian; Li, Ye-xun; Jia, Min-cai; Li, Cui-na
2013-01-01
In this study, the failure behaviors of debris flows were studied by flume model tests with artificial rainfall and by numerical simulations (PFC3D). The model tests revealed that grain size distribution has profound effects on the failure mode: failure of a medium-sand slope started with cracks at the crest and took the form of retrogressive toe sliding. With an increasing fraction of fine particles in the soil, the failure mode of the slopes changed to fluidized flow. The discrete element method in PFC3D overcomes the continuum assumption of traditional mechanics and accounts for the characteristics of individual particles. Thus, a numerical simulation model using a coupled liquid-solid method was developed to simulate the debris flow. Compared with the experimental results, the numerical simulations indicated that the failure mode of the medium-sand slope was retrogressive toe sliding, while that of the fine-sand slope was fluidized sliding. The simulation results are consistent with the model tests and theoretical analysis, confirming that grain size distribution causes the different failure behaviors of granular debris flows. This research should serve as a guide for exploring the theory of debris flow and for improving debris flow prevention and mitigation.
Evers, J B; Vos, J; Yin, X; Romero, P; van der Putten, P E L; Struik, P C
2010-05-01
Intimate relationships exist between the form and function of plants, determining many of the processes governing their growth and development. However, most crop simulation models created to simulate plant growth and, for example, predict biomass production neglect plant structure. In this study, a detailed simulation model of the growth and development of spring wheat (Triticum aestivum) is presented, which integrates degree of tillering and canopy architecture with organ-level light interception, photosynthesis, and dry-matter partitioning. An existing spatially explicit 3D architectural model of wheat development was extended with routines for organ-level microclimate, photosynthesis, assimilate distribution within the plant structure according to organ demands, and organ growth and development. Outgrowth of tiller buds was made dependent on the ratio between assimilate supply and demand of the plants. Organ-level photosynthesis, biomass production, and bud outgrowth were simulated satisfactorily. However, to improve crop simulation results, further effort is needed to mechanistically model other major plant physiological processes such as nitrogen uptake and distribution, tiller death, and leaf senescence. Nevertheless, the work presented here is a significant step towards a mechanistic functional-structural plant model that integrates plant architecture with key plant processes.
NASA Astrophysics Data System (ADS)
Kleinn, J.; Frei, C.; Gurtz, J.; Vidale, P. L.; Schär, C.
2003-04-01
The consequences of extreme runoff and extreme water levels are among the most important weather-induced natural hazards. The question of how a global climate change would affect the runoff regime, especially the frequency of floods, is of utmost importance. In wintertime, two possible climate effects could influence the runoff statistics of large Central European rivers: the shift from snowfall to rain as a consequence of higher temperatures, and the increase in heavy precipitation events due to an intensification of the hydrological cycle. This study examines their combined effect on the runoff statistics of the river Rhine. To this end, sensitivity experiments with a model chain comprising a regional climate model and a distributed runoff model are presented. The experiments are based on an idealized surrogate climate change scenario which stipulates a uniform increase in temperature of 2 K and an increase in atmospheric specific humidity of 15% (corresponding to unchanged relative humidity) in the forcing fields for the regional climate model. The regional climate model CHRM is based on the mesoscale weather prediction model HRM of the German Weather Service (DWD) and has been adapted for climate simulations. The model is used in a nested mode with horizontal resolutions of 56 km and 14 km. The boundary conditions are taken from the original ECMWF reanalysis and from a modified version representing the surrogate scenario. The distributed runoff model (WaSiM) is used at a horizontal resolution of 1 km for the whole Rhine basin down to Cologne. The coupling of the models is provided by a downscaling of the climate model fields (precipitation, temperature, radiation, humidity, and wind) to the resolution of the distributed runoff model. The simulations cover the period of September 1987 to January 1994, with special emphasis on the five winter seasons 1989/90 to 1993/94, each from November until January.
A detailed validation of the control simulation shows a good correspondence of the precipitation fields from the regional climate model with measured fields regarding the distribution of precipitation at the scale of the Rhine basin. Systematic errors are visible at the scale of single subcatchments, in the altitudinal distribution, and in the frequency distribution of precipitation. These errors only marginally affect the runoff simulations, which show good correspondence with runoff observations. The presentation includes results from the scenario simulations for the whole basin as well as for Alpine and lowland subcatchments. The change in the runoff statistics is analyzed with respect to the changes in snowfall and in the frequency distribution of precipitation.
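The surrogate warming applied to the forcing fields amounts to a uniform shift of temperature and a multiplicative scaling of specific humidity. A minimal sketch in Python (the function name and list-based fields are illustrative, not part of the CHRM/WaSiM tool chain):

```python
def apply_surrogate_scenario(temperature_k, specific_humidity):
    """Apply the idealized surrogate scenario: a uniform +2 K warming and
    +15% specific humidity (consistent with unchanged relative humidity)."""
    t_new = [t + 2.0 for t in temperature_k]
    q_new = [q * 1.15 for q in specific_humidity]
    return t_new, q_new
```

In the study this modification is applied to the reanalysis boundary conditions before they drive the regional climate model, so that control and scenario runs differ only by this idealized perturbation.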
Hydrological and water quality processes simulation by the integrated MOHID model
NASA Astrophysics Data System (ADS)
Epelde, Ane; Antiguedad, Iñaki; Brito, David; Jauch, Eduardo; Neves, Ramiro; Sauvage, Sabine; Sánchez-Pérez, José Miguel
2016-04-01
Different modelling approaches have been used in recent decades to study the water quality degradation caused by non-point source pollution. In this study, the MOHID fully distributed and physics-based model has been employed to simulate hydrological processes and nitrogen dynamics in a nitrate vulnerable zone: the Alegria River watershed (Basque Country, Northern Spain). The results of this study indicate that the MOHID code is suitable for hydrological process simulation at the watershed scale, as the model shows satisfactory performance at simulating discharge (NSE: 0.74 and 0.76 during the calibration and validation periods, respectively). The agronomical component of the code allowed the simulation of agricultural practices, which led to adequate crop yield simulation in the model. Furthermore, the simulation of nitrogen export also shows satisfactory performance (NSE: 0.64 and 0.69 during the calibration and validation periods, respectively). While the lack of field measurements does not allow an in-depth evaluation of the nutrient cycling processes, the MOHID model was observed to simulate annual denitrification within the general ranges established for agricultural watersheds (in this study, 9 kg N ha-1 year-1). In addition, the model coherently simulated the spatial distribution of the denitrification process, which is directly linked to the simulated hydrological conditions: the highest rates were localized near the discharge zone of the aquifer and where the aquifer thickness is low. These results evidence the strength of this model in simulating watershed-scale hydrological processes as well as crop production and the water quality degradation derived from agricultural activity (considering both nutrient export and nutrient cycling processes).
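The discharge performance quoted above is measured with the Nash-Sutcliffe efficiency (NSE), which compares the residual variance of the simulation against the variance of the observations; NSE = 1 is a perfect fit and NSE = 0 means the model is no better than the observed mean. A minimal sketch, assuming plain lists of paired observed and simulated values:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - (residual variance / observed variance)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```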
Can we improve streamflow simulation by using higher resolution rainfall information?
NASA Astrophysics Data System (ADS)
Lobligeois, Florent; Andréassian, Vazken; Perrin, Charles
2013-04-01
The catchment response to rainfall is the interplay between the space-time variability of precipitation, catchment characteristics, and antecedent hydrological conditions. Precipitation dominates the high-frequency hydrological response, and its simulation is thus dependent on the way rainfall is represented. One of the characteristics that distinguishes distributed from lumped models is their ability to represent explicitly the spatial variability of precipitation and catchment characteristics. The sensitivity of runoff hydrographs to the spatial variability of forcing data has been a major concern of researchers over the last three decades. However, although the literature on the relationship between spatial rainfall and runoff response is abundant, results are mixed and sometimes contradictory. Several studies concluded that including information on the spatial distribution of rainfall improves discharge simulation (e.g. Ajami et al., 2004, among others), whereas other studies showed no significant improvement in simulations with better information on rainfall spatial pattern (e.g. Andréassian et al., 2004, among others). The difficulty in reaching a clear consensus stems mainly from the fact that each modeling study is implemented on only a few catchments, whereas the impact of the spatial distribution of rainfall on runoff is known to depend on catchment and event characteristics. Many studies are virtual experiments and only compare flow simulations, which makes it difficult to reach conclusions transposable to real-life case studies. Moreover, the rainfall-runoff models differ between the studies, and the parameterization strategies sometimes tend to favour either the distributed or the lumped approach.
Recently, Météo-France developed a rainfall reanalysis over the whole French territory at 1-kilometer resolution and an hourly time step over a 10-year period, combining radar data and raingauge measurements: weather radar data were corrected and adjusted with both hourly and daily raingauge data. Based on this new high-resolution product, we propose a framework to evaluate the improvements in streamflow simulation obtained by using higher-resolution rainfall information. Semi-distributed modelling is performed for different spatial resolutions of precipitation forcing: from lumped to semi-distributed simulations. Here we do not work on synthetic (simulated) streamflow, but with actual measurements, on a large set of 181 French catchments representing a variety of sizes and climates. The rainfall-runoff model is re-calibrated for each resolution of rainfall spatial distribution over a 5-year sub-period and evaluated on the complementary sub-period in validation mode. The results are analysed by catchment classes based on catchment area and for various types of rainfall events based on the spatial variability of precipitation. References: Ajami, N. K., Gupta, H. V., Wagener, T. & Sorooshian, S. (2004) Calibration of a semi-distributed hydrologic model for streamflow estimation along a river system. Journal of Hydrology 298(1-4), 112-135. Andréassian, V., Oddos, A., Michel, C., Anctil, F., Perrin, C. & Loumagne, C. (2004) Impact of spatial aggregation of inputs and parameters on the efficiency of rainfall-runoff models: A theoretical study using chimera watersheds. Water Resources Research 40(5), 1-9.
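The framework rests on degrading the 1-km rainfall product to successively coarser resolutions, down to a single lumped value per catchment. A toy block-averaging sketch (the function and its square-grid assumption are ours, not part of the study's code):

```python
def aggregate_rainfall(grid, factor):
    """Average a square 2-D rainfall grid into coarser blocks of size
    factor x factor; assumes the grid size is divisible by factor.
    factor == grid size reproduces a fully lumped (single-value) forcing."""
    n = len(grid)
    coarse = []
    for i in range(0, n, factor):
        row = []
        for j in range(0, n, factor):
            block = [grid[a][b]
                     for a in range(i, i + factor)
                     for b in range(j, j + factor)]
            row.append(sum(block) / len(block))
        coarse.append(row)
    return coarse
```

Because block averaging conserves the mean rainfall depth, differences in simulated streamflow across resolutions can be attributed to the loss of spatial pattern rather than of rainfall volume.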
Chang, Yang; Zhao, Xiao-zhuo; Wang, Cheng; Ning, Fang-gang; Zhang, Guo-an
2015-01-01
Inhalation injury is an important cause of death after thermal burns. This study was designed to simulate the velocity and temperature distribution of inhalation thermal injury in the upper airway in humans using computational fluid dynamics. Cervical computed tomography images of three Chinese adults were imported to Mimics software to produce three-dimensional models. After grids were established and boundary conditions were defined, the simulation time was set at 1 minute and the gas temperature was set to 80 to 320°C using ANSYS software (ANSYS, Canonsburg, PA) to simulate the velocity and temperature distribution of inhalation thermal injury. Cross-sections were cut at 2-mm intervals, and maximum airway temperature and velocity were recorded for each cross-section. The maximum velocity peaked in the lower part of the nasal cavity and then decreased with air flow. The velocities in the epiglottis and glottis were higher than those in the surrounding areas. Further, the maximum airway temperature decreased from the nasal cavity to the trachea. Computational fluid dynamics technology can be used to simulate the velocity and temperature distribution of inhaled heated air.
Simulation of electromagnetic ion cyclotron triggered emissions in the Earth's inner magnetosphere
NASA Astrophysics Data System (ADS)
Shoji, Masafumi; Omura, Yoshiharu
2011-05-01
In a recent observation by the Cluster spacecraft, emissions triggered by electromagnetic ion cyclotron (EMIC) waves were discovered in the inner magnetosphere. We perform hybrid simulations to reproduce the EMIC triggered emissions. We develop a self-consistent one-dimensional hybrid code with a cylindrical geometry of the background magnetic field. We assume a parabolic magnetic field to model the dipole magnetic field in the equatorial region of the inner magnetosphere. Triggering EMIC waves are driven by a left-handed polarized external current assumed at the magnetic equator in the simulation model. Cold proton, helium, and oxygen ions, which form branches of the dispersion relation of the EMIC waves, are uniformly distributed in the simulation space. Energetic protons with a loss cone distribution function are also assumed as resonant particles. We reproduce rising tone emissions in the simulation space, finding a good agreement with the nonlinear wave growth theory. In the energetic proton velocity distribution we find formation of a proton hole, which is assumed in the nonlinear wave growth theory. A substantial amount of the energetic protons are scattered into the loss cone, while some of the resonant protons are accelerated to higher pitch angles, forming a pancake velocity distribution.
USDA-ARS's Scientific Manuscript database
Modeling routines of the Integrated Farm System Model (IFSM version 4.2) and Dairy Gas Emission Model (DairyGEM version 3.2), two whole-farm simulation models developed and maintained by USDA-ARS, were revised with new components for: (1) simulation of ammonia (NH3) and greenhouse gas emissions gene...
NASA Astrophysics Data System (ADS)
Vivoni, Enrique R.; Mascaro, Giuseppe; Mniszewski, Susan; Fasel, Patricia; Springer, Everett P.; Ivanov, Valeriy Y.; Bras, Rafael L.
2011-10-01
A major challenge in the use of fully-distributed hydrologic models has been the lack of computational capabilities for high-resolution, long-term simulations in large river basins. In this study, we present the parallel model implementation and real-world hydrologic assessment of the Triangulated Irregular Network (TIN)-based Real-time Integrated Basin Simulator (tRIBS). Our parallelization approach is based on the decomposition of a complex watershed using the channel network as a directed graph. The resulting sub-basin partitioning divides effort among processors and handles hydrologic exchanges across boundaries. Through numerical experiments in a set of nested basins, we quantify parallel performance relative to serial runs for a range of processors, simulation complexities and lengths, and sub-basin partitioning methods, while accounting for inter-run variability on a parallel computing system. In contrast to serial simulations, the parallel model speed-up depends on the variability of hydrologic processes. Load balancing significantly improves parallel speed-up, with proportionally faster runs as simulation complexity (domain resolution and channel network extent) increases. The best strategy for large river basins is to combine a balanced partitioning with an extended channel network, with potential savings through a lower TIN resolution. Based on these advances, a wider range of applications for fully-distributed hydrologic models is now possible. This is illustrated through a set of ensemble forecasts that account for precipitation uncertainty derived from a statistical downscaling model.
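Treating the channel network as a directed graph, in which each sub-basin drains to exactly one downstream neighbour, is what makes both the partitioning and the boundary exchanges tractable: the upstream contribution accumulated at each node is also a natural proxy for computational load. A stand-alone sketch of upstream-area accumulation on such a graph (dict-based and illustrative, not tRIBS code):

```python
def accumulate_area(downstream, local_area):
    """Accumulate contributing area down a channel network expressed as a
    directed graph: downstream[node] is the node it drains to (None = outlet)."""
    total = dict(local_area)

    def depth(n):
        # number of hops from node n to the basin outlet
        d = 0
        while downstream[n] is not None:
            n = downstream[n]
            d += 1
        return d

    # process nodes from the headwaters toward the outlet
    for node in sorted(downstream, key=depth, reverse=True):
        if downstream[node] is not None:
            total[downstream[node]] += total[node]
    return total
```

A balanced partitioning would then group sub-basins so each processor receives a similar accumulated total while cutting as few channel edges as possible.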
Better Water Demand and Pipe Description Improve the Distribution Network Modeling Results
Distribution system modeling simplifies pipe network in skeletonization and simulates the flow and water quality by using generalized water demand patterns. While widely used, the approach has not been examined fully on how it impacts the modeling fidelity. This study intends to ...
Simulating maize yield and biomass with spatial variability of soil field capacity
USDA-ARS's Scientific Manuscript database
Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...
An approach for modelling snowcover ablation and snowmelt runoff in cold region environments
NASA Astrophysics Data System (ADS)
Dornes, Pablo Fernando
Reliable hydrological model simulations are the result of numerous complex interactions among hydrological inputs, landscape properties, and initial conditions. Determination of the effects of these factors is one of the main challenges in hydrological modelling. This situation becomes even more difficult in cold regions due to the ungauged nature of subarctic and arctic environments. This research work is an attempt to apply a new approach for modelling snowcover ablation and snowmelt runoff in complex subarctic environments with limited data while retaining integrity in the process representations. The modelling strategy is based on the incorporation of both detailed process understanding and inputs along with information gained from observations of basin-wide streamflow phenomena; essentially a combination of deductive and inductive approaches. The study was conducted in the Wolf Creek Research Basin, Yukon Territory, using three models: a small-scale physically based hydrological model, a land surface scheme, and a land surface hydrological model. The spatial representation was based on previous research studies and observations, and was accomplished by incorporating landscape units, defined according to topography and vegetation, as the spatial model elements. Comparisons between distributed and aggregated modelling approaches showed that simulations incorporating distributed initial snowcover and corrected solar radiation were able to properly simulate snowcover ablation and snowmelt runoff, whereas the aggregated modelling approaches were unable to represent the differential snowmelt rates and complex snowmelt runoff dynamics. Similarly, the inclusion of spatially distributed information in a land surface scheme clearly improved simulations of snowcover ablation.
Application of the same modelling approach at a larger scale using the same landscape based parameterisation showed satisfactory results in simulating snowcover ablation and snowmelt runoff with minimal calibration. Verification of this approach in an arctic basin illustrated that landscape based parameters are a feasible regionalisation framework for distributed and physically based models. In summary, the proposed modelling philosophy, based on the combination of an inductive and deductive reasoning, is a suitable strategy for reliable predictions of snowcover ablation and snowmelt runoff in cold regions and complex environments.
Garza, Sarah J.; Miller, Ryan S.
2015-01-01
Livestock distribution in the United States (U.S.) can only be mapped at a county-level or worse resolution. We developed a spatial microsimulation model called the Farm Location and Agricultural Production Simulator (FLAPS) that simulated the distribution and populations of individual livestock farms throughout the conterminous U.S. Using domestic pigs (Sus scrofa domesticus) as an example species, we customized iterative proportional-fitting algorithms for the hierarchical structure of the U.S. Census of Agriculture and imputed unpublished state- or county-level livestock population totals that were redacted to ensure confidentiality. We used a weighted sampling design to collect data on the presence and absence of farms and used them to develop a national-scale distribution model that predicted the distribution of individual farms at a 100 m resolution. We implemented microsimulation algorithms that simulated the populations and locations of individual farms using output from our imputed Census of Agriculture dataset and distribution model. Approximately 19% of county-level pig population totals were unpublished in the 2012 Census of Agriculture and needed to be imputed. Using aerial photography, we confirmed the presence or absence of livestock farms at 10,238 locations and found livestock farms were correlated with open areas, cropland, and roads, and also areas with cooler temperatures and gentler topography. The distribution of swine farms was highly variable, but cross-validation of our distribution model produced an area under the receiver-operating characteristics curve value of 0.78, which indicated good predictive performance. Verification analyses showed FLAPS accurately imputed and simulated Census of Agriculture data based on absolute percent difference values of < 0.01% at the state-to-national scale, 3.26% for the county-to-state scale, and 0.03% for the individual farm-to-county scale. 
Our output data have many applications for risk management of agricultural systems including epidemiological studies, food safety, biosecurity issues, emergency-response planning, and conflicts between livestock and other natural resources. PMID:26571497
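Iterative proportional fitting, used above to impute the redacted Census totals, repeatedly rescales a seed table until its row and column sums match known marginals. A minimal two-dimensional sketch (the hierarchical, multi-level version customized for the Census of Agriculture in FLAPS is more involved):

```python
def ipf(table, row_targets, col_targets, iters=50):
    """Iterative proportional fitting: rescale a 2-D seed table in place
    until its row and column sums match the target marginals."""
    for _ in range(iters):
        # match row totals
        for i, target in enumerate(row_targets):
            s = sum(table[i])
            if s > 0:
                table[i] = [v * target / s for v in table[i]]
        # match column totals
        for j, target in enumerate(col_targets):
            s = sum(row[j] for row in table)
            if s > 0:
                for row in table:
                    row[j] *= target / s
    return table
```

The same idea extends to imputing a redacted cell: any entry consistent with the published marginals can serve as a seed, and the fitting pulls it toward the unique table that honours all known totals.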
Yang, H T; Viswanathan, S; Balachandran, W; Ray, M B
2003-06-01
This paper presents the simulation and experimental results of the distribution of droplets produced by electrostatic nozzles inside a venturi scrubber. The simulation model takes into account initial liquid momentum, hydrodynamic, gravitational and electric forces, and eddy diffusion. The velocity and concentration profile of charged droplets injected from an electrostatic nozzle in the scrubber under the combined influence of hydrodynamic and electric fields were simulated. The effects of operating parameters, such as gas velocity, diameter of the scrubbing droplets, charge-to-mass ratio, and liquid-to-gas ratio on the distribution of the water droplets within the scrubber, were also investigated. The flux distribution of scrubbing liquid in the presence of electric field is improved considerably over a conventional venturi scrubber, and the effect increases with the increase in charge-to-mass ratio. Improved flux distribution using charged droplets increases the calculated overall collection efficiency of the submicron particles. However, the effect of an electric field on the droplet distribution pattern for small drop sizes in strong hydrodynamic field conditions is negligible. Simulated results are in good agreement with the experimental data obtained in the laboratory.
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2017-09-01
To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model aims to generate a set of digital phantoms of low-attenuation area (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster size) between the simulated LAA images and those computed directly on the models' output (considered as reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated compared with those calculated on the models' output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for assessing the accuracy of indexes for the radiologic quantitation of emphysema.
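The two severity measures compared above are simple to state: RA is the percentage of pixels below an attenuation threshold, and D is the (negated) slope of the LAA cluster-size cumulative distribution in log-log space. A sketch under simplifying assumptions (the -950 HU threshold and the plain least-squares fit are illustrative choices, not taken from the paper):

```python
import math

def relative_area(image, threshold=-950):
    """RA%: fraction of pixels with attenuation below the threshold (in HU)."""
    pixels = [v for row in image for v in row]
    low = sum(1 for v in pixels if v < threshold)
    return 100.0 * low / len(pixels)

def cluster_exponent(cluster_sizes):
    """Estimate the exponent D of the power-law cumulative distribution of
    LAA cluster sizes, Y(s) ~ s^-D, via least squares in log-log space."""
    sizes = sorted(cluster_sizes)
    n = len(sizes)
    xs, ys = [], []
    for rank, s in enumerate(sizes):
        xs.append(math.log(s))
        ys.append(math.log((n - rank) / n))  # fraction of clusters >= s
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope
```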
Simulated laser fluorosensor signals from subsurface chlorophyll distributions
NASA Technical Reports Server (NTRS)
Venable, D. D.; Khatun, S.; Punjabi, A.; Poole, L.
1986-01-01
A semianalytic Monte Carlo model has been used to simulate laser fluorosensor signals returned from subsurface distributions of chlorophyll. This study assumes the only constituent of the ocean medium is the common coastal zone dinoflagellate Prorocentrum minimum. The concentration is represented by Gaussian distributions in which the location of the distribution maximum and the standard deviation are variable. Most of the qualitative features observed in the fluorescence signal for total chlorophyll concentrations up to 1.0 µg/liter can be accounted for with a simple analytic solution assuming a rectangular chlorophyll distribution function.
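The Gaussian depth profile assumed for the chlorophyll concentration can be written down directly; a sketch with depth of maximum and standard deviation as the free parameters (normalizing by a total concentration is our choice for illustration, not necessarily the paper's parameterization):

```python
import math

def chlorophyll_profile(depth, c_total, z_max, sigma):
    """Gaussian subsurface chlorophyll profile: concentration at a given
    depth, with the depth of the maximum (z_max) and spread (sigma) variable;
    c_total is the depth-integrated concentration."""
    return (c_total / (sigma * math.sqrt(2.0 * math.pi))) * \
        math.exp(-((depth - z_max) ** 2) / (2.0 * sigma ** 2))
```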
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Timothy M.; Palmintier, Bryan; Suryanarayanan, Siddharth
As more Smart Grid technologies (e.g., distributed photovoltaics, spatially distributed electric vehicle charging) are integrated into distribution grids, static distribution simulations are no longer sufficient for modeling and analysis. GridLAB-D is an agent-based distribution system simulation environment that allows fine-grained end-user models, including geospatial and network topology detail. A problem exists in that, without outside intervention, once a GridLAB-D simulation begins execution, it runs to completion without allowing real-time interaction with Smart Grid controls, such as home energy management systems and aggregator control. We address this lack of runtime interaction by designing a flexible communication interface, Bus.py (pronounced bus-dot-pie), that uses Python to pass messages between one or more GridLAB-D instances and a Smart Grid simulator. This work describes the design and implementation of Bus.py, discusses its usefulness in terms of some Smart Grid scenarios, and provides an example of an aggregator-based residential demand response system interacting with GridLAB-D through Bus.py. The small-scale example demonstrates the validity of the interface and shows that an aggregator using it is able to control residential loads in GridLAB-D during runtime, reducing the peak load on the distribution system in both (a) peak-reduction and (b) time-of-use pricing cases.
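The runtime coupling that Bus.py provides can be pictured as a small publish/subscribe broker sitting between the grid simulation and its controllers: at each synchronization step the simulator publishes state, and subscribed controllers reply with setpoints. The sketch below is an illustrative stand-in, not the actual Bus.py API:

```python
class MessageBus:
    """Toy message broker: routes messages between a grid simulator and
    Smart Grid controllers (e.g., an aggregator) at each sync step."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

# An aggregator subscribes to load readings and replies with a capped setpoint.
bus = MessageBus()
setpoints = []
bus.subscribe("load_kw", lambda kw: setpoints.append(min(kw, 5.0)))  # cap at 5 kW
bus.publish("load_kw", 7.2)   # the simulator reports a 7.2 kW load reading
```

In the real system the broker additionally pauses and resumes the GridLAB-D instances so that control decisions take effect before the next simulation step.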
A distributed snow-evolution modeling system (SnowModel)
Glen E. Liston; Kelly Elder
2006-01-01
SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...
Distributed intelligent scheduling of FMS
NASA Astrophysics Data System (ADS)
Wu, Zuobao; Cheng, Yaodong; Pan, Xiaohong
1995-08-01
In this paper, a distributed scheduling approach of a flexible manufacturing system (FMS) is presented. A new class of Petri nets called networked time Petri nets (NTPN) for system modeling of networking environment is proposed. The distributed intelligent scheduling is implemented by three schedulers which combine NTPN models with expert system techniques. The simulation results are shown.
Linking laser scanning to snowpack modeling: Data processing and visualization
NASA Astrophysics Data System (ADS)
Teufelsbauer, H.
2009-07-01
SnowSim is a newly developed physical snowpack model that can use three-dimensional terrestrial laser scanning data to generate model domains. This greatly simplifies the input and numerical simulation of snow covers in complex terrains. The program can model two-dimensional cross sections of general slopes, with complicated snow distributions. The model predicts temperature distributions and snow settlements in this cross section. Thus, the model can be used for a wide range of problems in snow science and engineering, including numerical investigations of avalanche formation. The governing partial differential equations are solved by means of the finite element method, using triangular elements. All essential data for defining the boundary conditions and evaluating the simulation results are gathered by automatic weather and snow measurement sites. This work focuses on the treatment of these measurements and the simulation results, and presents a pre- and post-processing graphical user interface (GUI) programmed in Matlab.
Improved first-order uncertainty method for water-quality modeling
Melching, C.S.; Anmangandla, S.
1992-01-01
Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly computational time; for first-order analysis, mainly accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of the critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using two orders of magnitude less computer time, regardless of the probability distributions assumed for the uncertain model parameters.
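The Streeter-Phelps test case lends itself to a compact illustration of the Monte Carlo side of the comparison: sample the uncertain rate coefficients, evaluate the critical deficit, and count exceedances. The distributions and parameter values below are hypothetical, not the paper's two examples:

```python
import math
import random

def critical_deficit(kd, ka, L0, D0=0.0):
    """Critical dissolved-oxygen deficit of the Streeter-Phelps model
    (kd = deoxygenation rate, ka = reaeration rate, L0 = initial BOD)."""
    tc = math.log((ka / kd) * (1.0 - D0 * (ka - kd) / (kd * L0))) / (ka - kd)
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * tc) - math.exp(-ka * tc)) \
        + D0 * math.exp(-ka * tc)

def exceedance_prob(threshold, n=10000, seed=1):
    """Monte Carlo estimate of P(critical deficit > threshold); the lognormal
    parameter choices are illustrative assumptions."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        kd = math.exp(rng.gauss(math.log(0.3), 0.1))  # 1/day
        ka = math.exp(rng.gauss(math.log(0.7), 0.1))  # 1/day
        if ka <= kd:          # formula is singular when the rates coincide
            continue
        hits += critical_deficit(kd, ka, L0=10.0) > threshold
    return hits / n
```

The advanced first-order method replaces this brute-force sampling by relinearizing the model at each sought output level, which is why it recovers the same exceedance curve far more cheaply.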
Low Fidelity Simulation of a Zero-Y Robot
NASA Technical Reports Server (NTRS)
Sweet, Adam
2001-01-01
The item to be cleared is a low-fidelity software simulation model of a hypothetical free-flying robot designed for use in zero gravity environments. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model computes the location and orientation of the simulated robot over time. Failures (such as a broken motor) can be injected into the simulation to produce simulated behavior corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated behavior. This model does not contain any encryption software, nor can it perform any control tasks that might be export controlled.
Cold dark matter. 1: The formation of dark halos
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We use numerical simulations of critically closed cold dark matter (CDM) models to study the effects of numerical resolution on observable quantities. We study simulations with up to 256^3 particles using the particle-mesh (PM) method and with up to 144^3 particles using the adaptive particle-particle/particle-mesh (P3M) method. Comparisons of galaxy halo distributions are made among the various simulations. We also compare distributions with observations, and we explore methods for identifying halos, including a new algorithm that finds all particles within closed contours of the smoothed density field surrounding a peak. The simulated halos show more substructure than predicted by the Press-Schechter theory. We are able to rule out all Omega = 1 CDM models for linear amplitude sigma_8 ≳ 0.5 because the simulations produce too many massive halos compared with the observations. The simulations also produce too many low-mass halos. The distribution of halos characterized by their circular velocities for the P3M simulations is in reasonable agreement with the observations for 150 km/s ≤ V_circ ≤ 350 km/s.
Simulating the decentralized processes of the human immune system in a virtual anatomy model.
Sarpe, Vladimir; Jacob, Christian
2013-01-01
Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove to be challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model - such as immune cells, viruses and cytokines - interact through simulated physics in two different, compartmentalized and decentralized 3-dimensional environments namely, (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation and interaction of immune system agents between these sites. The distribution of simulated processes, that can communicate across multiple, local CPUs or through a network of machines, provides a starting point to build decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model. 
We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.
NASA Astrophysics Data System (ADS)
Dion, Lukas; Kiss, László I.; Poncsák, Sándor; Lagacé, Charles-Luc
2018-04-01
Perfluorocarbons are important contributors to the greenhouse gas inventories of aluminum production. Tetrafluoromethane and hexafluoroethane are produced in the electrolysis process when a harmful event called an anode effect occurs in the cell. This incident is strongly related to a lack of alumina and to the current distribution in the cell, and can be classified into two categories: high-voltage and low-voltage anode effects. The latter is hard to detect during the normal electrolysis process and, therefore, new tools are necessary to predict this event and minimize its occurrence. This paper discusses a new approach to modeling the alumina distribution behavior in an electrolysis cell by dividing the electrolytic bath into non-homogeneous concentration zones using discrete elements. The different mechanisms related to the alumina distribution are discussed in detail. Moreover, with a detailed electrical model, it is possible to calculate the current distribution among the different anodic assemblies. With this information, the model can evaluate whether low-voltage emissions are likely to be present under the simulated conditions. Using the simulator will help in understanding the role of the alumina distribution which, in turn, will improve cell energy consumption and stability while reducing the occurrence of high- and low-voltage anode effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Kyoo Sil; Barker, Erin; Cheng, Guang
2016-01-06
In this paper, a three-dimensional (3D) microstructure-based finite element modeling method (i.e., extrinsic modeling method) is developed, which can be used in examining the effects of porosity on the ductility/fracture of Mg castings. For this purpose, AM60 Mg tensile samples were generated under high-pressure die-casting in a specially-designed mold. Before the tensile test, the samples were CT-scanned to obtain the pore distributions within the samples. 3D microstructure-based finite element models were then developed based on the obtained actual pore distributions of the gauge area. The input properties for the matrix material were determined by fitting the simulation result to the experimental result of a selected sample, and then used for all the other samples' simulations. The results show that the ductility and fracture locations predicted from simulations agree well with the experimental results. This indicates that the developed 3D extrinsic modeling method may be used to examine the influence of various aspects of pore sizes/distributions as well as intrinsic properties (i.e., matrix properties) on the ductility/fracture of Mg castings.
Rupture Propagation for Stochastic Fault Models
NASA Astrophysics Data System (ADS)
Favreau, P.; Lavallee, D.; Archuleta, R.
2003-12-01
The inversion of strong motion data of large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, and it can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate when constrained by the presence of spatial heterogeneity? For this purpose we investigate how the underlying random variables, which control the pre-stress spatial variability, condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, respectively based on Cauchy and Gaussian random variables. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body. The friction law is the slip-dependent friction law. The simulations show that the propagation of the rupture front is more complex, incoherent or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the presence of a higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario, compared to the smoother rupture of the Gauss scenario.
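As an illustration of why the choice of random variable matters, the sketch below draws a pre-stress perturbation field from iid Gaussian versus Cauchy variables and compares peak-to-rms stress. This is only a toy: the actual study imposes a spatial correlation structure that is omitted here, and the field sizes and seed are arbitrary.

```python
import math, random

random.seed(1)

def prestress_field(n, dist):
    """Draw an n x n pre-stress perturbation field from iid random draws.

    dist: 'gauss' for Gaussian draws, 'cauchy' for heavy-tailed Cauchy
    draws (sampled by inverse CDF: tan(pi * (u - 0.5))).
    """
    if dist == 'gauss':
        draw = lambda: random.gauss(0.0, 1.0)
    else:
        draw = lambda: math.tan(math.pi * (random.random() - 0.5))
    return [[draw() for _ in range(n)] for _ in range(n)]

def peak_to_rms(field):
    """Ratio of the largest |stress| to the rms stress over the field."""
    flat = [abs(x) for row in field for x in row]
    rms = math.sqrt(sum(x * x for x in flat) / len(flat))
    return max(flat) / rms

gauss_field = prestress_field(64, 'gauss')
cauchy_field = prestress_field(64, 'cauchy')
# Cauchy's heavy tails concentrate stress into a few extreme asperities,
# so its peak-to-rms ratio is far larger than the Gaussian one:
print(peak_to_rms(gauss_field), peak_to_rms(cauchy_field))
```

The heavy-tailed field is dominated by a handful of extreme values, which is a simple statistical analogue of the larger number of strong asperities invoked above.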
The distribution of density in supersonic turbulence
NASA Astrophysics Data System (ADS)
Squire, Jonathan; Hopkins, Philip F.
2017-11-01
We propose a model for the statistics of the mass density in supersonic turbulence, which plays a crucial role in star formation and the physics of the interstellar medium (ISM). The model is derived by considering the density to be arranged as a collection of strong shocks of width ˜ M^{-2}, where M is the turbulent Mach number. With two physically motivated parameters, the model predicts all density statistics for M>1 turbulence: the density probability distribution and its intermittency (deviation from lognormality), the density variance-Mach number relation, power spectra and structure functions. For the proposed model parameters, reasonable agreement is seen between model predictions and numerical simulations, albeit within the large uncertainties associated with current simulation results. More generally, the model could provide a useful framework for more detailed analysis of future simulations and observational data. Due to the simple physical motivations for the model in terms of shocks, it is straightforward to generalize to more complex physical processes, which will be helpful in future more detailed applications to the ISM. We see good qualitative agreement between such extensions and recent simulations of non-isothermal turbulence.
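The density variance-Mach number relation mentioned above is commonly written as sigma_s^2 = ln(1 + b^2 M^2) for s = ln(rho/rho0). Below is a minimal sketch of the lognormal baseline against which the paper measures intermittency; b = 0.4 is an assumed driving parameter, not a value from the paper, and the paper's own shock-based model deviates from this lognormal form.

```python
import math

def lognormal_density_pdf(s, mach, b=0.4):
    """Lognormal PDF of s = ln(rho/rho0) with the variance-Mach relation
    sigma_s^2 = ln(1 + b^2 M^2); the mean is fixed at -sigma_s^2 / 2 so
    that the mean density equals rho0."""
    var = math.log(1.0 + b * b * mach * mach)
    mean = -0.5 * var
    return math.exp(-(s - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

# Check mass normalization: the integral of (rho/rho0) * p(s) ds over s
# should come out close to 1 (approximately 1.0 here):
mach, ds = 5.0, 0.001
mass = sum(math.exp(i * ds) * lognormal_density_pdf(i * ds, mach) * ds
           for i in range(-20000, 20000))
print(round(mass, 3))
```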
NASA Astrophysics Data System (ADS)
Nolte, C. G.; Otte, T. L.; Bowden, J. H.; Otte, M. J.
2010-12-01
There is disagreement in the regional climate modeling community as to the appropriateness of the use of interior nudging. Some investigators argue that the regional model should be minimally constrained and allowed to respond to regional-scale forcing, while others have noted that in the absence of interior nudging, significant large-scale discrepancies develop between the regional model solution and the driving coarse-scale fields. These discrepancies lead to reduced confidence in the ability of regional climate models to dynamically downscale global climate model simulations under climate change scenarios, and detract from the usability of the regional simulations for impact assessments. The advantages and limitations of interior nudging schemes for regional climate modeling are investigated in this study. Multi-year simulations using the WRF model driven by reanalysis data over the continental United States at 36-km resolution are conducted using spectral nudging, grid point nudging, and a base case without interior nudging. The means, distributions, and inter-annual variability of temperature and precipitation will be evaluated in comparison to regional analyses.
Gulf Coast megaregion evacuation traffic simulation modeling and analysis.
DOT National Transportation Integrated Search
2015-12-01
This paper describes a project to develop a micro-level traffic simulation for a megaregion. To accomplish this, a mass evacuation event was modeled using a traffic demand generation process that created a spatial and temporal distribution of dep...
Modelling the economic impact of three lameness causing diseases using herd and cow level evidence.
Ettema, Jehan; Østergaard, Søren; Kristensen, Anders Ringgaard
2010-06-01
Diseases of the cow's hoof, interdigital skin and legs are highly prevalent and of large economic impact in modern dairy farming. In order to support farmers' decisions on preventing and treating lameness and its underlying causes, decision support models can be used to predict the economic profitability of such actions. An existing approach of modelling lameness as one health disorder in a dynamic, stochastic and mechanistic simulation model has been improved in two ways. First of all, three underlying diseases causing lameness were modelled: digital dermatitis, interdigital hyperplasia and claw horn diseases. Secondly, the existing simulation model was set up in a way that it uses hyper-distributions describing the disease risk of the three lameness-causing diseases. By combining information on herd-level risk factors with the prevalence of lameness or the prevalence of underlying diseases among cows, marginal posterior probability distributions for disease prevalence in the specific herd are created in a Bayesian network. Random draws from these distributions are used by the simulation model to describe disease risk. Hereby, field data on prevalence are used systematically and the uncertainty around herd-specific risk is represented. Besides the fact that the estimated profitability of halving disease risk depended on the hyper-distributions used, the estimates differed for herds with different levels of disease risk and reproductive efficiency. (c) 2010 Elsevier B.V. All rights reserved.
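The hyper-distribution idea above can be sketched with a simple Beta posterior standing in for the paper's Bayesian-network posterior; the herd counts and the uniform Beta(1, 1) prior below are hypothetical.

```python
import random

random.seed(7)

def herd_risk_draws(cases, herd_size, n_draws, prior_a=1.0, prior_b=1.0):
    """Sample herd-specific disease risks from a Beta posterior.

    A hypothetical stand-in for the paper's Bayesian-network posterior:
    an observed prevalence (cases out of herd_size cows) updates a
    Beta(prior_a, prior_b) prior, and each simulation replicate then
    draws its own risk instead of reusing a single point estimate.
    """
    a = prior_a + cases
    b = prior_b + herd_size - cases
    return [random.betavariate(a, b) for _ in range(n_draws)]

risks = herd_risk_draws(cases=30, herd_size=200, n_draws=1000)
mean_risk = sum(risks) / len(risks)
# The posterior mean is (1 + 30) / (2 + 200), about 0.153; individual
# draws scatter around it, representing herd-specific uncertainty:
print(mean_risk)
```

Feeding such draws into each simulation replicate, rather than one fixed prevalence, is what lets the model represent uncertainty around the herd-specific risk.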
2015-01-01
Background Multiscale approaches for integrating submodels of various levels of biological organization into a single model have become a major tool of systems biology. In this paper, we constructed and simulated a set of multiscale models of spatially distributed microbial communities and studied the influence of unevenly distributed environmental factors on the genetic diversity and evolution of the community members. Results The Haploid Evolutionary Constructor software http://evol-constructor.bionet.nsc.ru/ was expanded by adding a tool for the spatial modeling of a microbial community (1D, 2D and 3D versions). A set of models of spatially distributed communities was built to demonstrate that the spatial distribution of cells affects both the intensity of selection and the rate of evolution. Conclusion In spatially heterogeneous communities, a change in the direction of the environmental flow may be reflected in local irregular population dynamics, while the genetic structure of populations (frequencies of the alleles) remains stable. Furthermore, in spatially heterogeneous communities, chemotaxis may dramatically affect the evolution of community members. PMID:25708911
Simulation and performance of brushless dc motor actuators
NASA Astrophysics Data System (ADS)
Gerba, A., Jr.
1985-12-01
The simulation model for a Brushless D.C. Motor and the associated commutation power conditioner transistor model are presented. The necessary conditions for maximum power output while operating at steady-state speed with a sinusoidally distributed air-gap flux are developed. Comparisons of the simulated model with the measured performance of a typical motor are made, both on time-response waveforms and on average performance characteristics. These preliminary results indicate good agreement. Plans for model improvement and for testing of a motor-driven positioning device for model evaluation are outlined.
Chen, W P; Tang, F T; Ju, C W
2001-08-01
To quantify the stress distribution of the foot during mid-stance to push-off in barefoot gait using 3-D finite element analysis, and to simulate the foot structure so as to facilitate later consideration of footwear. A finite element model was generated, and a loading condition simulating barefoot gait during mid-stance to push-off was used to quantify the stress distributions. A computational model can provide overall stress distributions of the foot subject to various loading conditions. A preliminary 3-D finite element foot model was generated based on the computed tomography data of a male subject, and the bone and soft tissue structures were modeled. Analysis was performed for a loading condition simulating barefoot gait during mid-stance to push-off. The peak plantar pressure ranged from 374 to 1003 kPa and the peak von Mises stress in the bone ranged from 2.12 to 6.91 MPa at different instants. The plantar pressure patterns were similar to measured results from the previous literature. The present study provides a preliminary computational model that is capable of estimating the overall plantar pressure and bone stress distributions. It can also provide quantitative analysis for normal and pathological foot motion. This model can identify areas of increased pressure and correlate the pressure with foot pathology. Potential applications can be found in the study of foot deformities, footwear, and surgical interventions. It may assist pre-treatment planning, the design of pedorthotic appliances, and prediction of the treatment effect of foot orthoses.
Explicit simulation of ice particle habits in a Numerical Weather Prediction Model
NASA Astrophysics Data System (ADS)
Hashino, Tempei
2007-05-01
This study developed a scheme for the explicit simulation of ice particle habits in Numerical Weather Prediction (NWP) models. The scheme is called the Spectral Ice Habit Prediction System (SHIPS), and the goal is to retain the growth history of ice particles in the Eulerian dynamics framework. It diagnoses characteristics of ice particles based on a series of particle property variables (PPVs) that reflect the history of microphysical processes and the transport between mass bins and air parcels in space. Therefore, the categorization of ice particles typically used in bulk microphysical parameterizations and traditional bin models is not necessary, so that errors that stem from the categorization can be avoided. SHIPS predicts polycrystals as well as hexagonal monocrystals based on empirically derived habit frequencies and growth rates, and simulates the habit-dependent aggregation and riming processes by use of the stochastic collection equation with predicted PPVs. Idealized two-dimensional simulations were performed with SHIPS in an NWP model. The predicted spatial distribution of ice particle habits and types, and the evolution of particle size distributions, showed good quantitative agreement with observations. This comprehensive model of ice particle properties, distributions, and evolution in clouds can be used to better understand problems facing a wide range of research disciplines, including microphysics processes, radiative transfer in a cloudy atmosphere, data assimilation, and weather modification.
ERIC Educational Resources Information Center
Wang, Zhen; Yao, Lihua
2013-01-01
The current study used simulated data to investigate the properties of a newly proposed method (Yao's rater model) for modeling rater severity and its distribution under different conditions. Our study examined the effects of rater severity, distributions of rater severity, the difference between item response theory (IRT) models with rater effect…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chalise, Roshan, E-mail: plasma.roshan@gmail.com; Khanal, Raju
2015-11-15
We have developed a self-consistent 1d3v (one dimension in space and three dimensions in velocity) Kinetic Trajectory Simulation (KTS) model, which can be used for modeling various situations of interest and yields results of high accuracy. Exact ion trajectories are followed to calculate along them the ion distribution function, assuming an arbitrary injection ion distribution. The electrons, on the other hand, are assumed to have a cut-off Maxwellian velocity distribution at injection, and their density distribution is obtained analytically. Starting from an initial guess, the potential profile is iterated towards the final time-independent self-consistent state. We have used it to study the plasma sheath region formed in the presence of an oblique magnetic field. Our results agree well with previous works from other models, and hence we expect our 1d3v KTS model to provide a basis for studying all types of magnetized plasmas, yielding more accurate results.
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
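The universal threshold dependence mentioned above has a closed analytic form for Gaussian (random-phase) fields. A sketch of that curve follows; the amplitude is set to 1 here, whereas in practice it depends on the power spectrum and smoothing length.

```python
import math

def gaussian_genus(nu, amplitude=1.0):
    """Genus-per-unit-volume curve for a Gaussian (random-phase) density
    field at a threshold of nu standard deviations:
        g(nu) = A * (1 - nu^2) * exp(-nu^2 / 2),
    where the amplitude A depends on the power spectrum and smoothing
    length (set to 1 in this sketch)."""
    return amplitude * (1.0 - nu * nu) * math.exp(-nu * nu / 2.0)

# Positive genus (sponge-like topology) near the median density contour,
# negative genus (isolated clusters and voids) beyond |nu| = 1:
print(gaussian_genus(0.0) > 0.0, gaussian_genus(2.0) < 0.0)  # True True
```

Comparing a measured genus curve against this symmetric form is what reveals departures such as the cellular topology of the massive neutrino models.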
Modeling the Influence of Hemispheric Transport on Trends in O3 Distributions
We describe the development and application of the hemispheric version of the CMAQ to examine the influence of long-range pollutant transport on trends in surface level O3 distributions. The WRF-CMAQ model is expanded to hemispheric scales and multi-decadal model simulations were...
NASA Astrophysics Data System (ADS)
Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish
2014-05-01
While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated, where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest; therefore, it needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first-order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture.
To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into multiple equally spaced hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to get the total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first-order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using an equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. The transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moistures from the equivalent cross-section approach are compared with the in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results were found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides great potential for the implementation of distributed hydrological models at regional scales.
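The aggregation step described above, multiplying each simulated flux by its contributing (or weighted) area and summing over the sub-basin, can be sketched as follows; all numbers are invented for illustration.

```python
def total_subbasin_flux(section_fluxes, section_areas):
    """Aggregate per-cross-section flux depths into a total sub-basin
    flux volume by area weighting, mirroring the multiply-by-area step
    described above.

    section_fluxes: flux depth per section (mm, e.g. transpiration)
    section_areas: contributing area per section (m^2)
    Returns litres, since 1 mm of flux over 1 m^2 is 1 litre.
    """
    if len(section_fluxes) != len(section_areas):
        raise ValueError("need one contributing area per cross-section")
    return sum(f * a for f, a in zip(section_fluxes, section_areas))

# Three fully distributed cross-sections vs one equivalent cross-section
# (all flux and area values invented for illustration):
reference = total_subbasin_flux([2.1, 1.8, 2.4], [1e4, 2e4, 1e4])
equivalent = total_subbasin_flux([2.0], [4e4])
print(abs(reference - equivalent) / reference < 0.05)  # True: within ~5%
```

The computational saving comes from simulating one section instead of many; the aggregation arithmetic itself is identical in both settings.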
Abbott, Lauren J.; Stevens, Mark J.
2015-12-22
In this study, a coarse-grained (CG) model is developed for the thermoresponsive polymer poly(N-isopropylacrylamide) (PNIPAM), using a hybrid top-down and bottom-up approach. Nonbonded parameters are fit to experimental thermodynamic data following the procedures of the SDK (Shinoda, DeVane, and Klein) CG force field, with minor adjustments to provide better agreement with radial distribution functions from atomistic simulations. Bonded parameters are fit to probability distributions from atomistic simulations using multi-centered Gaussian-based potentials. The temperature-dependent potentials derived for the PNIPAM CG model in this work properly capture the coil–globule transition of PNIPAM single chains and yield a chain-length dependence consistent with atomistic simulations.
Impact of baryonic physics on intrinsic alignments
Tenneti, Ananth; Gnedin, Nickolay Y.; Feng, Yu
2017-01-11
We explore the effects of specific assumptions in the subgrid models of star formation and stellar and AGN feedback on intrinsic alignments of galaxies in cosmological simulations of the "MassiveBlack-II" family. Using smaller-volume simulations, we explored the parameter space of the subgrid star formation and feedback model and found remarkable robustness of the observable statistical measures to the details of subgrid physics. The one observational probe most sensitive to modeling details is the distribution of misalignment angles. We hypothesize that the amount of angular momentum carried away by the galactic wind is the primary physical quantity that controls the orientation of the stellar distribution. Finally, our results are also consistent with a similar study by the EAGLE simulation team.
How can model comparison help improving species distribution models?
Gritti, Emmanuel Stephan; Gaucherel, Cédric; Crespo-Perez, Maria-Veronica; Chuine, Isabelle
2013-01-01
Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie in the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate relatively accurately the current distribution of the three species. The process-based model performs almost as well as the correlative model, although parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based model could however be improved by integrating a more realistic representation of the species' resistance to water stress, for instance, advocating for continued efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.
Simulation of Distributed PV Power Output in Oahu Hawaii
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lave, Matthew Samuel
2016-08-01
Distributed solar photovoltaic (PV) power generation in Oahu has grown rapidly since 2008. For applications such as determining the value of energy storage, it is important to have PV power output timeseries. Since these timeseries are not typically measured, here we produce simulated distributed PV power output for Oahu. Simulated power output is based on (a) satellite-derived solar irradiance, (b) PV permit data by neighborhood, and (c) population data by census block. Permit and population data were used to model locations of distributed PV, and irradiance data were then used to simulate power output. PV power output simulations are presented by sub-neighborhood polygons, neighborhoods, and for the whole island of Oahu. Summary plots of annual PV energy and a sample week timeseries of power output are shown, and the files containing the entire timeseries are described.
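The irradiance-to-power step can be sketched minimally as below, assuming a single permit-derived capacity figure and a flat derate for inverter and wiring losses; both numbers are invented here, not taken from the report.

```python
def simulate_pv_power(irradiance_w_m2, capacity_kw, derate=0.85):
    """Toy distributed-PV power timeseries.

    capacity_kw stands in for the permit-derived installed capacity of a
    neighborhood, and derate lumps inverter/wiring losses; both values
    are assumptions of this sketch. 1000 W/m^2 is the rating irradiance.
    """
    return [capacity_kw * derate * g / 1000.0 for g in irradiance_w_m2]

# One simulated day of irradiance (W/m^2, 3-hourly) for a hypothetical
# 100-kW neighborhood total:
irradiance = [0.0, 0.0, 150.0, 620.0, 890.0, 540.0, 80.0, 0.0]
power_kw = simulate_pv_power(irradiance, capacity_kw=100.0)
print(round(max(power_kw), 2))  # peak: 100 * 0.85 * 890 / 1000 = 75.65 kW
```

Summing such per-neighborhood timeseries, each scaled by its own installed capacity, yields the island-wide series the report describes.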
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzgerald, James W.; Hoppel, William A.; Frick, Glendon M.
1998-07-01
The dynamics of aerosols in the marine boundary layer (MBL) are simulated with the marine boundary layer aerosol model (MARBLES), a one-dimensional, multicomponent sectional aerosol model [Fitzgerald et al., this issue; Gelbard et al., this issue]. First, to illustrate how the various aerosol processes influence the particle size distribution, the model was run with one or two processes operating on the same initial size distribution. Because of current interest in the effects of cloud processing of aerosols and exchange of aerosols with the free troposphere (FT) on marine aerosol size distributions, these two processes are examined in considerable detail. The simulations show that the effect of cloud processing (a characteristic double-peaked size distribution) in the upper part of the MBL is manifested at the surface on a timescale that is much faster than changes due to exchange with the FT, assuming a typical exchange velocity of 0.6 cm s^-1. The model predicts that the FT can be a significant source of particles for the MBL in the size range of the cloud-processing minimum, between the unactivated interstitial particles and the cloud condensation nuclei (CCN) which have grown as a result of conversion of dissolved SO2 to sulfate in cloud droplets. The model was also used to simulate the evolution of the aerosol size distribution in an air mass advecting from the east coast of the United States out over the ocean for up to 10 days. The modification of a continental aerosol size distribution to one that is remote marine in character occurs on a timescale of 6-8 days. Nucleation was not observed in the base case 10-day advection simulation, which assumed rather typical meteorological conditions.
However, significant nucleation was predicted under a more favorable (albeit atypical) combination of conditions, which included significant precipitation scavenging (5 mm h^-1 of rain for 12 hours), temperatures colder by 10 °C (283 K at the surface decreasing to 278 K at 1000 m) and a high DMS flux (40 μmol m^-2 d^-1). In a test of model self-initialization, long-term (8-10 day) predictions of marine aerosol size distributions were found to be essentially independent of initial conditions. © 1998 American Geophysical Union
Multiplicative Modeling of Children's Growth and Its Statistical Properties
NASA Astrophysics Data System (ADS)
Kuninaka, Hiroto; Matsushita, Mitsugu
2014-03-01
We develop a numerical growth model that can predict the statistical properties of the height distribution of Japanese children. Our previous studies have clarified that the height distribution of schoolchildren shows a transition from the lognormal distribution to the normal distribution during puberty. In this study, we demonstrate by simulation that the transition occurs owing to the variability of the onset of puberty.
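The mechanism behind the lognormal regime can be illustrated with a toy multiplicative growth process: if each growth step multiplies height by a random factor, log-height becomes a sum of many small iid terms and the height distribution tends to lognormal. The growth-rate parameters below are invented, not the authors' calibrated values.

```python
import math, random

random.seed(0)

def grow_heights(n_children, n_steps, h0=50.0):
    """Toy multiplicative growth: each step multiplies height (cm) by a
    random factor close to 1, so log-height is a sum of iid terms and
    the resulting height distribution is approximately lognormal."""
    heights = []
    for _ in range(n_children):
        h = h0
        for _ in range(n_steps):
            h *= 1.0 + random.gauss(0.005, 0.003)  # assumed growth noise
        heights.append(h)
    return heights

heights = grow_heights(5000, 200)
logs = [math.log(h) for h in heights]
mean_log = sum(logs) / len(logs)
third_moment = sum((x - mean_log) ** 3 for x in logs) / len(logs)
# Log-heights come out nearly symmetric (tiny third central moment),
# i.e. the height distribution itself is close to lognormal:
print(abs(third_moment) < 0.01)  # True
```

A transition toward a normal height distribution, as observed during puberty, would require an additive rather than multiplicative growth component, which this sketch deliberately omits.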
Analysis and numerical simulation research of the heating process in the oven
NASA Astrophysics Data System (ADS)
Chen, Yawei; Lei, Dingyou
2016-10-01
How to use the oven to bake delicious food is a problem of great concern to both the designers and the users of the oven. To this end, this paper analyzed the heat distribution in the oven based on its basic operating principles and carried out a numerical simulation of the temperature distribution over the rack cross-section. A differential equation model of the temperature distribution in the pan during oven operation was constructed from heat radiation and heat conduction. Following the idea of using a cellular automaton to simulate the heat transfer process, ANSYS software was used to perform the numerical simulation for rectangular, round-cornered rectangular, elliptical and circular pans, giving the instantaneous temperature distribution for each pan shape. The temperature distributions of the rectangular and circular pans show that the product overcooks easily at the corners and edges of a rectangular pan but not in a round pan.
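A minimal cellular-automaton-style sketch of the heat-conduction idea (not the paper's ANSYS model) reproduces the corner effect on a rectangular pan; grid size, temperatures and step count are arbitrary.

```python
def bake_step(T, t_edge, alpha=0.2):
    """One explicit finite-difference step of 2-D heat conduction on a
    rectangular pan; the rim is held at the oven temperature t_edge.
    alpha is the dimensionless diffusion number, kept below 0.25 for
    numerical stability of the explicit scheme."""
    n, m = len(T), len(T[0])
    new = [row[:] for row in T]
    for i in range(n):
        for j in range(m):
            if i in (0, n - 1) or j in (0, m - 1):
                new[i][j] = t_edge  # heated rim of the pan
            else:
                lap = (T[i - 1][j] + T[i + 1][j] + T[i][j - 1]
                       + T[i][j + 1] - 4.0 * T[i][j])
                new[i][j] = T[i][j] + alpha * lap
    return new

n = m = 12
T = [[20.0] * m for _ in range(n)]   # pan starts at room temperature
for _ in range(40):
    T = bake_step(T, t_edge=200.0)
# A cell just inside a corner receives heat from two rim edges, so it
# runs hotter than a cell just inside the middle of an edge:
print(T[1][1] > T[1][m // 2])  # True: corners overcook first
```

In a circular pan the rim has no corners, so every near-rim cell heats at the same rate, which is the qualitative conclusion of the paper.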
NASA Technical Reports Server (NTRS)
Dum, C. T.
1990-01-01
Particle simulation experiments were used to study the basic physical ingredients needed for building a global model of foreshock wave phenomena. In particular, the generation of Langmuir waves by a gentle bump-on-tail electron distribution is analyzed. It is shown that, with appropriately designed simulation experiments, quasi-linear theory can be quantitatively verified for parameters corresponding to the electron foreshock.
Numerical Simulation of Abandoned Gob Methane Drainage through Surface Vertical Wells
Hu, Guozhong
2015-01-01
The influence of the ventilation system on an abandoned gob weakens over time, so the gas seepage characteristics in an abandoned gob are significantly different from those in a normal mining gob. To address this, this study physically simulated the movement of the overlying rock strata. A spatial distribution function for gob permeability was derived. A numerical model using FLUENT for abandoned gob methane drainage through surface wells was established, and the derived spatial distribution function for gob permeability was imported into the numerical model. The control range of surface wells, flow patterns, and distribution rules for static pressure in the abandoned gob under different well locations were determined using the calculated results from the numerical model. PMID:25955438
Cluster dynamics and cluster size distributions in systems of self-propelled particles
NASA Astrophysics Data System (ADS)
Peruani, F.; Schimansky-Geier, L.; Bär, M.
2010-12-01
Systems of self-propelled particles (SPP) interacting by a velocity alignment mechanism in the presence of noise exhibit rich clustering dynamics. Often, clusters are responsible for the distribution of (local) information in these systems. Here, we investigate the properties of individual clusters in SPP systems, in particular the asymmetric spreading behavior of clusters with respect to their direction of motion. In addition, we formulate a Smoluchowski-type kinetic model to describe the evolution of the cluster size distribution (CSD). This model predicts the emergence of steady-state CSDs in SPP systems. We test our theoretical predictions in simulations of SPP with nematic interactions and find that our simple kinetic model reproduces qualitatively the transition to aggregation observed in simulations.
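A minimal numerical sketch of such a Smoluchowski-type model (using an illustrative constant coagulation kernel and monomer-shedding fragmentation, not the authors' kernel) shows how a steady-state cluster size distribution can emerge while total mass is conserved:

```python
def smoluchowski_step(n, A=1.0, f=0.3, dt=0.001):
    """One explicit-Euler step of a truncated Smoluchowski coagulation-
    fragmentation system; n[k] is the number density of size-k clusters
    (n[0] is unused). Kernel A and fragmentation rate f are illustrative."""
    K = len(n) - 1
    dn = [0.0] * (K + 1)
    # coagulation: clusters of sizes i and j merge into one of size i + j
    for i in range(1, K + 1):
        for j in range(1, K + 1 - i):
            r = 0.5 * A * n[i] * n[j]   # 1/2 corrects ordered-pair double count
            dn[i] -= r
            dn[j] -= r
            dn[i + j] += r
    # fragmentation: a cluster of size k sheds one monomer at rate f
    for k in range(2, K + 1):
        r = f * n[k]
        dn[k] -= r
        dn[k - 1] += r
        dn[1] += r
    return [n[k] + dt * dn[k] for k in range(K + 1)]

K = 20
n = [0.0] * (K + 1)
n[1] = 1.0                               # start from monomers only
for _ in range(20000):
    n = smoluchowski_step(n)
total_mass = sum(k * n[k] for k in range(1, K + 1))  # conserved (= 1.0)
```

Each merging or shedding event moves mass between cluster sizes without creating or destroying it, which is why the distribution relaxes toward a steady state rather than draining away.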
Masters, Stacey C; Elliott, Sandi; Boyd, Sarah; Dunbar, James A
2017-10-01
There is a lack of access to simulation-based education (SBE) for professional entry students (PES) and health professionals at rural and remote locations. A descriptive study was conducted at health and education facilities in regional South Australia and south-west Victoria. Outcome measures were the number of training recipients who participated in SBE, the geographical distribution and locations where SBE was delivered, and the number of rural clinical educators providing SBE. The intervention was a distributed model to deliver SBE in rural and remote locations in collaboration with local health and community services, education providers and the general public. Face-to-face meetings with health services and education providers identified gaps in locally delivered clinical skills training and in the availability of simulation resources. Clinical leadership, professional development and community-of-practice strategies were implemented to enhance the capacity of rural clinical educators to deliver SBE. The number of SBE participants and training hours delivered exceeded targets. The distributed model enabled access to regular, localised training for PES and health professionals, minimising the travel and staff backfill costs incurred when attending regional centres. The skills acquired by local educators remain in rural areas to support future training. The distributed collaborative model substantially increased access to clinical skills training for PES and health professionals in rural and remote locations. Developing the teaching skills of rural clinicians optimised the use of simulation resources. Consequently, health services were able to provide students with flexible and realistic learning opportunities in clinical procedures, communication techniques and teamwork skills. © 2017 National Rural Health Alliance Inc.
A GCM simulation of the earth-atmosphere radiation balance for winter and summer
NASA Technical Reports Server (NTRS)
Wu, M. L. C.
1979-01-01
The radiation balance of the earth-atmosphere system, simulated using the general circulation model (GCM) of the Goddard Laboratory for Atmospheric Sciences (GLAS), is examined with regard to its geographical distribution, zonally averaged distribution, and global mean. Most of the main features of the radiation balance at the top of the atmosphere are reasonably simulated, with some differences in the detailed structure of the patterns and intensities for both summer and winter compared with values derived from Nimbus and NOAA (National Oceanic and Atmospheric Administration) satellite observations. Both the capabilities and the defects of the model are discussed.
Modeling flash floods in southern France for road management purposes
NASA Astrophysics Data System (ADS)
Vincendon, Béatrice; Édouard, Simon; Dewaele, Hélène; Ducrocq, Véronique; Lespinas, Franck; Delrieu, Guy; Anquetin, Sandrine
2016-10-01
Flash floods are among the most devastating hazards in the Mediterranean region. A major share of the damage and casualties caused by flooding is related to road submersion. Distributed hydrological nowcasting can be used to monitor road flooding; this requires rainfall-runoff simulations at high spatial and temporal resolution. Distributed hydrological models, such as the ISBA-TOP coupled system used in this study, are designed to simulate discharges for any cross-section of a river, but they are generally calibrated for certain outlets and give degraded results at sub-catchment outlets. The paper first analyses ISBA-TOP discharge simulations in the French Mediterranean region for target points other than the outlets used for calibration. The sensitivity of the model to its governing factors is examined to assess the validity of results obtained for ungauged river sections compared with those obtained for the main gauged outlets. The use of improved model inputs is found to benefit sub-catchment simulations. The calibration procedure, however, provides parameter values for the main outlets only, and these choices influence the simulations for ungauged catchments or sub-catchments. As a result, a new version of the ISBA-TOP system without any parameter to calibrate is used to produce diagnostics relevant to quantifying the risk of road submersion. A first diagnostic is the simulated spatial distribution of runoff, which provides useful information about areas with a high risk of submersion. An indicator of flood severity is then given by simulated discharges expressed with respect to return periods. The latter has to be used together with information about the vulnerability of road-river cross-sections.
Software Comparison for Renewable Energy Deployment in a Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian
The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, two open-source packages, have the capability to simulate networks with fluctuating data values; they allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through its 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are command-line programs, which increases the time necessary to become familiar with them.
ASSESSING ASTROPHYSICAL UNCERTAINTIES IN DIRECT DETECTION WITH GALAXY SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sloane, Jonathan D.; Buckley, Matthew R.; Brooks, Alyson M.
2016-11-01
We study the local dark matter velocity distribution in simulated Milky Way-mass galaxies, generated at high resolution with both dark matter and baryons. We find that the dark matter in the solar neighborhood is influenced appreciably by the inclusion of baryons, which increases the speed of dark matter particles compared to dark matter-only simulations. The gravitational potential of the baryonic disk increases the amount of high-velocity dark matter, resulting in velocity distributions that are more similar to the Maxwellian Standard Halo Model than predicted by dark matter-only simulations. Furthermore, the velocity structures present in baryonic simulations possess a greater diversity than expected from dark matter-only simulations. We show the impact of using our simulated velocity distributions on the direct detection experiments LUX, DAMA/Libra, and CoGeNT, and explore how resolution and halo mass within the Milky Way's estimated mass range affect the results. A Maxwellian fit to the velocity distribution tends to overpredict the amount of dark matter in the high-velocity tail, even with baryons, and thus leads to overly optimistic direct detection bounds on models that depend on this region of phase space for an experimental signal. Our work further demonstrates that it is critical to transform simulated velocity distributions to the lab frame of reference, because velocity structure in the solar neighborhood appears when baryons are included; even then, the importance of this structure is less apparent in the Galactic frame of reference than in the Earth frame.
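The frame transformation emphasized here can be sketched numerically. In this sketch the SHM-style parameters (v0 = 220 km/s, escape speed 544 km/s) and the Earth velocity of 232 km/s are common illustrative assumptions, not values from the paper: boosting Galactic-frame velocities into the Earth frame shifts the speed distribution toward higher speeds.

```python
import math
import random

def sample_galactic(n=20000, v0=220.0, vesc=544.0, seed=2):
    """Draw Maxwellian (Standard Halo Model-like) velocities in the
    Galactic frame, truncated at the escape speed. v0 and vesc (km/s)
    are common SHM values used here as illustrative assumptions."""
    random.seed(seed)
    sigma = v0 / math.sqrt(2.0)          # 1-D dispersion of the Maxwellian
    out = []
    while len(out) < n:
        v = [random.gauss(0.0, sigma) for _ in range(3)]
        if math.sqrt(sum(c * c for c in v)) < vesc:
            out.append(v)
    return out

def to_earth_frame(vels, v_earth=(0.0, 232.0, 0.0)):
    """Galilean boost v_lab = v_gal - v_earth (illustrative Earth velocity)."""
    return [[c - e for c, e in zip(v, v_earth)] for v in vels]

speed = lambda v: math.sqrt(sum(c * c for c in v))
gal = sample_galactic()
lab = to_earth_frame(gal)
mean_gal = sum(map(speed, gal)) / len(gal)
mean_lab = sum(map(speed, lab)) / len(lab)
# the boost shifts the speed distribution toward higher speeds
```

This is why the high-velocity tail, to which direct detection experiments are most sensitive, looks quite different in the Earth frame than in the Galactic frame.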
2008-01-01
This report documents the computer program INFIL3.0, which is a grid-based, distributed-parameter, deterministic water-balance watershed model that calculates the temporal and spatial distribution of daily net infiltration of water across the lower boundary of the root zone. The bottom of the root zone is the estimated maximum depth below ground surface affected by evapotranspiration. In many field applications, net infiltration below the bottom of the root zone can be assumed to equal net recharge to an underlying water-table aquifer. The daily water balance simulated by INFIL3.0 includes precipitation as either rain or snow; snowfall accumulation, sublimation, and snowmelt; infiltration into the root zone; evapotranspiration from the root zone; drainage and water-content redistribution within the root-zone profile; surface-water runoff from, and run-on to, adjacent grid cells; and net infiltration across the bottom of the root zone. The water-balance model uses daily climate records of precipitation and air temperature and a spatially distributed representation of drainage-basin characteristics defined by topography, geology, soils, and vegetation to simulate daily net infiltration at all locations, including stream channels with intermittent streamflow in response to runoff from rain and snowmelt. The model does not simulate streamflow originating as ground-water discharge. Drainage-basin characteristics are represented in the model by a set of spatially distributed input variables uniquely assigned to each grid cell of a model grid. The report provides a description of the conceptual model of net infiltration on which the INFIL3.0 computer code is based and a detailed discussion of the methods by which INFIL3.0 simulates the net-infiltration process. 
The report also includes instructions for preparing input files necessary for an INFIL3.0 simulation, a description of the output files that are created as part of an INFIL3.0 simulation, and a sample problem that illustrates application of the code to a field setting. Brief descriptions of the main program routine and of each of the modules and subroutines of the INFIL3.0 code, as well as definitions of the variables used in each subroutine, are provided in an appendix.
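The daily water balance described above can be caricatured in a few lines. The following is a hypothetical toy grid-cell model in the spirit of INFIL3.0, not its actual algorithm; all parameter names and values (storage capacity, field capacity, degree-day factor, potential ET) are illustrative.

```python
def daily_step(state, precip, temp, cap=150.0, fc=100.0, pet=4.0, ddf=3.0):
    """One day of a toy root-zone water balance (all quantities in mm):
    rain/snow partitioning, degree-day snowmelt, infiltration-excess
    runoff, evapotranspiration, and net infiltration past the root zone."""
    snow, soil = state["snow"], state["soil"]
    rain = precip if temp > 0.0 else 0.0
    snow += precip - rain                     # sub-freezing precip accumulates
    melt = min(snow, max(0.0, ddf * temp))    # degree-day snowmelt
    snow -= melt
    supply = rain + melt
    runoff = max(0.0, supply - (cap - soil))  # excess over root-zone capacity
    soil = min(cap, soil + supply)
    et = min(soil, pet)                       # ET limited by available storage
    soil -= et
    net_infil = max(0.0, soil - fc)           # drainage below the root zone
    soil -= net_infil
    return {"snow": snow, "soil": soil}, runoff, et, net_infil

state = {"snow": 0.0, "soil": 90.0}
new_state, runoff, et, net_infil = daily_step(state, precip=60.0, temp=10.0)
```

Every term balances: precipitation plus initial storage equals final storage plus runoff, ET, and net infiltration, mirroring the grid-cell accounting a model like INFIL3.0 performs each day.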
Simulation study on electric field intensity above train roof
NASA Astrophysics Data System (ADS)
Fan, Yizhe; Li, Huawei; Yang, Shasha
2018-04-01
To accurately understand the distribution of the electric field in the space above the train roof and to select a reasonable installation position for the detection device, this paper establishes a 3D pantograph-catenary model using SolidWorks software and simulates the spatial electric field distribution of the model with COMSOL software. Based on the analysis of the electric field intensity within the 0.4 m space above the train roof, a reasonable installation position for the detection device is proposed.
Airport Simulations Using Distributed Computational Resources
NASA Technical Reports Server (NTRS)
McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Daniel (Technical Monitor)
2002-01-01
The Virtual National Airspace Simulation (VNAS) will improve the safety of air transportation. In 2001, using simulation and information management software running over a distributed network of supercomputers, researchers at NASA Ames, Glenn, and Langley Research Centers developed a working prototype of a virtual airspace. This VNAS prototype modeled daily operations of the Atlanta airport by integrating measured operational data and simulation data on up to 2,000 flights a day. The concepts and architecture developed by NASA for this prototype are integral to the National Airspace Simulation, which will support the development of strategies for improving aviation safety and identifying precursors to component failure.
Precipitation From a Multiyear Database of Convection-Allowing WRF Simulations
NASA Astrophysics Data System (ADS)
Goines, D. C.; Kennedy, A. D.
2018-03-01
Convection-allowing models (CAMs) have become frequently used for operational forecasting and, more recently, have been utilized for general circulation model downscaling. CAM forecasts have typically been analyzed for a few case studies or over short time periods, but this limits the ability to judge the overall skill of deterministic simulations. Analysis over long time periods can yield a better understanding of systematic model error. Four years (April-August, 2010-2013) of warm-season simulated precipitation have been accumulated from two Weather Research and Forecasting (WRF) models with 4 km grid spacing. The simulations were provided by the National Centers for Environmental Prediction (NCEP) and the National Severe Storms Laboratory (NSSL), each with different dynamic cores and parameterization schemes. These simulations are evaluated against the NCEP Stage-IV precipitation data set with a similar 4 km grid spacing. The spatial distribution and diurnal cycle of precipitation in the central United States are analyzed using Hovmöller diagrams, grid-point correlations, and traditional verification skill scores such as the Equitable Threat Score (ETS). Although NCEP-WRF had a high positive error in total precipitation, its spatial characteristics were similar to observations; for example, the spatial distribution of NCEP-WRF precipitation correlated better than NSSL-WRF for the Northern Plains. Hovmöller results exposed a delay in the initiation and decay of diurnal precipitation by NCEP-WRF, while both models had difficulty reproducing the timing and location of propagating precipitation. ETS was highest for NSSL-WRF in all domains at all times, and was also higher in areas of propagating precipitation than in areas of unorganized, scattered diurnal precipitation. Monthly analysis identified distinct differences between the two models in their ability to correctly simulate the spatial distribution and zonal motion of precipitation through the warm season.
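For reference, the Equitable Threat Score used in such verification is the threat score corrected for hits expected by chance. A short sketch of the standard contingency-table definition (not tied to this study's code):

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert skill score) from a 2x2 contingency table.
    1 = perfect forecast; 0 = no skill beyond random chance."""
    total = hits + misses + false_alarms + correct_negatives
    # hits expected from a random forecast with the same event frequencies
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# e.g. 30 hits, 10 misses, 20 false alarms, 40 correct negatives
ets = equitable_threat_score(30, 10, 20, 40)  # -> 0.25
```

The chance correction is what lets ETS compare skill fairly between regimes with very different precipitation frequencies, such as propagating versus scattered diurnal precipitation.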
NASA Astrophysics Data System (ADS)
KIM, J.; Smith, M. B.; Koren, V.; Salas, F.; Cui, Z.; Johnson, D.
2017-12-01
The National Oceanic and Atmospheric Administration (NOAA)-National Weather Service (NWS) developed the Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM) framework as an initial step towards spatially distributed modeling at River Forecast Centers (RFCs). Recently, the NOAA/NWS worked with the National Center for Atmospheric Research (NCAR) to implement the National Water Model (NWM) for nationally consistent water resources prediction. The NWM is based on the WRF-Hydro framework and is run at a 1 km spatial resolution and 1-hour time step over the contiguous United States (CONUS) and contributing areas in Canada and Mexico. In this study, we compare streamflow simulations from HL-RDHM and WRF-Hydro to observations from 279 USGS stations. For streamflow simulations, HL-RDHM is run on 4 km grids with a temporal resolution of 1 hour for a 5-year period (Water Years 2008-2012), using a priori parameters provided by NOAA-NWS. The WRF-Hydro streamflow simulations for the same time period are extracted from NCAR's 23-year retrospective run of the NWM (version 1.0) over CONUS on 1 km grids. We choose 279 USGS stations, in the domains of six different RFCs, that are relatively unaffected by dams or reservoirs. We use daily average values of simulations and observations for ease of comparison. The main purpose of this research is to evaluate how HL-RDHM and WRF-Hydro perform at USGS gauge stations. We compare daily time series of observations and both sets of simulations, and calculate error values using a variety of error functions. Using these plots and error values, we evaluate the performance of the HL-RDHM and WRF-Hydro models. Our results show a mix of model performance across geographic regions.
The Future of Drought in the Southeastern U.S.: Projections from downscaled CMIP5 models
NASA Astrophysics Data System (ADS)
Keellings, D.; Engstrom, J.
2017-12-01
The Southeastern U.S. has been repeatedly impacted by severe droughts that have affected the environment and economy of the region. In this study, the ability of 32 downscaled CMIP5 models, bias corrected using localized constructed analogs (LOCA), to simulate historical observations of dry spells from 1950-2005 is assessed using Perkins skill scores and significance tests. The models generally simulate the distribution of dry days well, but there are significant differences between the best and worst performing models, particularly in the upper tail of the distribution. The best and worst performing models are then projected through 2099, using RCP 4.5 and 8.5, and estimates of 20-year return periods are compared. Only the higher-skill models provide a good estimate of extreme dry-spell lengths, with simulations of 20-year return values within ±5 days of observed values across the region. Projected return values differ by model grouping, but all models exhibit significant increases.
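The Perkins skill score used for this kind of assessment measures the overlap of two empirical probability distributions: it sums, over bins, the minimum of the two binned relative frequencies. A minimal sketch (bin edges and data are illustrative):

```python
def perkins_skill_score(model, obs, edges):
    """Perkins skill score: sum over bins of the minimum of the two
    binned relative frequencies. 1 = identical distributions, 0 = disjoint."""
    def rel_freq(data):
        counts = [0] * (len(edges) - 1)
        for x in data:
            for b in range(len(edges) - 1):
                if edges[b] <= x < edges[b + 1]:
                    counts[b] += 1
                    break
        return [c / len(data) for c in counts]
    return sum(min(m, o) for m, o in zip(rel_freq(model), rel_freq(obs)))

# identical samples overlap perfectly; disjoint samples not at all
pss_same = perkins_skill_score([1, 2, 3, 3], [1, 2, 3, 3], [0, 2, 4])
pss_disjoint = perkins_skill_score([1, 1], [3, 3], [0, 2, 4])
```

Because the score is computed bin by bin, it penalizes a model that matches the mean but misses the tails, which is exactly the upper-tail behavior separating the best and worst models here.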
NASA Astrophysics Data System (ADS)
Cappelli, Mark; Young, Christopher
2016-10-01
We present continued efforts towards introducing physical models for cross-magnetic-field electron transport into Hall thruster discharge simulations. In particular, we seek to evaluate whether such models accurately capture ion dynamics, both time-averaged and time-resolved, through comparisons with measured ion velocity distributions, which are now becoming available for several devices. Here, we describe a turbulent electron transport model that is integrated into 2-D hybrid fluid/PIC simulations of a 72 mm diameter laboratory thruster operating at 400 W. We also compare this model's predictions with one recently proposed by Lafleur et al. Introducing these models into 2-D hybrid simulations is relatively straightforward and leverages the existing framework for solving the electron fluid equations. The models are tested for their ability to capture the time-averaged experimental discharge current and its fluctuations due to ionization instabilities. Model predictions are also more rigorously evaluated against recent laser-induced fluorescence measurements of time-resolved ion velocity distributions.
Simulation of Groundwater Flow in the Coastal Plain Aquifer System of Virginia
Heywood, Charles E.; Pope, Jason P.
2009-01-01
The groundwater model documented in this report simulates the transient evolution of water levels in the aquifers and confining units of the Virginia Coastal Plain and adjacent portions of Maryland and North Carolina since 1890. Groundwater withdrawals have lowered water levels in Virginia Coastal Plain aquifers and have resulted in drawdown in the Potomac aquifer exceeding 200 feet in some areas. The discovery of the Chesapeake Bay impact crater and a revised conceptualization of the Potomac aquifer are two major changes to the hydrogeologic framework that have been incorporated into the groundwater model. The spatial scale of the model was selected on the basis of the primary function of the model of assessing the regional water-level responses of the confined aquifers beneath the Coastal Plain. The local horizontal groundwater flow through the surficial aquifer is not intended to be accurately simulated. Representation of recharge, evapotranspiration, and interaction with surface-water features, such as major rivers, lakes, the Chesapeake Bay, and the Atlantic Ocean, enable simulation of shallow flow-system details that influence locations of recharge to and discharge from the deeper confined flow system. The increased density of groundwater associated with the transition from fresh to salty groundwater near the Atlantic Ocean affects regional groundwater flow and was simulated with the Variable Density Flow Process of SEAWAT (a U.S. Geological Survey program for simulation of three-dimensional variable-density groundwater flow and transport). The groundwater density distribution was generated by a separate 108,000-year simulation of Pleistocene freshwater flushing around the Chesapeake Bay impact crater during transient sea-level changes. Specified-flux boundaries simulate increasing groundwater underflow out of the model domain into Maryland and minor underflow from the Piedmont Province into the model domain. 
Reported withdrawals accounted for approximately 75 percent of the total groundwater withdrawn from Coastal Plain aquifers during the year 2000. Unreported self-supplied withdrawals were simulated in the groundwater model by specifying their probable locations, magnitudes, and aquifer assignments on the basis of a separate study of domestic-well characteristics in Virginia. The groundwater flow model was calibrated to 7,183 historic water-level observations from 497 observation wells with the parameter-estimation codes UCODE-2005 and PEST. Most water-level observations were from the Potomac aquifer system, which permitted a more complex spatial distribution of simulated hydraulic conductivity within the Potomac aquifer than was possible for other aquifers. Zone, function, and pilot-point approaches were used to distribute assigned hydraulic properties within the aquifer system. The good fit (root mean square error = 3.6 feet) of simulated to observed water levels and reasonableness of the estimated parameter values indicate the model is a good representation of the physical groundwater flow system. The magnitudes and temporal and spatial distributions of residuals indicate no appreciable model bias. The model is intended to be useful for predicting changes in regional groundwater levels in the confined aquifer system in response to future pumping. Because the transient release of water stored in low-permeability confining units is simulated, drawdowns resulting from simulated pumping stresses may change substantially through time before reaching steady state. Consequently, transient simulations of water levels at different future times will be more accurate than a steady-state simulation for evaluating probable future aquifer-system responses to proposed pumping.
NASA Astrophysics Data System (ADS)
Zhang, Chuanwei; Zhang, Dongsheng; Wen, Jianping
2018-02-01
To coordinately control the torque distribution of an existing two-wheel independent-drive electric vehicle and improve the energy efficiency and control stability of the whole vehicle, control strategies based on fuzzy control were designed, with direct yaw-moment control as the main approach. To simulate torque coordination for the two-wheel independent-drive vehicle, vehicle, motor, and tire models were built, including a 7-DOF vehicle dynamics model, motion equations, and torque equations. Finally, the feasibility of the drive control strategy was verified on a CarSim-Simulink joint simulation platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
Hydrogen bonds in concreto and in computro: the sequel
NASA Astrophysics Data System (ADS)
Stouten, Pieter F. W.; Van Eijck, Bouke P.; Kroon, Jan
1991-02-01
In the framework of our comparative research concerning hydrogen bonding in the crystalline and liquid phases, we have carried out molecular dynamics (MD) simulations of liquid methanol. Six different rigid three-site models are compared. Five of them had been reported in the literature, and one (OM2) we developed by fitting to the experimental molar volume, heat of vaporization, and neutron-weighted radial distribution function. In general the agreement with experiment is satisfactory for the different models. None of the models has an explicit hydrogen-bond potential, but five of the six models show a degree of hydrogen bonding comparable to experiments on liquid methanol. Analysis of the simulated hydrogen bonds indicates that there is a distinct preference for the O⋯O axis to lie in the acceptor lone-pair plane, but hardly any for the lone-pair directions. Ab initio calculations and crystal-structure statistics of OH⋯O hydrogen bonds agree with this observation. The O⋯O hydrogen-bond length distributions are similar for most models. The crystal structures show a sharper O⋯O distribution. Explicit introduction of harmonic motion with a quite realistic root-mean-square amplitude of 0.08 Å into the thermally averaged crystal distribution results in a distribution comparable to OM2, although the maximum of the former is found at shorter distance. On the basis of the analysis of the static properties of all models, we conclude that our OM2, Jorgensen's OPLS, and Haughney, Ferrario and McDonald's HFM1 models are good candidates for simulations of liquid methanol under isothermal, isochoric conditions. Partly flexible and completely rigid OM2 are simulated at constant pressure and at fixed volume. The flexible simulations give essentially the same (correct) results under both conditions, which is not surprising because the flexible form was fitted under both conditions.
Rigid OM2 has a similar potential energy but larger pressure in the isochoric case, and larger energy and far larger volume in the isobaric case. Radial distribution functions and hydrogen-bond geometries are very similar for all four cases. Only in the case of the isobaric rigid methanol does the volume expansion seem to be accompanied by a slight preference for tetrahedrality around the oxygen atom.
NASA Astrophysics Data System (ADS)
Williams, J. E.; van der Swaluw, E.; de Vries, W. J.; Sauter, F. J.; van Pul, W. A. J.; Hoogerbrugge, R.
2015-08-01
We present a parameterization developed to simulate ammonium particle (NH4+) concentrations in the Operational Priority Substances (OPS) source-receptor model without the need for a detailed chemical scheme. By using the ratios of the main precursor gases SO2, NO2 and NH3, together with calculations performed using a chemical box model, we show that the parameterization can simulate annual mean NH4+ concentration fields to within ∼15% of measured values at locations throughout the Netherlands. Performing simulations for different decades, we find a strong correlation between simulated NH4+ distributions for the past (1993-1995) and present (2009-2012) time periods. Although the total concentration of NH4+ has decreased over the period, we find that the fraction of NH4+ transported into the Netherlands has increased from around 40% in the past to 50% at present. This is due to the variable efficiency of mitigation practices across economic sectors. Performing simulations for the year 2020 using associated emission estimates, we show that concentrations generally decrease by ∼8-25% compared to present-day values. By altering the meteorological fields applied in the future simulations, we show that a significant uncertainty of between ∼50 and 100% exists in this estimated NH4+ distribution as a result of variability in the temperature-dependent emission terms and relative humidity. Therefore, any projections of future NH4+ distributions should be performed using well-chosen meteorological fields representing recent meteorological situations.
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components. The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the addition of nitrogen (N) and sediment modeling compo...
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality (H/WQ) simulation components under the Object Modeling System (OMS3) environmental modeling framework. AgES-W has recently been enhanced with the addition of nitrogen (N) a...
Characteristics of white LED transmission through a smoke screen
NASA Astrophysics Data System (ADS)
Zheng, Yunfei; Yang, Aiying; Feng, Lihui; Guo, Peng
2018-01-01
The transmission characteristics of a white LED through a smoke screen are critical for visible light communication in smoke. Based on Mie scattering theory, a Monte Carlo transmission model is established. A white-LED sampling model is built from the probability density function, according to the measured spectrum of a white LED and the angular distribution of the Lambertian model, and a sampling model for the smoke-particle diameter is likewise established from its measured distribution. We numerically simulate the influence of smoke thickness, smoke concentration, and the LED irradiance angle on the transmittance of the white LED. We also construct an experimental system for white-LED transmission through smoke. The measured relationship between light transmittance and smoke concentration agrees with the simulated result, demonstrating the validity of the simulation model for the visible-light transmission channel through a smoke screen.
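The Lambertian sampling step of such a Monte Carlo model can be sketched with inverse-CDF sampling of the emission polar angle. This uses the generic generalized-Lambertian formula, not the paper's implementation, and the order m is an illustrative parameter:

```python
import math
import random

def sample_lambertian_theta(n=50000, m=1, seed=3):
    """Inverse-CDF sampling of the emission polar angle for a generalized
    Lambertian source of order m (m = 1 is an ideal Lambertian LED).
    pdf(theta) ~ cos^m(theta)*sin(theta), so CDF = 1 - cos^(m+1)(theta),
    and inverting gives cos(theta) = (1 - u)^(1/(m+1)) for uniform u."""
    random.seed(seed)
    return [math.acos((1.0 - random.random()) ** (1.0 / (m + 1)))
            for _ in range(n)]

thetas = sample_lambertian_theta()
mean_theta = sum(thetas) / len(thetas)   # analytically pi/4 for m = 1
```

Each sampled angle would seed one photon packet, which the Monte Carlo model then propagates through scattering events drawn from the Mie phase function and the particle-diameter distribution.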
NASA Astrophysics Data System (ADS)
Robles-Morua, A.; Vivoni, E. R.; Volo, T. J.; Rivera, E. R.; Dominguez, F.; Meixner, T.
2011-12-01
This project is part of a multidisciplinary effort aimed at understanding the impacts of climate variability and change on the ecological services provided by riparian ecosystems in semiarid watersheds of the southwestern United States. Valuing the environmental and recreational services provided by these ecosystems in the future requires a numerical simulation approach to estimate streamflow in ungauged tributaries as well as diffuse and direct recharge to groundwater basins. In this work, we utilize a distributed hydrologic model known as the TIN-based Real-time Integrated Basin Simulator (tRIBS) in the upper Santa Cruz and San Pedro basins with the goal of generating simulated hydrological fields that will be coupled to a riparian groundwater model. With the distributed model, we will evaluate a set of climate change and population scenarios to quantify future conditions in these two river systems and their impacts on flood peaks, recharge events and low flows. Here, we present a model confidence-building exercise based on high performance computing (HPC) runs of the tRIBS model in both basins during the period of 1990-2000. Distributed model simulations utilize best-available data across the US-Mexico border on topography, land cover and soils obtained from analysis of remotely-sensed imagery and government databases. Meteorological forcing over the historical period is obtained from a combination of sparse ground networks and weather radar rainfall estimates. We then focus on a comparison between simulation runs using ground-based forcing and cases where the Weather Research and Forecasting (WRF) model is used to specify the historical conditions. Two spatial resolutions are considered from the WRF model fields - a coarse (35-km) and a downscaled (10-km) forcing. Comparisons will focus on the distribution of precipitation, soil moisture, runoff generation, and recharge, and will assess the value of the WRF coarse and downscaled products.
These results provide confidence in the model application and a measure of modeling uncertainty that will help set the foundation for forthcoming climate change studies.
Wen J. Wang; Hong S. He; Frank R. Thompson; Martin A. Spetich; Jacob S. Fraser
2018-01-01
Demographic processes (fecundity, dispersal, colonization, growth, and mortality) and their interactions with environmental changes are not well represented in current climate-distribution models (e.g., niche and biophysical process models) and constitute a large uncertainty in projections of future tree species distribution shifts. We investigate how species biological...
Load index model: An advanced tool to support decision making during mass-casualty incidents.
Adini, Bruria; Aharonson-Daniel, Limor; Israeli, Avi
2015-03-01
In mass-casualty events, accessing information concerning hospital congestion levels is crucial to improving patient distribution and optimizing care. The study aimed to develop a decision support tool for distributing casualties to hospitals in an emergency scenario involving multiple casualties. A comprehensive literature review and structured interviews with 20 content experts produced a shortlist of relevant criteria for inclusion in the model. A "load index model" was prepared, incorporating results of a modified Delphi survey of 100 emergency response experts. The model was tested in three simulation exercises in which an emergency scenario was presented to six groups of senior emergency managers. Information was provided regarding capacities of 11 simulated admitting hospitals in the region, and evacuation destinations were requested for 600 simulated casualties. Of the three simulation rounds, two were performed without the model and one after its presentation. Following simulation experiments and implementation during a real-life security threat, the efficacy of the model was assessed. Variability between experts concerning casualties' evacuation destinations decreased significantly following the model's introduction. Most responders (92%) supported the need for standardized data, and 85% found that the model improved policy setting regarding casualty evacuation in an emergency situation. These findings were reaffirmed in a real-life emergency scenario. The proposed model improved capacity to ensure evacuation of patients to less congested medical facilities in emergency situations, thereby enhancing lifesaving medical services. The model supported decision-making processes in both simulation exercises and an actual emergency situation.
NASA Astrophysics Data System (ADS)
Li, W.; Su, Y.; Harmon, T. C.; Guo, Q.
2013-12-01
Light Detection and Ranging (lidar) is an optical remote sensing technology that measures properties of scattered light to find the range and/or other information about a distant object. Due to its ability to generate 3-dimensional data with high spatial resolution and accuracy, lidar technology is being increasingly used in ecology, geography, geology, geomorphology, seismology, remote sensing, and atmospheric physics. In this study we construct a 3-dimensional (3D) radiative transfer model (RTM) using lidar data to simulate the spatial distribution of solar radiation (direct and diffuse) on the surface of water and mountain forests. The model includes three sub-models: a light model simulating the light source, a sensor model simulating the camera, and a scene model simulating the landscape. We use ground-based and airborne lidar data to characterize the 3D structure of the study area, and generate a detailed 3D scene model. The interactions between light and object are simulated using the Monte Carlo Ray Tracing (MCRT) method. A large number of rays are generated from the light source. For each individual ray, the full traveling path is traced until it is absorbed or escapes from the scene boundary. By locating the sensor at different positions and directions, we can simulate the spatial distribution of solar energy at the ground, vegetation and water surfaces. These outputs can then be incorporated into meteorological drivers for hydrologic and energy balance models to improve our understanding of hydrologic processes and ecosystem functions.
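The Monte Carlo ray tracing idea can be illustrated with a deliberately simplified sketch: vertical solar rays, a flat ground grid, and a single circular crown standing in for the lidar-derived 3D scene. All geometry and parameter names here are hypothetical.

```python
import random

def ground_irradiance(nx=20, ny=20, n_rays=40000, crown=(10.0, 10.0, 4.0),
                      crown_transmit=0.2, seed=7):
    """Toy Monte Carlo ray tracer: vertical solar rays onto an nx-by-ny plot.

    A single circular 'tree crown' (center x, center y, radius) intercepts
    rays with probability (1 - crown_transmit); surviving rays deposit one
    unit of energy in the ground cell they hit.
    """
    rng = random.Random(seed)
    tally = [[0] * ny for _ in range(nx)]
    cx, cy, r = crown
    for _ in range(n_rays):
        x, y = rng.uniform(0, nx), rng.uniform(0, ny)   # ray entry point
        shaded = (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2
        if shaded and rng.random() > crown_transmit:
            continue                                    # absorbed in the crown
        # clamp guards against the rare x == nx edge case
        tally[min(int(x), nx - 1)][min(int(y), ny - 1)] += 1
    return tally
```

Cells under the crown accumulate far less energy than open ground, which is the shadow pattern a full MCRT scene model resolves at much higher fidelity.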
NASA Astrophysics Data System (ADS)
Yousef, Adel K. M.; Taha, Ziad. A.; Shehab, Abeer A.
2011-01-01
This paper describes the development of a computer model used to analyze the heat flow during pulsed Nd:YAG laser spot welding of dissimilar metals: low-carbon steel (1020) to aluminum alloy (6061). The model is built using ANSYS FLUENT 3.6 software, with nearly all simulation conditions matched to the experimental environment. A simulation analysis was implemented based on conduction heat transfer outside the keyhole, where no melting occurs. The effects of laser power and pulse duration were studied. Three peak powers (1, 1.66, and 2.5 kW) were varied during pulsed laser spot welding (keeping the energy constant), and the effect of two pulse durations (4 and 8 ms, with constant peak power) on the transient temperature distribution and weld pool dimensions was predicted using the present simulation. It was found that the present simulation model can give an indication for choosing suitable laser parameters (i.e., pulse duration, peak power, and interaction time) during pulsed laser spot welding of dissimilar metals.
NASA Astrophysics Data System (ADS)
Yiran, P.; Li, J.; von Salzen, K.; Dai, T.; Liu, D.
2014-12-01
Mineral dust is a significant contributor to the global and Asian aerosol burden. Currently, large uncertainties still exist in simulated aerosol processes in global climate models (GCMs), which lead to a diversity in dust mass loading and spatial distribution among GCM projections. In this study, satellite measurements from CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization) and observed aerosol data from Asian stations are compared with modelled aerosol in the Canadian Atmospheric Global Climate Model (CanAM4.2). Both seasonal and annual variations in Asian dust distribution are investigated. Vertical profiles of simulated aerosol in the troposphere are evaluated with CALIOP Level 3 products and locally observed extinction for dust and total aerosols. Physical processes in the GCM such as horizontal advection, vertical mixing, and dry and wet removal are analyzed according to model simulations and available aerosol measurements. This work aims to improve current understanding of Asian dust transport and vertical exchange on a large scale, which may help to increase the accuracy of GCM simulations of aerosols.
Model and particle-in-cell simulation of ion energy distribution in collisionless sheath
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Zhuwen, E-mail: zzwwdxy@gznc.edu.cn; Key Laboratory of Photoelectron Materials Design and Simulation in Guizhou Province, Guiyang 550018; Scientific Research Innovation Team in Plasma and Functional Thin Film Materials in Guizhou Province, Guiyang 550018
2015-06-15
In this paper, we propose a self-consistent theoretical model describing the ion energy distributions (IEDs) in collisionless sheaths. The analytical results for different combined dc/radio-frequency (rf) capacitively coupled plasma discharge cases, including an analysis of sheath-voltage errors, are compared with the results of numerical simulations using a one-dimensional plane-parallel particle-in-cell (PIC) code. The IEDs in collisionless sheaths are obtained for discharges with combined dc/rf voltage sources on the electrodes, using argon as the process gas. The incident ions on the grounded electrode are separated according to the different radio frequencies and dc voltages applied to a separate electrode; the IEDs, the energy widths in the sheath, and the plasma sheath thickness are discussed. The IEDs, the IED widths, and the sheath voltages obtained from the theoretical model show good agreement with the PIC simulations.
Computational simulation of the creep-rupture process in filamentary composite materials
NASA Technical Reports Server (NTRS)
Slattery, Kerry T.; Hackett, Robert M.
1991-01-01
A computational simulation of the internal damage accumulation which causes the creep-rupture phenomenon in filamentary composite materials is developed. The creep-rupture process involves complex interactions between several damage mechanisms. A statistically-based computational simulation using a time-differencing approach is employed to model these progressive interactions. The finite element method is used to calculate the internal stresses. The fibers are modeled as a series of bar elements which are connected transversely by matrix elements. Flaws are distributed randomly throughout the elements in the model. Load is applied, and the properties of the individual elements are updated at the end of each time step as a function of the stress history. The simulation is continued until failure occurs. Several cases, with different initial flaw dispersions, are run to establish a statistical distribution of the time-to-failure. The calculations are performed on a supercomputer. The simulation results compare favorably with the results of creep-rupture experiments conducted at the Lawrence Livermore National Laboratory.
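A statistically-based, time-differencing simulation of this kind can be sketched with an equal-load-sharing fiber bundle. The power-law failure rule and all constants below are illustrative, not the finite-element model of the paper.

```python
import random

def creep_rupture_time(n_fibers=200, dt=0.1, load=1.0, seed=3):
    """Time-differencing sketch of creep rupture in a fiber bundle.

    Each surviving fiber carries an equal share of the load; its failure
    probability per time step grows with stress (a power-law breakdown
    rule, chosen here for illustration).  Failed fibers shed load onto
    the survivors until the bundle collapses; the return value is the
    time to failure.
    """
    rng = random.Random(seed)
    intact = n_fibers
    t = 0.0
    while intact > 0:
        stress = load * n_fibers / intact        # equal load sharing
        p_fail = min(1.0, 0.01 * stress ** 2 * dt)
        intact -= sum(1 for _ in range(intact) if rng.random() < p_fail)
        t += dt
    return t
```

Running the simulation for several random flaw realizations (seeds) yields the statistical distribution of time-to-failure that the paper builds from repeated runs.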
Consistent kinetic simulation of plasma and sputtering in low temperature plasmas
NASA Astrophysics Data System (ADS)
Schmidt, Frederik; Trieschmann, Jan; Mussenbrock, Thomas
2016-09-01
Plasmas are commonly used in sputtering applications for the deposition of thin films. Although magnetron sources are a prominent choice, capacitively coupled plasmas have certain advantages (e.g., sputtering of non-conducting and/or ferromagnetic materials, aside from excellent control of the ion energy distribution). In order to understand the collective plasma and sputtering dynamics, a kinetic simulation model is helpful. The particle-in-cell method has proven successful in simulating the plasma dynamics, while the Test-Multi-Particle-Method can be used to describe the sputtered neutral species. In this talk a consistent combination of these methods is presented, coupling the simulated ion flux as input to a neutral particle transport model. The combined model is used to simulate and discuss the spatially dependent densities, fluxes, and velocity distributions of all particles. This work is supported by the German Research Foundation (DFG) in the frame of Transregional Collaborative Research Center (SFB) TR-87.
NASA Technical Reports Server (NTRS)
Kiang, N. Y.; Jablonski, Emma R.; Way, Michael J.; Del Genio, Anthony; Roberge, Aki
2015-01-01
The mean surface temperature of a planet is now acknowledged as insufficient to surmise its full potential habitability. Advancing our understanding requires exploration with 3D general circulation models (GCMs), which can take into account how gradients and fluxes across a planet's surface influence the distribution of heat, clouds, and the potential for heterogeneous distribution of liquid water. Here we present 3D GCM simulations of the effects of alternative stellar spectra, instellation, model resolution, and ocean heat transport, on the simulated distribution of heat and moisture of an Earth-like planet (ELP).
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi; Foudriat, E. C.
1991-01-01
A modeling tool for both analysis and design of distributed systems is discussed. Since many research institutions have access to networks of workstations, the researchers decided to build a tool running on top of the workstations to function as a prototype as well as a distributed simulator for a computing system. The effects of system modeling on performance prediction in distributed systems and the effect of static locking and deadlocks on the performance predictions of distributed transactions are also discussed. While the probability of deadlock is quite small, its effects on performance could be significant.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Himanshu; Palmintier, Bryan S; Krad, Ibrahim
This paper presents the results of a distributed solar PV impact assessment study that was performed using a synthetic integrated transmission (T) and distribution (D) model. The primary objective of the study was to present a new approach for distributed solar PV impact assessment in which, along with detailed models of transmission and distribution networks, consumer loads were modeled using the physics of end-use equipment, and distributed solar PV was geographically dispersed and connected to the secondary distribution networks. The highlights of the study results were (i) an increase in the Area Control Error (ACE) at high penetration levels of distributed solar PV; and (ii) differences in distribution voltage profiles and voltage regulator operations between integrated T&D and distribution-only simulations.
Measurement with microscopic MRI and simulation of flow in different aneurysm models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelhoff, Daniel, E-mail: daniel.edelhoff@tu-dortmund.de; Frank, Frauke; Heil, Marvin
2015-10-15
Purpose: The impact and the development of aneurysms depend to a significant degree on the exchange of liquid between the regular vessel and the pathological extension. A better understanding of this process will lead to improved prediction capabilities. The aim of the current study was to investigate fluid-exchange in aneurysm models of different complexities by combining microscopic magnetic resonance measurements with numerical simulations. In order to evaluate the accuracy and applicability of these methods, the fluid-exchange process between the unaltered vessel lumen and the aneurysm phantoms was analyzed quantitatively using high spatial resolution. Methods: Magnetic resonance flow imaging was used to visualize fluid-exchange in two different models produced with a 3D printer. One model of an aneurysm was based on histological findings. The flow distribution in the different models was measured on a microscopic scale using time of flight magnetic resonance imaging. The whole experiment was simulated using fast graphics processing unit-based numerical simulations. The obtained simulation results were compared qualitatively and quantitatively with the magnetic resonance imaging measurements, taking into account flow and spin–lattice relaxation. Results: The results of both presented methods compared well for the used aneurysm models and the chosen flow distributions. The results from the fluid-exchange analysis showed comparable characteristics concerning measurement and simulation. Similar symmetry behavior was observed. Based on these results, the amount of fluid-exchange was calculated. Depending on the geometry of the models, 7% to 45% of the liquid was exchanged per second. Conclusions: The result of the numerical simulations coincides well with the experimentally determined velocity field. The rate of fluid-exchange between vessel and aneurysm was well-predicted. Hence, the results obtained by simulation could be validated by the experiment.
The observed deviations can be caused by noise in the measurement and by the limited resolution of the simulation. The resulting differences are small enough to allow reliable predictions of the flow distribution in vessels with stents and for pulsed blood flow.
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-01-01
The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model coupled with an advective transport simulation of the Salt Lake Valley, Utah (USA), is adapted for advective transport, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
Zhu, Lin; Gong, Huili; Chen, Yun; Li, Xiaojuan; Chang, Xiang; Cui, Yijiao
2016-01-01
Hydraulic conductivity is a major parameter affecting the output accuracy of groundwater flow and transport models. The most commonly used semi-empirical formula for estimating conductivity is the Kozeny-Carman equation. However, this method alone does not work well with heterogeneous strata. Two important parameters, grain size and porosity, often show spatial variations at different scales. This study proposes a method for estimating conductivity distributions by combining a stochastic hydrofacies model with geophysical methods. A Markov chain model with a transition probability matrix was adopted to reconstruct hydrofacies structures and derive spatial deposit information. The geophysical and hydro-chemical data were used to estimate the porosity distribution through Archie's law. Results show that the stochastically simulated hydrofacies model reflects the sedimentary features with an average model accuracy of 78% in comparison with borehole log data in the Chaobai alluvial fan. The estimated conductivity is reasonable and of the same order of magnitude as the outcomes of the pumping tests. The conductivity distribution is consistent with the sedimentary distributions. This study provides more reliable spatial distributions of the hydraulic parameters for further numerical modeling. PMID:26927886
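For reference, the Kozeny-Carman estimate mentioned above can be computed directly. The 1/180 prefactor used here is one common variant of the equation; others exist.

```python
def kozeny_carman_K(d_m, phi, rho=1000.0, g=9.81, mu=1.0e-3):
    """Hydraulic conductivity [m/s] from the Kozeny-Carman relation.

    k = (d^2 / 180) * phi^3 / (1 - phi)^2    (intrinsic permeability, m^2)
    K = k * rho * g / mu                      (hydraulic conductivity, m/s)

    d_m : representative grain diameter [m] (often d10)
    phi : porosity [-]
    rho, g, mu : water density, gravity, and dynamic viscosity.
    """
    k = (d_m ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2
    return k * rho * g / mu
```

For a medium sand (d = 0.5 mm, phi = 0.35) this gives K on the order of 1e-3 m/s, and K rises steeply with porosity, which is why spatially varying grain size and porosity estimates matter so much for heterogeneous strata.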
The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.
Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun
2018-01-01
Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation result showed that the periodicity of the network energy distribution was positively correlated to the number of neurons and coupling strength, but negatively correlated to signal transmitting delay. Moreover, a relationship was established between the energy distribution feature and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation result of structural neural network based on the Wang-Zhang biophysical model of neurons showed that both models were essentially consistent.
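A single Hodgkin-Huxley neuron of the kind such a network is built from can be integrated with a simple forward-Euler scheme. The classic squid-axon parameters are used; the network coupling and energy-coding machinery of the study are omitted.

```python
import math

def hh_voltage_trace(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of a single Hodgkin-Huxley neuron.

    Classic squid-axon parameters (units: mV, ms, uA/cm^2, mS/cm^2);
    returns the membrane-voltage trace under a constant injected current.
    """
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    v, m, h, n = -65.0, 0.053, 0.596, 0.317   # resting state
    trace = []
    for _ in range(int(t_max / dt)):
        # voltage-dependent gating rates
        a_m = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * math.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * math.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * math.exp(-(v + 65.0) / 80.0)
        i_na = g_na * m ** 3 * h * (v - e_na)   # sodium current
        i_k = g_k * n ** 4 * (v - e_k)          # potassium current
        i_l = g_l * (v - e_l)                   # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        trace.append(v)
    return trace
```

A constant current of 10 uA/cm^2 is suprathreshold, so the trace shows repetitive spiking with overshoot above 0 mV; the ionic currents computed inside the loop are the quantities an energy-coding analysis would integrate over time.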
Water quality modeling in the dead end sections of drinking water distribution networks.
Abokifa, Ahmed A; Yang, Y Jeffrey; Lo, Cynthia S; Biswas, Pratim
2016-02-01
Dead-end sections of drinking water distribution networks are known to be problematic zones in terms of water quality degradation. Extended residence time due to water stagnation leads to rapid reduction of disinfectant residuals, allowing the regrowth of microbial pathogens. Water quality models developed so far apply spatial aggregation and temporal averaging techniques for hydraulic parameters by assigning hourly averaged water demands to the main nodes of the network. Although this practice has generally resulted in minimal loss of accuracy for the predicted disinfectant concentrations in main water transmission lines, this is not the case for the peripheries of the distribution network. This study proposes a new approach for simulating disinfectant residuals in dead-end pipes while accounting for both spatial and temporal variability in hydraulic and transport parameters. A stochastic demand generator was developed to represent residential water pulses based on a non-homogeneous Poisson process. Dispersive solute transport was considered using highly dynamic dispersion rates. A genetic algorithm was used to calibrate the axial hydraulic profile of the dead-end pipe based on the different demand shares of the withdrawal nodes. A parametric sensitivity analysis was done to assess the model performance under variation of different simulation parameters. A group of Monte-Carlo ensembles was carried out to investigate the influence of spatial and temporal variations in flow demands on the simulation accuracy. Three correction factors were analytically derived to adjust residence time, dispersion rate and wall demand to overcome simulation error caused by spatial aggregation approximation. The current model results show better agreement with field-measured concentrations of conservative fluoride tracer and free chlorine disinfectant than the simulations of recent advection dispersion reaction models published in the literature.
Accuracy of the simulated concentration profiles showed significant dependence on the spatial distribution of the flow demands compared to temporal variation. Copyright © 2015 Elsevier Ltd. All rights reserved.
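A stochastic demand generator based on a non-homogeneous Poisson process can be sketched via Lewis-Shedler thinning. The diurnal rate function below is illustrative, not the calibrated residential profile of the study.

```python
import math, random

def residential_pulses(rate_fn, t_end_h=24.0, rate_max=8.0, seed=5):
    """Non-homogeneous Poisson pulse arrival times via thinning.

    rate_fn  : time-varying demand rate [pulses/hour]
    rate_max : upper bound on rate_fn over [0, t_end_h]
    Candidates are drawn from a homogeneous process at rate_max and
    accepted with probability rate_fn(t) / rate_max.
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)      # next homogeneous candidate
        if t > t_end_h:
            return arrivals
        if rng.random() < rate_fn(t) / rate_max:
            arrivals.append(t)              # accepted pulse start time

def diurnal(t_h):
    """Illustrative diurnal demand rate [pulses/hour], peaking at midday."""
    return 4.0 + 3.0 * math.sin(math.pi * (t_h - 6.0) / 12.0)
```

Each accepted arrival would then be assigned a pulse duration and intensity to build the residential demand series that drives the dead-end pipe hydraulics.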
Experimental testing and modeling analysis of solute mixing at water distribution pipe junctions.
Shao, Yu; Jeffrey Yang, Y; Jiang, Lijie; Yu, Tingchao; Shen, Cheng
2014-06-01
Flow dynamics at a pipe junction controls particle trajectories, solute mixing and concentrations in downstream pipes. The effect can lead to different outcomes of water quality modeling and, hence, drinking water management in a distribution network. Here we have investigated solute mixing behavior in pipe junctions of five hydraulic types, for which flow distribution factors and analytical equations for network modeling are proposed. First, based on experiments, the degree of mixing at a cross is found to be a function of flow momentum ratio that defines a junction flow distribution pattern and the degree of departure from complete mixing. Corresponding analytical solutions are also validated using computational-fluid-dynamics (CFD) simulations. Second, the analytical mixing model is further extended to double-Tee junctions. Correspondingly the flow distribution factor is modified to account for hydraulic departure from a cross configuration. For a double-Tee(A) junction, CFD simulations show that the solute mixing depends on flow momentum ratio and connection pipe length, whereas the mixing at double-Tee(B) is well represented by two independent single-Tee junctions with a potential water stagnation zone in between. Notably, double-Tee junctions differ significantly from a cross in solute mixing and transport. However, it is noted that these pipe connections are widely, but incorrectly, simplified as cross junctions of assumed complete solute mixing in network skeletonization and water quality modeling. For the studied pipe junction types, analytical solutions are proposed to characterize the incomplete mixing and hence may allow better water quality simulation in a distribution network. Published by Elsevier Ltd.
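The contrast between complete and incomplete mixing at a cross junction can be sketched with a single blending parameter. The momentum-ratio-based flow distribution factors of the paper are replaced here by a hypothetical scalar `mix`, and an equal outlet split is assumed.

```python
def cross_outlet_concentrations(q_n, q_w, c_n, c_w, mix=1.0):
    """Outlet concentrations at a cross junction (north/west inlets,
    south/east outlets, equal outlet split assumed for simplicity).

    mix = 1 reproduces the complete-mixing assumption of standard network
    solvers; mix < 1 blends toward the 'bulk flow' limit in which each
    inlet short-circuits to the opposite outlet.  The single 'mix'
    parameter is an illustrative stand-in for the momentum-ratio-based
    factors derived in the paper.
    """
    q_tot = q_n + q_w
    c_mixed = (q_n * c_n + q_w * c_w) / q_tot      # flow-weighted average
    c_out_s = mix * c_mixed + (1.0 - mix) * c_n    # south outlet
    c_out_e = mix * c_mixed + (1.0 - mix) * c_w    # east outlet
    return c_out_s, c_out_e
```

With complete mixing both outlets carry the same concentration; with incomplete mixing the outlet opposite the high-concentration inlet stays richer, which is the behavior the paper's momentum-ratio correlations quantify.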
Research on Orbital Plasma-Electrodynamics (ROPE)
NASA Technical Reports Server (NTRS)
Wu, S. T.; Wright, K.
1994-01-01
Since the development of probe theory by Langmuir and Blodgett, the problem of current collection by a charged spherically or cylindrically symmetric body has been investigated by a number of authors. This paper overviews the development of a fully three-dimensional particle simulation code which can be used to understand the physics of current collection in three dimensions and can be used to analyze data resulting from the future tethered satellite system (TSS). According to the TSS configurations, two types of particle simulation models were constructed: a simple particle simulation (SIPS) and a super particle simulation (SUPS). The models study the electron transient response and its asymptotic behavior around a three dimensional, highly biased satellite. The potential distribution surrounding the satellite is determined by solving Laplace's equation in the SIPS model and by solving Poisson's equation in the SUPS model. Thus, the potential distribution in space is independent of the density distribution of the particles in the SIPS model but does depend on the density distribution of the particles in the SUPS model. The evolution of the potential distribution in the SUPS model is described. When the spherical satellite is charged to a highly positive potential and immersed in a plasma with a uniform magnetic field, the formation of an electron torus in the equatorial plane (the plane perpendicular to the magnetic field) and elongation of the torus along the magnetic field are found in both the SIPS and the SUPS models, but the shape of the torus is different. The areas of high potential that exist in the polar regions in the SUPS model exaggerate the elongation of the electron torus along the magnetic field. The current collected by the satellite for different magnetic field strengths is investigated in both models.
Due to the nonlinear effects present in SUPS, the oscillating phenomenon of the current collection curve during the first 10 plasma periods can be seen (this does not appear in SIPS). From the parametric studies, it appears that the oscillating phenomenon of the current collection curve occurs only when the magnetic field strength is less than 0.2 gauss for the present model.
NASA Astrophysics Data System (ADS)
Kaiser, Christopher; Hendricks, Johannes; Righi, Mattia; Jöckel, Patrick
2016-04-01
The reliability of aerosol radiative forcing estimates from climate models depends on the accuracy of the simulated global aerosol distribution and composition, as well as on the models' representation of the aerosol-cloud and aerosol-radiation interactions. To help improve on previous modeling studies, we recently developed the new aerosol microphysics submodel MADE3 that explicitly tracks particle mixing state in the Aitken, accumulation, and coarse mode size ranges. We implemented MADE3 into the global atmospheric chemistry general circulation model EMAC and evaluated it by comparison of simulated aerosol properties to observations. Compared properties include continental near-surface aerosol component concentrations and size distributions, continental and marine aerosol vertical profiles, and nearly global aerosol optical depth. Recent studies have shown the specific importance of aerosol vertical profiles for determination of the aerosol radiative forcing. Therefore, our focus here is on the evaluation of simulated vertical profiles. The observational data are taken from campaigns between 1990 and 2011 over the Pacific Ocean, over North and South America, and over Europe. The datasets include black carbon and total aerosol mass mixing ratios, as well as aerosol particle number concentrations. Compared to other models, EMAC with MADE3 yields good agreement with the observations, despite a generally high bias of the simulated mass mixing ratio profiles. However, BC concentrations are generally overestimated by many models in the upper troposphere. With MADE3 in EMAC, we find better agreement of the simulated BC profiles with HIPPO data than the multi-model average of the models that took part in the AeroCom project. There is an interesting difference between the profiles from individual campaigns and more "climatological" datasets.
For instance, compared to spatially and temporally localized campaigns, the model simulates a more continuous decline in both total aerosol and black carbon mass mixing ratio with altitude than found in the observations. In contrast, measured profiles from the HIPPO project are qualitatively captured well. Similar conclusions hold for the comparison of simulated and measured aerosol particle number concentrations. On the one hand, these results exemplify the difficulty in evaluating the representativeness of the simulated global climatological state of the aerosol by means of comparison with individually measured vertical profiles. On the other hand, it highlights the value of aircraft campaigns with large spatial and temporal coverage for model evaluation.
NASA Astrophysics Data System (ADS)
Isaenkova, Margarita; Perlovich, Yuriy; Zhuk, Dmitry; Krymskaya, Olga
2017-10-01
The rolling of zirconium tube is studied by means of crystal plasticity viscoplastic self-consistent (VPSC) constitutive modeling. The modeling is performed with a dislocation-based constitutive model and a spectral solver from the open-source simulation kit DAMASK. Multi-grain representative volume elements with periodic boundary conditions are used to predict the texture evolution and the distributions of strain and stress. Two models, for randomly textured and partially rolled material, are deformed to a 30% reduction in tube wall thickness and a 7% reduction in tube diameter. The resulting shapes of the models are shown and the strain distributions are plotted. The evolution of grain shape during deformation is also shown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jager, Yetta; Efroymson, Rebecca Ann; Sublette, K.
Quantitative tools are needed to evaluate the ecological effects of increasing petroleum production. In this article, we describe two stochastic models for simulating the spatial distribution of brine spills on a landscape. One model uses general assumptions about the spatial arrangement of spills and their sizes; the second model distributes spills by siting rectangular well complexes and conditioning spill probabilities on the configuration of pipes. We present maps of landscapes with spills produced by the two methods and compare the ability of the models to reproduce a specified spill area. A strength of the models presented here is their ability to extrapolate from the existing landscape to simulate landscapes with a higher (or lower) density of oil wells.
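The first model described above (spills placed under general assumptions about arrangement and size) can be sketched as a short stochastic simulation. The grid size, mean spill count, and lognormal radius parameters below are illustrative assumptions, not values from the study:

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's method for a Poisson-distributed spill count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_spill_landscape(grid=100, mean_spills=20.0,
                             log_radius_mu=0.5, log_radius_sigma=0.4, seed=7):
    """Scatter circular brine spills with lognormal radii on a square grid;
    return the set of covered cells and the covered-area fraction."""
    rng = random.Random(seed)
    covered = set()
    for _ in range(sample_poisson(rng, mean_spills)):
        cx, cy = rng.uniform(0, grid), rng.uniform(0, grid)
        radius = math.exp(rng.gauss(log_radius_mu, log_radius_sigma))
        r_int = int(radius) + 1
        # mark every grid cell whose center falls inside the spill circle
        for i in range(int(cx) - r_int, int(cx) + r_int + 1):
            for j in range(int(cy) - r_int, int(cy) + r_int + 1):
                if 0 <= i < grid and 0 <= j < grid:
                    if (i + 0.5 - cx) ** 2 + (j + 0.5 - cy) ** 2 <= radius ** 2:
                        covered.add((i, j))
    return covered, len(covered) / grid ** 2

covered, fraction = simulate_spill_landscape()
```

Calibrating the spill rate and radius parameters against a target total spill area would reproduce the kind of comparison the article describes.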
ERIC Educational Resources Information Center
Gordovil-Merino, Amalia; Guardia-Olmos, Joan; Pero-Cebollero, Maribel
2012-01-01
In this paper, we used simulations to compare the performance of classical and Bayesian estimations in logistic regression models using small samples. In the performed simulations, conditions were varied, including the type of relationship between independent and dependent variable values (i.e., unrelated and related values), the type of variable…
Ciecior, Willy; Röhlig, Klaus-Jürgen; Kirchner, Gerald
2018-10-01
In the present paper, deterministic as well as first- and second-order probabilistic biosphere modeling approaches are compared. Furthermore, the influence of the shape of the probability distribution function (empirical distribution functions versus fitted lognormal probability functions) representing the aleatory uncertainty (also called variability) of a radioecological model parameter is studied, as is the role of interacting parameters. Differences in the shape of the output distributions for the biosphere dose conversion factor from first-order Monte Carlo uncertainty analysis using empirical and fitted lognormal distribution functions for input parameters suggest that a lognormal approximation is possibly not always an adequate representation of the aleatory uncertainty of a radioecological parameter. Concerning the comparison of the impact of aleatory and epistemic parameter uncertainty on the biosphere dose conversion factor, the epistemic uncertainty is described here using uncertain moments (mean, variance), while the distribution itself represents the aleatory uncertainty of the parameter. The results show that the solution space of second-order Monte Carlo simulation is much larger than that of first-order Monte Carlo simulation; the influence of the epistemic uncertainty of a radioecological parameter on the output is therefore much larger than that caused by its aleatory uncertainty. Parameter interactions are significant only in the upper percentiles of the distribution of results, and only in the region of the upper percentiles of the model parameters.
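The distinction between first- and second-order Monte Carlo can be sketched in a few lines: a first-order run samples only the aleatory variability of a lognormal parameter, while a second-order run adds an outer loop over the epistemically uncertain mean of its logarithm. The parameter values below are illustrative, not taken from the radioecological model:

```python
import math
import random
import statistics

def first_order_mc(n, mu, sigma, seed=1):
    """Aleatory-only sampling: parameter X ~ Lognormal(mu, sigma)."""
    rng = random.Random(seed)
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]

def second_order_mc(n_outer, n_inner, mu0, mu_se, sigma, seed=2):
    """Outer loop samples the epistemically uncertain mean of log X;
    inner loop samples the aleatory variability given that mean."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_outer):
        mu = rng.gauss(mu0, mu_se)  # epistemic draw of the distribution's mean
        samples.extend(math.exp(rng.gauss(mu, sigma)) for _ in range(n_inner))
    return samples

first = first_order_mc(5000, mu=0.0, sigma=1.0)
second = second_order_mc(100, 50, mu0=0.0, mu_se=0.5, sigma=1.0)
```

With these numbers the variance of log X is 1.0 for the first-order run but 1.0 + 0.5² = 1.25 for the second-order run, illustrating the larger solution space reported above.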
Lyke, Stephen D; Voelz, David G; Roggemann, Michael C
2009-11-20
The probability density function (PDF) of aperture-averaged irradiance fluctuations is calculated from wave-optics simulations of a laser after propagating through atmospheric turbulence to investigate the evolution of the distribution as the aperture diameter is increased. The simulation data distribution is compared to theoretical gamma-gamma and lognormal PDF models under a variety of scintillation regimes from weak to strong. Results show that under weak scintillation conditions both the gamma-gamma and lognormal PDF models provide a good fit to the simulation data for all aperture sizes studied. Our results indicate that in moderate scintillation the gamma-gamma PDF provides a better fit to the simulation data than the lognormal PDF for all aperture sizes studied. In the strong scintillation regime, the simulation data distribution is gamma-gamma for aperture sizes much smaller than the coherence radius rho0 and lognormal for aperture sizes on the order of rho0 and larger. Examples of how these results affect the bit-error rate of an on-off keyed free space optical communication link are presented.
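The lognormal half of such a comparison can be sketched by fitting lognormal parameters to irradiance samples via moments of the log-irradiance and computing the scintillation index. The synthetic samples below are a stand-in for actual wave-optics output (the mean of the log is set to -0.045 so the mean irradiance is near 1); the parameters are illustrative:

```python
import math
import random
import statistics

def fit_lognormal(samples):
    """Moment estimates of lognormal parameters from the log-irradiance."""
    logs = [math.log(s) for s in samples]
    return statistics.fmean(logs), statistics.pstdev(logs)

def scintillation_index(samples):
    """sigma_I^2 = <I^2>/<I>^2 - 1, the normalized irradiance variance."""
    mean = statistics.fmean(samples)
    mean_sq = statistics.fmean(s * s for s in samples)
    return mean_sq / mean ** 2 - 1.0

# Synthetic aperture-averaged irradiance, lognormal by construction here.
rng = random.Random(3)
irradiance = [math.exp(rng.gauss(-0.045, 0.3)) for _ in range(10000)]
mu_hat, sigma_hat = fit_lognormal(irradiance)
```

Comparing such fitted PDFs against histograms of the simulated irradiance, regime by regime, is the essence of the analysis described above.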
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. 
We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the ‘ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
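The core emulator idea (a Gaussian process interpolating sparse simulator runs, with predictive variance quantifying the emulator's own uncertainty) can be sketched in one dimension. The toy "simulator" here is just a sine function, and the kernel, length scale, and nugget are illustrative choices, not those used for the weather models above:

```python
import math

def rbf(x1, x2, ls=1.0, var=1.0):
    """Squared-exponential covariance between two inputs."""
    return var * math.exp(-0.5 * ((x1 - x2) / ls) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, xstar, ls=1.0, var=1.0, nugget=1e-8):
    """Posterior mean and variance of a GP emulator at xstar."""
    K = [[rbf(a, b, ls, var) + (nugget if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    kstar = [rbf(x, xstar, ls, var) for x in xs]
    mean = sum(k * a for k, a in zip(kstar, alpha))
    v = solve(K, kstar)  # K^-1 k*
    varstar = rbf(xstar, xstar, ls, var) - sum(k * vi for k, vi in zip(kstar, v))
    return mean, max(varstar, 0.0)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.sin(x) for x in xs]  # cheap stand-in "simulator" runs
mean2, var2 = gp_predict(xs, ys, 2.0)
```

Once trained, Monte Carlo sampling through `gp_predict` rather than through the simulator itself is what makes the probabilistic forecasting described above computationally affordable.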
Modeling and Simulation for a Surf Zone Robot
2012-12-14
of-freedom surf zone robot is developed and tested with a physical test platform and with a simulated robot in Robot Operating System . Derived from...terrain. The application of the model to future platforms is analyzed and a broad examination of the current state of surf zone robotic systems is...public release; distribution is unlimited MODELING AND SIMULATION FOR A SURF ZONE ROBOT Eric Shuey Lieutenant, United States Navy B.S., Systems
Chien, Yu Ching; Wu, Shian Chee; Chen, Wan Ching; Chou, Chih Chung
2013-04-01
Microcystis, a genus of potentially harmful cyanobacteria, is known to proliferate in stratified freshwaters due to its capability to change cell density and regulate buoyancy. In this study, a trajectory model was developed to simulate the cell density change and spatial distribution of Microcystis cells with nonuniform colony sizes. Simulations showed that larger colonies migrate to the near-surface water layer during the night to effectively capture irradiation and become heavy enough to sink during the daytime. Smaller colonies instead took longer to reach the surface. The simulated diurnally varying Microcystis population profile matched the observed pattern in the field when the radii of the multisized colonies followed a beta distribution. This modeling approach is able to take into account the history of cells by keeping track of their positions and properties, such as cell density and colony size. It also serves as a basis for further modeling of phytoplankton that form colonies and regulate buoyancy.
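The size dependence reported above follows from Stokes' law: a colony's vertical velocity scales with the square of its radius and with its density deficit relative to water. A minimal sketch of the velocity and depth update (the density values and time step are illustrative, not those of the study):

```python
def stokes_velocity(radius_m, cell_density, water_density=998.0,
                    viscosity=1.0e-3, g=9.81):
    """Vertical velocity (m/s, positive upward) of a spherical colony:
    w = 2 g r^2 (rho_w - rho_c) / (9 mu)."""
    return 2.0 * g * radius_m ** 2 * (water_density - cell_density) / (9.0 * viscosity)

def update_depth(depth_m, radius_m, cell_density, dt_s):
    """Advance a colony's depth (measured downward) over one time step."""
    new_depth = depth_m - stokes_velocity(radius_m, cell_density) * dt_s
    return max(new_depth, 0.0)  # colonies cannot rise above the surface

# A buoyant (gas-vacuolate) colony rises; doubling the radius quadruples speed.
v_small = stokes_velocity(50e-6, 985.0)   # 50 um radius, buoyant
v_large = stokes_velocity(100e-6, 985.0)  # 100 um radius, buoyant
```

The quadratic radius dependence is why the larger colonies in the simulations reach the surface faster at night than the smaller ones.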
Multinuclear NMR of CaSiO(3) glass: simulation from first-principles.
Pedone, Alfonso; Charpentier, Thibault; Menziani, Maria Cristina
2010-06-21
An integrated computational method which couples classical molecular dynamics simulations with density functional theory calculations is used to simulate the solid-state NMR spectra of amorphous CaSiO(3). Two CaSiO(3) glass models are obtained by shell-model molecular dynamics simulations, successively relaxed at the GGA-PBE level of theory. The calculation of the NMR parameters (chemical shielding and quadrupolar parameters), which are then used to simulate solid-state 1D and 2D-NMR spectra of silicon-29, oxygen-17 and calcium-43, is achieved by the gauge-including projector augmented-wave (GIPAW) and the projector augmented-wave (PAW) methods. It is shown that the limitations due to the finite size of the MD models can be overcome using a kernel density estimation (KDE) approach to simulate the spectra, since it better accounts for the effects of disorder on the NMR parameter distribution. KDE allows reconstructing a smoothed NMR parameter distribution from the MD/GIPAW data. Simulated NMR spectra calculated with the present approach are found to be in excellent agreement with the experimental data. This further validates the CaSiO(3) structural model obtained by MD simulations, allowing the inference of relationships between structural data and NMR response. The methods used to simulate 1D and 2D-NMR spectra from MD/GIPAW data have been integrated in a package (called fpNMR) freely available on request.
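The KDE step can be illustrated independently of the DFT machinery: a finite sample of an NMR parameter is smoothed into a continuous distribution by summing Gaussian kernels with Silverman's rule-of-thumb bandwidth. The synthetic chemical-shift values below are invented for illustration and do not come from the fpNMR package:

```python
import math
import random
import statistics

def gaussian_kde(samples, bandwidth=None):
    """Return a smoothed density function built from Gaussian kernels."""
    n = len(samples)
    if bandwidth is None:
        # Silverman's rule of thumb for the kernel width
        bandwidth = 1.06 * statistics.pstdev(samples) * n ** (-1 / 5)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def density(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return density

# Synthetic 29Si isotropic chemical shifts (ppm), purely illustrative.
rng = random.Random(11)
shifts = [rng.gauss(-81.0, 3.0) for _ in range(200)]
pdf = gaussian_kde(shifts)
```

Evaluating `pdf` on a shift grid gives the smoothed parameter distribution from which a spectrum can then be synthesized, mitigating the finite-size noise of the MD sample.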
Evaluating Domestic Hot Water Distribution System Options With Validated Analysis Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weitzel, E.; Hoeschele, M.
2014-09-01
A developing body of work is forming that collects data on domestic hot water consumption, water use behaviors, and energy efficiency of various distribution systems. A full distribution system developed in TRNSYS has been validated using field monitoring data and then exercised in a number of climates to understand climate impact on performance. This study builds upon previous analysis modelling work to evaluate differing distribution systems and the sensitivities of water heating energy and water use efficiency to variations of climate, load, distribution type, insulation and compact plumbing practices. Overall, 124 different TRNSYS models were simulated. Of the configurations evaluated, distribution losses account for 13-29% of the total water heating energy use and water use efficiency ranges from 11-22%. The base case, an uninsulated trunk-and-branch system, sees the most improvement in energy consumption from insulating and locating the water heater central to all fixtures. Demand recirculation systems are not projected to provide significant energy savings and in some cases increase energy consumption. Water use is most efficient with demand recirculation systems, followed by the insulated trunk-and-branch system with a central water heater. Compact plumbing practices and insulation have the most impact on energy consumption (2-6% for insulation and 3-4% per 10 gallons of enclosed volume reduced). The results of this work are useful in informing future development of water heating best practices guides as well as more accurate (and simulation-time efficient) distribution models for annual whole house simulation programs.
Chaste: A test-driven approach to software development for biological modelling
NASA Astrophysics Data System (ADS)
Pitt-Francis, Joe; Pathmanathan, Pras; Bernabeu, Miguel O.; Bordas, Rafel; Cooper, Jonathan; Fletcher, Alexander G.; Mirams, Gary R.; Murray, Philip; Osborne, James M.; Walter, Alex; Chapman, S. Jon; Garny, Alan; van Leeuwen, Ingeborg M. M.; Maini, Philip K.; Rodríguez, Blanca; Waters, Sarah L.; Whiteley, Jonathan P.; Byrne, Helen M.; Gavaghan, David J.
2009-12-01
Chaste ('Cancer, heart and soft-tissue environment') is a software library and a set of test suites for computational simulations in the domain of biology. Current functionality has arisen from modelling in the fields of cancer, cardiac physiology and soft-tissue mechanics. It is released under the LGPL 2.1 licence. Chaste has been developed using agile programming methods. The project began in 2005 when it was reasoned that the modelling of a variety of physiological phenomena required both a generic mathematical modelling framework, and a generic computational/simulation framework. The Chaste project evolved from the Integrative Biology (IB) e-Science Project, an inter-institutional project aimed at developing a suitable IT infrastructure to support physiome-level computational modelling, with a primary focus on cardiac and cancer modelling.
Program summary
Program title: Chaste
Catalogue identifier: AEFD_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFD_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: LGPL 2.1
No. of lines in distributed program, including test data, etc.: 5 407 321
No. of bytes in distributed program, including test data, etc.: 42 004 554
Distribution format: tar.gz
Programming language: C++
Operating system: Unix
Has the code been vectorised or parallelized?: Yes. Parallelized using MPI.
RAM: <90 Megabytes for two of the scenarios described in Section 6 of the manuscript (Monodomain re-entry on a slab or Cylindrical crypt simulation). Up to 16 Gigabytes (distributed across processors) for full resolution bidomain cardiac simulation.
Classification: 3.
External routines: Boost, CodeSynthesis XSD, CxxTest, HDF5, METIS, MPI, PETSc, Triangle, Xerces
Nature of problem: Chaste may be used for solving coupled ODE and PDE systems arising from modelling biological systems. 
Use of Chaste in two application areas is described in this paper: cardiac electrophysiology and intestinal crypt dynamics. Solution method: Coupled multi-physics with PDE, ODE and discrete mechanics simulation. Running time: The largest cardiac simulation described in the manuscript takes about 6 hours to run on a single 3 GHz core. See the results section (Section 6) of the manuscript for discussion of parallel scaling.
Qinghua, Zhao; Jipeng, Li; Yongxing, Zhang; He, Liang; Xuepeng, Wang; Peng, Yan; Xiaofeng, Wu
2015-04-07
Three-dimensional finite element modeling and biomechanical simulation were employed to evaluate the stability and stress conduction of two postoperative internal fixation models, multilevel posterior instrumentation (MPI) and MPI with anterior instrumentation (MPAI), after en bloc resection of a cervicothoracic vertebral tumor. Mimics software and computed tomography (CT) images were used to establish a three-dimensional (3D) model of vertebrae C5-T2 and to simulate C7 en bloc vertebral resection for the MPI and MPAI models. The data and images were then imported into the ANSYS finite element system; a 20 N distributed load (simulating body weight) and a 1 N·m torque about the neutral point were applied to simulate vertebral displacement and stress conduction and distribution in different motion modes, i.e., flexion, extension, lateral bending and rotation. Indicating better stability, the displacement of the two adjacent vertebral bodies in the MPI and MPAI models was less than that of the intact vertebral model, with no significant difference between the two models. In terms of reducing the stress shielding effect, however, MPI was slightly better than MPAI. From a biomechanical point of view, both internal instrumentations after cervicothoracic tumor en bloc resection can achieve excellent stability, with no significant differences between them; but with its better stress conduction, MPI is more advantageous for postoperative reconstruction.
a Weighted Local-World Evolving Network Model Based on the Edge Weights Preferential Selection
NASA Astrophysics Data System (ADS)
Li, Ping; Zhao, Qingzhen; Wang, Haitang
2013-05-01
In this paper, we use an edge-weight preferential attachment mechanism to build a new local-world evolutionary model for weighted networks. Unlike previous models, the local-world in our model consists of edges instead of nodes. At each time step, we connect a new node to two existing nodes in the local-world through edge-weight preferential selection. Theoretical analysis and numerical simulations show that the scale of the local-world affects the weight distribution, the strength distribution and the degree distribution. We also present simulations of the clustering coefficient and of the dynamics of infectious disease spreading. The weight dynamics of our network model can reproduce the structure of realistic networks such as the neural network of the nematode C. elegans and an online social network.
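One growth step of such a model can be sketched as: sample a local world of edges, choose one edge with probability proportional to its weight, and attach the new node to both of its endpoints. The seed graph, local-world size, and weight increment below are illustrative assumptions, not the paper's parameters:

```python
import random

def grow_network(steps, local_world_size=5, weight_increment=1.0, seed=42):
    """Edge-based local-world growth: each new node attaches to the two
    endpoints of an edge chosen by edge-weight preferential selection."""
    rng = random.Random(seed)
    edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}  # seed triangle
    n_nodes = 3
    for _ in range(steps):
        # the local world is a random subset of existing edges
        pool = rng.sample(list(edges), min(local_world_size, len(edges)))
        chosen = rng.choices(pool, weights=[edges[e] for e in pool])[0]
        u, v = chosen
        new = n_nodes
        n_nodes += 1
        edges[(u, new)] = 1.0  # connect the new node to both endpoints
        edges[(v, new)] = 1.0
        edges[chosen] += weight_increment  # strengthen the selected edge
    return n_nodes, edges

n_nodes, edges = grow_network(100)
```

Each step adds one node and two edges, so the weight, strength, and degree distributions can be read off directly from `edges` after a long run.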
ERIC Educational Resources Information Center
Carey, Cayelan C.; Gougis, Rebekka Darner
2017-01-01
Ecosystem modeling is a critically important tool for environmental scientists, yet is rarely taught in undergraduate and graduate classrooms. To address this gap, we developed a teaching module that exposes students to a suite of modeling skills and tools (including computer programming, numerical simulation modeling, and distributed computing)…
Simulator for concurrent processing data flow architectures
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.; Stoughton, John W.; Mielke, Roland R.
1992-01-01
A software simulator capable of simulating the execution of an algorithm graph on a given system under the Algorithm to Architecture Mapping Model (ATAMM) rules is presented. ATAMM is capable of modeling the execution of large-grained algorithms on distributed data flow architectures. Investigating the behavior and determining the performance of an ATAMM-based system requires the aid of software tools. The ATAMM Simulator presented here is capable of determining the performance of a system without having to build a hardware prototype. Case studies are performed on four algorithms to demonstrate the capabilities of the ATAMM Simulator. Simulated results are shown to be comparable to the experimental results of the Advanced Development Model System.
Model-based Bayesian inference for ROC data analysis
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Bae, K. Ty
2013-03-01
This paper presents a study of model-based Bayesian inference applied to Receiver Operating Characteristic (ROC) data. The model is a simple version of the general non-linear regression model. Different from the Dorfman model, it uses a probit link function with a zero-one covariate variable to express binormal distributions in a single formula. The model also includes a scale parameter. Bayesian inference is implemented by the Markov Chain Monte Carlo (MCMC) method carried out by Bayesian analysis Using Gibbs Sampling (BUGS). In contrast to classical statistical theory, the Bayesian approach considers model parameters as random variables characterized by prior distributions. With a substantial number of simulated samples generated by the sampling algorithm, the posterior distributions of the parameters, as well as the parameters themselves, can be accurately estimated. MCMC-based BUGS adopts the Adaptive Rejection Sampling (ARS) protocol, which requires that the probability density function (pdf) from which samples are drawn be log concave with respect to the targeted parameters. Our study corrects a common misconception and proves that the pdf of this regression model is log concave with respect to its scale parameter. Therefore, ARS's requirement is satisfied, and a Gaussian prior, which is conjugate and possesses many analytic and computational advantages, is assigned to the scale parameter. A cohort of 20 simulated data sets and 20 simulations from each data set are used in our study. Output analysis and convergence diagnostics for the MCMC method are assessed with the CODA package. Models and methods using a continuous Gaussian prior and a discrete categorical prior are compared. Intensive simulations and performance measures are given to illustrate our practice in the framework of model-based Bayesian inference using the MCMC method.
Verification on spray simulation of a pintle injector for liquid rocket engine
NASA Astrophysics Data System (ADS)
Son, Min; Yu, Kijeong; Radhakrishnan, Kanmaniraja; Shin, Bongchul; Koo, Jaye
2016-02-01
The pintle injector used for liquid rocket engines is an injection system that has recently attracted renewed attention, known for its wide throttling range at high efficiency. The pintle injector has many variations with complex inner structures due to its moving parts. In order to study the rotating flow near the injector tip, which was observed in a cold flow experiment using water and air, a numerical simulation was adopted and a verification of the numerical model was conducted. For the verification process, three types of experimental data, including velocity distributions of gas flows, spray angles and liquid distribution, were compared with simulated results. The numerical simulation was performed using a commercial simulation program with the Eulerian multiphase model and axisymmetric two-dimensional grids. The maximum and minimum gas velocities were within the acceptable range of agreement; however, the spray angles showed up to 25% error as the momentum ratio was increased. The spray density distributions were quantitatively measured and showed good agreement. As a result of this study, it was concluded that the simulation method was properly constructed to study specific flow characteristics of the pintle injector, despite the limitations of two-dimensional and coarse grids.
Bayesian modelling of uncertainties of Monte Carlo radiative-transfer simulations
NASA Astrophysics Data System (ADS)
Beaujean, Frederik; Eggers, Hans C.; Kerzendorf, Wolfgang E.
2018-07-01
One of the big challenges in astrophysics is the comparison of complex simulations to observations. As many codes do not directly generate observables (e.g. hydrodynamic simulations), the last step in the modelling process is often a radiative-transfer treatment. For this step, the community relies increasingly on Monte Carlo radiative transfer due to the ease of implementation and scalability with computing power. We consider simulations in which the number of photon packets is Poisson distributed, while the weight assigned to a single photon packet follows any distribution of choice. We show how to estimate the statistical uncertainty of the sum of weights in each bin from the output of a single radiative-transfer simulation. Our Bayesian approach produces a posterior distribution that is valid for any number of packets in a bin, even zero packets, and is easy to implement in practice. Our analytic results for a large number of packets show that we generalize existing methods that are valid only in limiting cases. The statistical problem considered here appears in identical form in a wide range of Monte Carlo simulations including particle physics and importance sampling. It is particularly powerful in extracting information when the available data are sparse or quantities are small.
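The statistical setting can be reproduced in a few lines: packet counts per bin are Poisson, weights arbitrary, and for such a compound Poisson sum the variance of the binned total is lambda E[w²], which a single simulation estimates by the sum of squared weights. The rate and weight distribution below are arbitrary illustrative choices, and the paper's Bayesian posterior itself is not reproduced here:

```python
import math
import random
import statistics

def sample_poisson(rng, lam):
    """Knuth's algorithm for a Poisson-distributed packet count."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_bin(rng, lam):
    """One radiative-transfer 'bin': Poisson packet count, exponential weights.
    Returns the summed weight and the single-run variance estimate sum(w^2)."""
    weights = [rng.expovariate(1.0) for _ in range(sample_poisson(rng, lam))]
    return sum(weights), sum(w * w for w in weights)

rng = random.Random(5)
totals, var_estimates = zip(*(simulate_bin(rng, 50.0) for _ in range(2000)))
emp_var = statistics.pvariance(totals)        # variance across repetitions
est_var = statistics.fmean(var_estimates)     # mean single-run estimate
```

For exponential(1) weights, E[w²] = 2, so the bin variance is 50 × 2 = 100; the single-run estimator agrees with the empirical variance across repetitions, which is the frequentist limit of the Bayesian result described above.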
Dong, Ren G; Dong, Jennie H; Wu, John Z; Rakheja, Subhash
2007-01-01
The objective of this study is to develop analytical models for simulating the driving-point biodynamic responses distributed at the fingers and palm of the hand under vibration along the forearm direction (z(h)-axis). Two different clamp-like model structures are formulated to analyze the distributed responses at the fingers-handle and palm-handle interfaces, as opposed to the single driving point invariably considered in previously reported models. The parameters of the proposed four- and five-degrees-of-freedom models are identified through minimization of an rms error function of the model and measured responses under different hand actions, namely, fingers pull, push only, grip only, and combined push and grip. The results show that the responses predicted by both models agree reasonably well with the measured data in terms of distributed as well as total impedance magnitude and phase. The variations in the identified model parameters under different hand actions are further discussed in view of the biological system behavior. The proposed models can serve as useful tools for the design and assessment of vibration isolation methods, and for developing a hand-arm simulator for vibration analysis of power tools.
Architectural Improvements and New Processing Tools for the Open XAL Online Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K; Pelaia II, Tom; Freed, Jonathan M
The online model is the component of Open XAL providing accelerator modeling, simulation, and dynamic synchronization to live hardware. Significant architectural changes and feature additions have been recently made in two separate areas: 1) the managing and processing of simulation data, and 2) the modeling of RF cavities. Simulation data and data processing have been completely decoupled. A single class manages all simulation data while standard tools were developed for processing the simulation results. RF accelerating cavities are now modeled as composite structures where parameter and dynamics computations are distributed. The beam and hardware models both maintain their relative phase information, which allows for dynamic phase slip and elapsed time computation.
On extending parallelism to serial simulators
NASA Technical Reports Server (NTRS)
Nicol, David; Heidelberger, Philip
1994-01-01
This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.
Changes in Arctic Sea Ice Thickness and Floe Size
NASA Astrophysics Data System (ADS)
Zhang, J.; Schweiger, A. J. B.; Stern, H. L., III; Steele, M.
2016-12-01
A thickness, floe size, and enthalpy distribution sea ice model was implemented into the Pan-arctic Ice-Ocean Modeling and Assimilation System (PIOMAS) by coupling the Zhang et al. [2015] sea ice floe size distribution (FSD) theory with the Thorndike et al. [1975] ice thickness distribution (ITD) theory in order to explicitly simulate multicategory FSD and ITD simultaneously. A range of ice thickness and floe size observations were used for model calibration and validation. The expanded, validated PIOMAS was used to study sea ice response to atmospheric and oceanic changes in the Arctic, focusing on the interannual variability and trends of ice thickness and floe size over the period 1979-2015. It is found that over the study period both ice thickness and floe size have been decreasing steadily in the Arctic. The simulated ice thickness shows considerable spatiotemporal variability in recent years. As the ice cover becomes thinner and weaker, the model simulates an increasing number of small floes (at the low end of the FSD), which affects sea ice properties, particularly in the marginal ice zone.
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2016-11-01
The mixing volume model (MVM), a mixing model for molecular diffusion in Lagrangian simulations of turbulent mixing problems, is proposed based on the interactions among spatially distributed particles in a finite volume. The mixing timescale in the MVM is derived by comparison between the model and the subgrid-scale scalar variance equation. An a priori test of the MVM is conducted based on direct numerical simulations of planar jets. The MVM is shown to predict well the mean effects of molecular diffusion under various conditions. However, the predicted value of the molecular diffusion term is positively correlated with the exact value in the DNS only when the number of mixing particles is larger than two. Furthermore, the MVM is tested in a hybrid implicit large-eddy-simulation/Lagrangian-particle-simulation (ILES/LPS). The ILES/LPS with the present mixing model predicts well the decay of the scalar variance in planar jets. This work was supported by JSPS KAKENHI Nos. 25289030 and 16K18013. The numerical simulations presented in this manuscript were carried out on the high performance computing system (NEC SX-ACE) at the Japan Agency for Marine-Earth Science and Technology.
NASA Astrophysics Data System (ADS)
Ram, Farangis; De Graef, Marc
2018-04-01
In an electron backscatter diffraction pattern (EBSP), the angular distribution of backscattered electrons (BSEs) depends on their energy. Monte Carlo modeling of their depth and energy distributions suggests that the highest-energy BSEs are more likely to hit the bottom of the detector than the top. In this paper, we examine experimental EBSPs to validate the modeled angular BSE distribution. To that end, the Kikuchi bandlet method is employed to measure the width of Kikuchi bands in both modeled and measured EBSPs. The results show that in an EBSP obtained with a 15 keV primary probe, the width of a Kikuchi band varies by about 0.4° from the bottom of the EBSD detector to its top. The same is true for a simulated pattern that is composed of BSEs with 5 keV to 15 keV energies, which validates the Monte Carlo simulations.
Bonded-cell model for particle fracture.
Nguyen, Duc-Hanh; Azéma, Emilien; Sornay, Philippe; Radjai, Farhang
2015-02-01
Particle degradation and fracture play an important role in natural granular flows and in many applications of granular materials. We analyze the fracture properties of two-dimensional disklike particles modeled as aggregates of rigid cells bonded along their sides by a cohesive Mohr-Coulomb law and simulated by the contact dynamics method. We show that the compressive strength scales with tensile strength between cells but depends also on the friction coefficient and a parameter describing cell shape distribution. The statistical scatter of compressive strength is well described by the Weibull distribution function with a shape parameter varying from 6 to 10 depending on cell shape distribution. We show that this distribution may be understood in terms of percolating critical intercellular contacts. We propose a random-walk model of critical contacts that leads to particle size dependence of the compressive strength in good agreement with our simulation data.
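The Weibull analysis of the strength scatter can be sketched as a standard Weibull plot: regress ln(-ln(1-F)) against ln(strength) using median ranks, so the slope estimates the shape parameter. The synthetic strengths below are generated with a known shape, standing in for the contact-dynamics results:

```python
import math
import random

def weibull_shape(strengths):
    """Estimate the Weibull shape parameter by least squares on the
    linearized CDF: ln(-ln(1-F)) = m*ln(x) - m*ln(scale)."""
    xs_sorted = sorted(strengths)
    n = len(xs_sorted)
    # median ranks F_i = (i - 0.3)/(n + 0.4) with 1-based rank i
    pts = [(math.log(x), math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))))
           for i, x in enumerate(xs_sorted)]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
    return sxy / sxx  # slope of the Weibull plot = shape parameter

# Synthetic strengths with known shape m = 8 via inverse-CDF sampling.
rng = random.Random(9)
strengths = [10.0 * (-math.log(1.0 - rng.random())) ** (1.0 / 8.0)
             for _ in range(500)]
m_hat = weibull_shape(strengths)
```

Applying the same fit to simulated compressive strengths for different cell-shape distributions would recover shape parameters in the 6 to 10 range reported above.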
A Distributed Snow Evolution Modeling System (SnowModel)
NASA Astrophysics Data System (ADS)
Liston, G. E.; Elder, K.
2004-12-01
A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.
NASA Astrophysics Data System (ADS)
Grams, G.; Giraud, S.; Fantina, A. F.; Gulminelli, F.
2018-03-01
The aim of the present study is to calculate the nuclear distribution associated, at finite temperature, with any given equation of state of stellar matter based on the Wigner-Seitz approximation, for direct applications in core-collapse simulations. The Gibbs free energy of the different configurations is explicitly calculated, with special care devoted to the calculation of rearrangement terms, ensuring thermodynamic consistency. The formalism is illustrated with two different applications. First, we work out the nuclear statistical equilibrium cluster distribution for the Lattimer and Swesty equation of state, widely employed in supernova simulations. Second, we explore the effect of including shell structure, and consider realistic nuclear mass tables from the Brussels-Montreal Hartree-Fock-Bogoliubov model (specifically, HFB-24). We show that the whole collapse trajectory is dominated by magic nuclei, with very broad and even bimodal distributions of the cluster probability around magic numbers, demonstrating the importance of cluster distributions with realistic mass models in core-collapse simulations. Simple analytical expressions are given, allowing further applications of the method to any relativistic or nonrelativistic subsaturation equation of state.
Simulation of concentration distribution of urban particles under wind
NASA Astrophysics Data System (ADS)
Chen, Yanghou; Yang, Hangsheng
2018-02-01
High concentrations of particulate matter in the air seriously affect people's health, and particle concentrations in densely populated towns are correspondingly high. Understanding how particles are distributed in the air helps in avoiding them passively. The concentration distribution of particles in urban streets is simulated using the FLUENT software, with the analysis based on FLUENT's Discrete Phase Modelling (DPM). Simulation results show that the distribution of the particles is shaped by the layout of the buildings: the windward areas of buildings and the leeward sides of high-rise buildings are the areas with high particle concentrations. Knowing the particle concentration in different areas also helps people avoid, and reduce their exposure in, high-concentration areas.
General relativistic magnetohydrodynamical κ-jet models for Sagittarius A*
NASA Astrophysics Data System (ADS)
Davelaar, J.; Mościbrodzka, M.; Bronzwaer, T.; Falcke, H.
2018-04-01
Context. The observed spectral energy distribution of an accreting supermassive black hole typically forms a power-law spectrum at near-infrared (NIR) and optical wavelengths, which may be interpreted as a signature of electrons accelerated along the jet. However, the details of the acceleration remain uncertain. Aims: In this paper, we study the radiative properties of jets produced in axisymmetric general relativistic magnetohydrodynamics (GRMHD) simulations of hot accretion flows onto underluminous supermassive black holes, both numerically and semi-analytically, with the aim of investigating the differences between models with and without accelerated electrons inside the jet. Methods: We assume that electrons are accelerated in the jet regions of our GRMHD simulation. To model them, we modify the electrons' distribution function in the jet regions from a purely relativistic thermal distribution to a combination of a relativistic thermal distribution and the κ-distribution function (the κ-distribution function is itself a combination of a relativistic thermal and a non-thermal power-law distribution, and thus it describes accelerated electrons). Inside the disk, we assume a thermal distribution for the electrons. In order to resolve the particle acceleration regions in the GRMHD simulations, we use a coordinate grid that is optimized for modeling jets. We calculate jet spectra and synchrotron maps by using the ray tracing code RAPTOR, and compare the synthetic observations to observations of Sgr A*. Finally, we compare numerical models of jets to semi-analytical ones. Results: We find that in the κ-jet models, the radio-emitting region size, radio flux, and spectral index in the NIR/optical bands increase for decreasing values of the κ parameter, which corresponds to a larger amount of accelerated electrons. This is in agreement with analytical predictions.
In our models, the size of the emission region depends roughly linearly on the observed wavelength λ, independently of the assumed distribution function. The model with κ = 3.5, ηacc = 5-10% (the percentage of electrons that are accelerated), and observing angle i = 30° fits the observed Sgr A* emission in the flaring state from the radio to the NIR/optical regimes, while κ = 3.5, ηacc < 1%, and observing angle i = 30° fit the upper limits in quiescence. At this point, our models (including the purely thermal ones) cannot reproduce the observed source sizes accurately, which is probably due to the assumption of axisymmetry in our GRMHD simulations. The κ-jet models naturally recover the observed nearly-flat radio spectrum of Sgr A* without invoking the somewhat artificial isothermal jet model that was suggested earlier. Conclusions: From our model fits we conclude that between 5% and 10% of the electrons inside the jet of Sgr A* are accelerated into a κ distribution function when Sgr A* is flaring. In quiescence, we match the NIR upper limits when this percentage is <1%.
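As a rough illustration of why the κ parameter controls the non-thermal tail, a toy (non-relativistic, unnormalised) κ energy distribution can be written down; the paper's electrons follow the relativistic κ-distribution, so this sketches only the scaling:

```python
import math

def kappa_pdf(E, kappa=3.5, w=1.0):
    """Toy (non-relativistic, unnormalised) kappa energy distribution:
    a thermal core of characteristic energy w with a power-law tail.
    Illustrative only; the paper uses the relativistic form."""
    return math.sqrt(E) * (1.0 + E / (kappa * w)) ** (-(kappa + 1.0))

# In the tail (E >> kappa*w) the log-log slope approaches 1/2 - (kappa + 1),
# i.e. -4 for kappa = 3.5: smaller kappa means a harder non-thermal tail.
E1, E2 = 1.0e3, 1.0e4
slope = ((math.log(kappa_pdf(E2)) - math.log(kappa_pdf(E1)))
         / (math.log(E2) - math.log(E1)))
```

The power-law tail is what carries the accelerated electrons responsible for the NIR/optical emission; decreasing κ moves more electrons into the tail, consistent with the trends reported above.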
NASA Astrophysics Data System (ADS)
Zhou, Chunhong; Shen, Xiaojing; Liu, Zirui; Zhang, Yangmei; Xin, Jinyuan
2018-04-01
A coupled aerosol-cloud model is essential for investigating the formation of haze and fog and the interaction of aerosols with clouds and precipitation. One of the key tasks of such a model is to produce correct mass and number size distributions of aerosols. In this paper, a parameterization scheme for the aerosol size distribution in initial emission, which took into account the measured mass and number size distributions of aerosols, was developed in the GRAPES-CUACE [Global/Regional Assimilation and PrEdiction System-China Meteorological Administration (CMA) Unified Atmospheric Chemistry Environment model]—an online chemical weather forecast system that contains microphysical processes and the emission, transport, and chemical conversion of sectional multi-component aerosols. In addition, the competitive mechanism between nucleation and condensation for secondary aerosol formation was improved, and the dry deposition was modified to be consistent with the realistic deposition length. Based on the above improvements, the GRAPES-CUACE simulations were verified against observational data during 1-31 January 2013, when a series of heavy regional haze-fog events occurred in eastern China. The results show that the aerosol number size distribution from the improved experiment was much closer to the observations, whereas in the old experiment the number concentration was higher in the nucleation mode and lower in the accumulation mode. Meanwhile, the errors in the aerosol number size distribution as diagnosed from its sectional mass size distribution were also reduced. Moreover, simulations of organic carbon, sulfate, and other aerosol components were improved, and both the overestimation and the underestimation of PM2.5 concentration in eastern China were significantly reduced, increasing the correlation coefficient between simulated and observed PM2.5 by more than 70%.
In remote areas where the simulation previously performed poorly, the correlation coefficient grew from 0.35 to 0.61, and the simulated mean mass concentration rose from 43% to 87.5% of the observed value. The simulation of particulate matter in these areas has thus improved considerably.
Regional model simulations of New Zealand climate
NASA Astrophysics Data System (ADS)
Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.
1998-03-01
Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21-resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperature improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different from those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.
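Comparing simulated and observed frequency distributions, as above, can be done for example with a two-sample Kolmogorov-Smirnov statistic; the abstract does not name the test used, so this stdlib-only sketch with synthetic Gaussian "daily maximum temperature" samples is purely illustrative:

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs (merge-style sweep)."""
    a, b = sorted(a), sorted(b)
    ia = ib = 0
    d = 0.0
    while ia < len(a) and ib < len(b):
        if a[ia] <= b[ib]:
            ia += 1
        else:
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

random.seed(0)
obs = [random.gauss(12.0, 4.0) for _ in range(500)]      # "observed" daily Tmax
sim_ok = [random.gauss(12.0, 4.0) for _ in range(500)]   # unbiased simulation
sim_warm = [random.gauss(15.0, 4.0) for _ in range(500)] # simulation with a warm bias
```

The biased sample yields a much larger statistic than the unbiased one; comparing the statistic against a critical value (roughly 1.36*sqrt(2/n) at the 5% level for equal sample sizes n) then decides "statistically similar" versus "statistically different".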
Peters, Sheila Annie
2008-01-01
Despite recent advances in understanding the role of the gut as a metabolizing organ, recognition of gut wall metabolism and/or other factors contributing to intestinal loss of a compound has been a challenging task due to the lack of well-characterized methods to distinguish it from first-pass hepatic extraction. The implications of identifying intestinal loss of a compound in drug discovery and development can be enormous. Physiologically based pharmacokinetic (PBPK) simulations of pharmacokinetic profiles provide a simple, reliable and cost-effective way to understand the mechanisms underlying pharmacokinetic processes. The purpose of this article is to demonstrate the application of PBPK simulations in bringing to light intestinal loss of orally administered drugs, using two example compounds: verapamil and an in-house compound that is no longer in development (referred to as compound A in this article). A generic PBPK model, built in-house using MATLAB software and incorporating absorption, metabolism, distribution, and biliary and renal elimination models, was employed for simulation of concentration-time profiles. Intrinsic hepatic clearance and tissue distribution parameters in the generic PBPK model were adjusted to achieve a good fit to the observed intravenous pharmacokinetic profiles of the compounds studied. These optimized clearance and distribution parameters are expected to be invariant across different routes of administration, as long as the kinetics are linear, and were therefore employed to simulate the oral profiles of the compounds. For compounds with reasonably good solubility and permeability, a simulated oral area under the concentration-time curve that far exceeds the observed one indicates some kind of loss in the intestine. PBPK simulations applied to compound A showed substantial loss of the compound in the gastrointestinal tract in humans but not in rats.
This accounted for the lower bioavailability of the compound in humans than in rats. PBPK simulations of verapamil identified gut wall metabolism, well established in the literature, and showed large interspecies differences with respect to both gut wall metabolism and drug-induced delays in gastric emptying. Mechanistic insights provided by PBPK simulations can be very valuable in answering vital questions in drug discovery and development. However, such applications of PBPK models are limited by the lack of accurate inputs for clearance and distribution. This article demonstrates a successful application of PBPK simulations to identify and quantify intestinal loss of two model compounds in rats and humans. The limitation of inaccurate inputs for the clearance and distribution parameters was overcome by optimizing these parameters through fitting intravenous profiles. The study also demonstrated that the large interspecies differences associated with gut wall metabolism and gastric emptying, evident for the compounds studied, make animal model extrapolations to humans unreliable. It is therefore important to do PBPK simulations of human pharmacokinetic profiles to understand the relevance of intestinal loss of a compound in humans.
NASA Astrophysics Data System (ADS)
Raj, R.; Hamm, N. A. S.; van der Tol, C.; Stein, A.
2015-08-01
Gross primary production (GPP), separated from flux tower measurements of net ecosystem exchange (NEE) of CO2, is used increasingly to validate process-based simulators and remote sensing-derived estimates of simulated GPP at various time steps. Proper validation should include the uncertainty associated with this separation at different time steps. This can be achieved by using a Bayesian framework. In this study, we estimated the uncertainty in GPP at half-hourly time steps. We used a non-rectangular hyperbola (NRH) model to separate GPP from flux tower measurements of NEE at the Speulderbos forest site, The Netherlands. The NRH model included the variables that influence GPP, in particular radiation and temperature. In addition, the NRH model provided a robust empirical relationship between radiation and GPP by including the degree of curvature of the light response curve. Parameters of the NRH model were fitted to the measured NEE data for every 10-day period during the growing season (April to October) in 2009. Adopting a Bayesian approach, we defined the prior distribution of each NRH parameter. Markov chain Monte Carlo (MCMC) simulation was used to update the prior distribution of each NRH parameter. This yielded the posterior distribution of GPP at each half hour, allowing the uncertainty in the separated GPP to be quantified. The time series of posterior distributions thus obtained allowed us to estimate the uncertainty at daily time steps. We compared informative with non-informative prior distributions of the NRH parameters. The results showed that both choices of prior produced similar posterior distributions of GPP. This will provide relevant and important information for the validation of process-based simulators in the future. Furthermore, the obtained posterior distributions of NEE and the NRH parameters are of interest for a range of applications.
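One common form of the non-rectangular hyperbola (NRH) light-response curve is shown below; the paper's exact parameterisation and fitted values are not given in the abstract, so the parameter values here are illustrative assumptions:

```python
import math

def nrh_gpp(par, alpha=0.05, pmax=20.0, theta=0.8):
    """Non-rectangular hyperbola light-response curve (a common form):
    the smaller root of theta*G^2 - (alpha*par + pmax)*G + alpha*par*pmax = 0.
    par: radiation; alpha: initial slope; pmax: asymptote; theta: curvature.
    Parameter values are illustrative, not the paper's fitted values."""
    s = alpha * par + pmax
    return (s - math.sqrt(s * s - 4.0 * theta * alpha * par * pmax)) / (2.0 * theta)
```

A Bayesian fit along the lines described above would place priors on (alpha, pmax, theta), update them with MCMC against the measured NEE, and propagate the resulting posterior samples through this curve to obtain the half-hourly GPP distributions.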
Instabilities and Turbulence Generation by Pick-Up Ion Distributions in the Outer Heliosheath
NASA Astrophysics Data System (ADS)
Weichman, K.; Roytershteyn, V.; Delzanno, G. L.; Pogorelov, N.
2017-12-01
Pick-up ions (PUIs) play a significant role in the dynamics of the heliosphere. One problem that has attracted significant attention is the stability of ring-like distributions of PUIs and the electromagnetic fluctuations that could be generated by PUI distributions. For example, PUI stability is relevant to theories attempting to identify the origins of the IBEX ribbon. PUIs have previously been investigated by linear stability analysis of model (e.g. Gaussian) rings and corresponding computer simulations. The majority of these simulations utilized particle-in-cell methods which suffer from accuracy limitations imposed by the statistical noise associated with representing the plasma by a relatively small number of computational particles. In this work, we utilize highly accurate spectral Vlasov simulations conducted using the fully kinetic implicit code SPS (Spectral Plasma Solver) to investigate the PUI distributions inferred from a global heliospheric model (Heerikhuisen et al., 2016). Results are compared with those obtained by hybrid and fully kinetic particle-in-cell methods.
Numerical simulation of thermal stress distributions in Czochralski-grown silicon crystals
NASA Astrophysics Data System (ADS)
Kumar, M. Avinash; Srinivasan, M.; Ramasamy, P.
2018-04-01
Numerical simulation is one of the important tools in the investigation and optimization of single-crystal silicon grown by the Czochralski (Cz) method. A 2D steady global heat transfer model was used to investigate the temperature distribution and the thermal stress distributions at a particular crystal position during the Cz growth process. The computation determines thermal stresses such as the von Mises stress and the maximum shear stress distribution along the grown crystal, and indicates possible reasons for dislocation formation in Cz-grown single-crystal silicon.
NASA Astrophysics Data System (ADS)
Zhang, Hongda; Han, Chao; Ye, Taohong; Ren, Zhuyin
2016-03-01
A method of chemistry tabulation combined with a presumed probability density function (PDF) is applied to simulate piloted premixed jet burner flames with high Karlovitz number using large eddy simulation. Thermo-chemistry states are tabulated by a combination of the auto-ignition and extended auto-ignition models. To evaluate the capability of the proposed tabulation method to represent the thermo-chemistry states under different fresh-gas temperatures, an a priori study is conducted by performing idealised transient one-dimensional premixed flame simulations. A presumed PDF is used to account for the interaction of turbulence and flame, with a beta PDF modeling the distribution of the reaction progress variable. Two presumed PDF models, a Dirichlet distribution and independent beta distributions, are applied to represent the interaction between the two mixture fractions associated with the three inlet streams. Comparisons of statistical results show that the two presumed PDF models for the two mixture fractions are both capable of predicting temperature and major species profiles; however, they have a significant effect on the predictions of intermediate species. An analysis of the thermo-chemical state-space representation of the sub-grid scale (SGS) combustion model is performed by comparing correlations between the carbon monoxide mass fraction and temperature. The SGS combustion model based on the proposed chemistry tabulation can reasonably capture the peak value and trend of intermediate species. Aspects regarding model extensions to adequately predict the peak location of intermediate species are discussed.
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated distributions of SRAM writability performance closely match measurements. Our proposed statistical compact model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.
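The closing idea, approximating a non-Gaussian circuit metric with a mixture of Gaussians, can be sketched as follows; the component weights, means, and standard deviations are invented for illustration:

```python
import random

def sample_gaussian_mixture(weights, mus, sigmas, n, rng):
    """Draw n samples from a 1-D Gaussian mixture. Mixtures of Gaussians
    can reproduce the skewed, non-Gaussian spread of a circuit metric;
    all component values here are invented for illustration."""
    out = []
    for _ in range(n):
        u, acc = rng.random(), 0.0
        comp = len(weights) - 1                 # fall back to last component
        for j, w in enumerate(weights):
            acc += w
            if u <= acc:
                comp = j
                break
        out.append(rng.gauss(mus[comp], sigmas[comp]))
    return out

rng = random.Random(7)
xs = sample_gaussian_mixture([0.7, 0.3], [0.0, 2.5], [0.5, 1.0], 20000, rng)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
skew = sum((x - mean) ** 3 for x in xs) / len(xs) / var ** 1.5
```

A single Gaussian has zero skewness; the clearly positive sample skewness here is the kind of non-Gaussian behavior (e.g. in a writability margin) that a mixture can capture while a single normal distribution cannot.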
NASA Astrophysics Data System (ADS)
Liu, Yaoze; Engel, Bernard A.; Flanagan, Dennis C.; Gitau, Margaret W.; McMillan, Sara K.; Chaubey, Indrajeet; Singh, Shweta
2018-05-01
Best management practices (BMPs) are popular approaches used to improve hydrology and water quality. Uncertainties in BMP effectiveness over time may result in overestimating long-term efficiency in watershed planning strategies. To represent varying long-term BMP effectiveness in hydrologic/water quality models, a high-level, forward-looking modeling framework was developed. The components of the framework are establishment-period efficiency, starting efficiency, efficiency for each storm event, efficiency between maintenance, and efficiency over the life cycle. Combined, they represent long-term efficiency for a specific type of practice and a specific environmental concern (runoff/pollutant). An approach for possible implementation of the framework was discussed. The long-term impacts of grass buffer strips (an agricultural BMP) and bioretention systems (an urban BMP) in reducing total phosphorus were simulated to demonstrate the framework. Data gaps in estimating the long-term performance of the BMPs were identified. A Bayesian method was used to match the simulated distribution of long-term BMP efficiencies with the observed distribution, under the assumption that the observed data represented long-term BMP efficiencies. The simulated distribution matched the observed distribution well, with only small total predictive uncertainties. With additional data, the same method can be used to further improve the simulation results. The modeling framework and results of this study, which can be adopted in hydrologic/water quality models to better represent long-term BMP effectiveness, can help improve decision support systems for creating long-term stormwater management strategies for watershed management projects.
Jurgens, Bryant; Böhlke, John Karl; Kauffman, Leon J.; Belitz, Kenneth; Esser, Bradley K.
2016-01-01
A partial exponential lumped parameter model (PEM) was derived to determine age distributions and nitrate trends in long-screened production wells. The PEM can simulate age distributions for wells screened over any finite interval of an aquifer that has an exponential distribution of age with depth. The PEM has three parameters – the ratios of the depths of the top and bottom of the screen to the saturated thickness, and the mean age – but these can be reduced to one parameter (the mean age) by using well-construction information and estimates of the saturated thickness. The PEM was tested with data from 30 production wells in a heterogeneous alluvial-fan aquifer in California, USA. Well-construction data were used to guide parameterization of a PEM for each well, and the mean age was calibrated to measured environmental tracer data (3H, 3He, CFC-113, and 14C). Results were compared to age distributions generated for individual wells using advective particle tracking models (PTMs). Age distributions from PTMs were more complex than PEM distributions, but PEMs provided better fits to tracer data, partly because the PTMs did not simulate 14C accurately in wells that captured varying amounts of old groundwater recharged at lower rates prior to groundwater development and irrigation. Nitrate trends were simulated independently of the calibration process, and the PEM provided good fits for at least 11 of 24 wells. This work shows that the PEM, and lumped parameter models (LPMs) in general, can often identify critical features of the age distributions in wells that are needed to explain observed tracer data and nonpoint-source contaminant trends, even in systems where aquifer heterogeneity and water use complicate the distributions of age. While accurate PTMs are preferable for understanding and predicting aquifer-scale responses to water use and contaminant transport, LPMs can be sensitive to local conditions near individual wells that may be inaccurately represented or missing in an aquifer-scale flow model.
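A simplified sketch of the PEM idea — the aquifer-wide exponential age distribution (mean tau) restricted to the age range [t1, t2] that maps to the screened interval, then renormalised — can be written as follows; this illustrates the concept only, not the published PEM parameterisation:

```python
import math

def pem_pdf(t, tau, t1, t2):
    """Sketch of a partial exponential age distribution: an exponential
    pdf with mean tau, truncated to [t1, t2] and renormalised."""
    if t < t1 or t > t2:
        return 0.0
    norm = math.exp(-t1 / tau) - math.exp(-t2 / tau)
    return math.exp(-t / tau) / (tau * norm)

def pem_mean_age(tau, t1, t2, n=20000):
    """Mean of the truncated distribution via the trapezoid rule."""
    acc = 0.0
    h = (t2 - t1) / n
    for i in range(n + 1):
        t = t1 + (t2 - t1) * i / n
        w = 0.5 if i in (0, n) else 1.0
        acc += w * t * pem_pdf(t, tau, t1, t2)
    return acc * h

# Example: a well whose screen samples ages between 5 and 60 years in an
# aquifer with an exponential mean age of 30 years (illustrative numbers).
mean_age = pem_mean_age(30.0, 5.0, 60.0)
```

In practice the single free parameter (the mean age) would be calibrated so that tracer concentrations convolved with this distribution reproduce the measured 3H, 3He, CFC-113, and 14C data.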
LaFontaine, Jacob H.; Jones, L. Elliott; Painter, Jaime A.
2017-12-29
A suite of hydrologic models has been developed for the Apalachicola-Chattahoochee-Flint River Basin (ACFB) as part of the National Water Census, a U.S. Geological Survey research program that focuses on developing new water accounting tools and assessing water availability and use at the regional and national scales. Seven hydrologic models were developed using the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, process-based system that simulates the effects of precipitation, temperature, land cover, and water use on basin hydrology. A coarse-resolution PRMS model was developed for the entire ACFB, and six fine-resolution PRMS models were developed for six subbasins of the ACFB. The coarse-resolution model was loosely coupled with a groundwater model to better assess the effects of water use on streamflow in the lower ACFB, a complex geologic setting with karst features. The PRMS coarse-resolution model was used to provide inputs of recharge to the groundwater model, which in turn provide simulations of groundwater flow that were aggregated with PRMS-based simulations of surface runoff and shallow-subsurface flow. Simulations without the effects of water use were developed for each model for at least the calendar years 1982–2012 with longer periods for the Potato Creek subbasin (1942–2012) and the Spring Creek subbasin (1952–2012). Water-use-affected flows were simulated for 2008–12. Water budget simulations showed heterogeneous distributions of precipitation, actual evapotranspiration, recharge, runoff, and storage change across the ACFB. Streamflow volume differences between no-water-use and water-use simulations were largest along the main stem of the Apalachicola and Chattahoochee River Basins, with streamflow percentage differences largest in the upper Chattahoochee and Flint River Basins and Spring Creek in the lower Flint River Basin. 
Water-use information at a shorter time step and a fully coupled simulation in the lower ACFB may further improve water availability estimates and hydrologic simulations in the basin.
NASA Astrophysics Data System (ADS)
Price, Jonathan S.; Woo, Ming-Ko
1990-12-01
A two-dimensional advection-dispersion model of solute transport is used to simulate the long-term changes in the chloride distribution of the young, isostatically raised beach ridge and depression sequences in a James Bay coastal marsh. The USGS-SUTRA model reproduces the hydraulic conditions in the wetland, causing recharge of freshwater to the ridges and discharge of saline water to the inter-ridge depressions, demonstrating the importance of the vertical fluxes of water and chloride. Even though water velocities are very low, molecular diffusion alone cannot explain the observed chloride distribution. Imposing the characteristics of a frozen surface during winter eliminated the vertical fluxes and doubled the time required for the simulated chloride distribution to match the field data. The model correctly predicts the observed pattern of suppressed salinity beneath the ridges and a general decrease of salinity with distance inland. The results are useful in understanding the processes that operate in the first 100 years of marsh development.
Hydroclimatic Controls on the Means and Variability of Vegetation Phenology and Carbon Uptake
NASA Technical Reports Server (NTRS)
Koster, Randal Dean; Walker, Gregory K.; Collatz, George J.; Thornton, Peter E.
2013-01-01
Long-term, global offline (land-only) simulations with a dynamic vegetation phenology model are used to examine the control of hydroclimate over vegetation-related quantities. First, with a control simulation, the model is shown to capture successfully (though with some bias) key observed relationships between hydroclimate and the spatial and temporal variations of phenological expression. In subsequent simulations, the model shows that: (i) the global spatial variation of seasonal phenological maxima is controlled mostly by hydroclimate, irrespective of distributions in vegetation type, (ii) the occurrence of high interannual moisture-related phenological variability in grassland areas is determined by hydroclimate rather than by the specific properties of grassland, and (iii) hydroclimatic means and variability have a corresponding impact on the spatial and temporal distributions of gross primary productivity (GPP).
Evapotranspiration (ET), a highly dynamic flux in wetland landscapes, regulates the accuracy of surface/sub-surface runoff simulation in a hydrologic model. However, considerable uncertainty in simulating ET-related processes remains, including our limited ability to incorporate ...
Shinohara, Ayaka; Hanaoka, Hirofumi; Sakashita, Tetsuya; Sato, Tatsuhiko; Yamaguchi, Aiko; Ishioka, Noriko S; Tsushima, Yoshito
2018-02-01
Radionuclide therapy with low-energy Auger electron emitters may provide high antitumor efficacy while keeping the toxicity to normal organs low. Here we evaluated the usefulness of an Auger electron emitter and compared it with that of a beta emitter for tumor treatment in in vitro models, and conducted a dosimetry simulation using radioiodine-labeled metaiodobenzylguanidine (MIBG) as a model compound. We evaluated the cellular uptake of 125I-MIBG and the therapeutic effects of 125I- and 131I-MIBG in 2D and 3D PC-12 cell culture models. We used a Monte Carlo simulation code (PHITS) to calculate the absorbed radiation dose of 125I or 131I in computer simulation models for 2D and 3D cell cultures. In the dosimetry calculation for the 3D model, several distribution patterns of the radionuclide were applied. A higher cumulative dose was observed in the 3D model due to the prolonged retention of MIBG compared to the 2D model. However, 125I-MIBG showed a greater therapeutic effect in the 2D model compared to the 3D model (respective EC50 values in the 2D and 3D models: 86.9 and 303.9 MBq/cell), whereas 131I-MIBG showed the opposite result (respective EC50 values in the 2D and 3D models: 49.4 and 30.2 MBq/cell). The therapeutic effect of 125I-MIBG was lower than that of 131I-MIBG in both models, but the radionuclide-derived difference was smaller in the 2D model. The dosimetry simulation with PHITS revealed the influence of the radiation quality, the crossfire effect, the radionuclide distribution, and the tumor shape on the absorbed dose. Application of the heterogeneous distribution series dramatically changed the radiation dose distribution of 125I-MIBG and mitigated the difference between the estimated and measured therapeutic effects of 125I-MIBG.
The therapeutic effect of 125 I-MIBG was comparable to that of 131 I-MIBG in the 2D model, but the efficacy was inferior to that of 131 I-MIBG in the 3D model, since the crossfire effect is negligible and the homogeneous distribution of radionuclides was insufficient. Thus, auger electrons would be suitable for treating small-sized tumors. The design of radiopharmaceuticals with auger electron emitters requires particularly careful consideration of achieving a homogeneous distribution of the compound in the tumor.
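The higher cumulative dose from prolonged retention in the 3D model can be illustrated with a simple MIRD-style estimate. This is a hedged sketch, not the study's PHITS calculation: the biological half-lives and the initial activity below are illustrative assumptions, and only the 125I physical half-life (~59.4 days) is a known constant.

```python
import math

def cumulated_activity(a0_mbq, t_phys_h, t_bio_h):
    """Time-integrated ("cumulated") activity in MBq*h for mono-exponential
    retention: 1/T_eff = 1/T_phys + 1/T_bio, and the integral of
    A0 * exp(-ln2 * t / T_eff) over [0, inf) is A0 * T_eff / ln 2."""
    t_eff = 1.0 / (1.0 / t_phys_h + 1.0 / t_bio_h)
    return a0_mbq * t_eff / math.log(2.0)

T_PHYS_125I = 59.4 * 24.0   # hours; physical half-life of 125I is ~59.4 days

# Illustrative biological half-lives (not values from the study): the same
# initial uptake, but longer MIBG retention in the 3-D culture gives a larger
# cumulated activity, hence a larger absorbed dose for a fixed S-value.
a_tilde_2d = cumulated_activity(1.0, T_PHYS_125I, t_bio_h=24.0)
a_tilde_3d = cumulated_activity(1.0, T_PHYS_125I, t_bio_h=72.0)
```

In MIRD dosimetry the absorbed dose is the cumulated activity times an S-value, so with equal geometry the slower-clearing culture receives proportionally more dose.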
NASA Astrophysics Data System (ADS)
Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke
2015-08-01
Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for aquifer thermal energy storage (ATES) systems and wells. Recent model studies indicate that meter-scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In a study site in Bierbeek, Belgium, the influence of centimeter-scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3-3.6 %) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6-10.2 %) on the energy output of the ATES system. It is concluded that it is important to incorporate small-scale heterogeneities in heat transport models to get a better estimate on ATES efficiency and distribution of thermal energy.
NASA Astrophysics Data System (ADS)
Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke
2015-04-01
Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for Aquifer Thermal Energy Storage (ATES) systems and wells. Recent model studies indicate that meter-scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In this paper, the influence of centimeter-scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3-3.6%) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6-10.2%) on the energy output of the ATES system. It is concluded that it is important to incorporate small-scale heterogeneities in heat transport models to get a better estimate on ATES efficiency and distribution of thermal energy.
NASA Astrophysics Data System (ADS)
Liu, Shaojie; Doughty, Austin; Mesiya, Sana; Pettitt, Alex; Zhou, Feifan; Chen, Wei R.
2017-02-01
Temperature distribution in tissue is a crucial factor in determining the outcome of photothermal therapy in cancer treatment. In order to investigate the temperature distribution in tumor tissue during laser irradiation, we developed a novel ex vivo device to simulate photothermal therapy on tumors. A thermostatic incubator at 35°C was used to provide a simulated body-temperature environment of live animals. Different biological tissues (chicken breast and bovine liver) were buried inside a tissue-simulating gel and treated as tumor tissues. An 805-nm laser was used to irradiate the target tissue. A fiber with an interstitial cylindrical diffuser (10 mm) was inserted directly into the center of the tissue, and the needle probes of a thermocouple were inserted into the tissue parallel to the laser fiber at different distances to measure the temperature distribution. All of the procedures were performed in the incubator. Based on the results of this study, the temperature distribution in bovine liver is similar to that of tumor tissue under photothermal therapy with the same doses. Therefore, the developed model using bovine liver for determining temperature distribution can be used during interstitial photothermal therapy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, N; Shen, C; Tian, Z
Purpose: Monte Carlo (MC) simulation is typically regarded as the most accurate dose calculation method for proton therapy. Yet for real clinical cases, the overall accuracy also depends on that of the MC beam model. Commissioning a beam model to faithfully represent a real beam requires finely tuning a set of model parameters, which can be tedious given the large number of pencil beams to commission. This abstract reports an automatic beam-model commissioning method for pencil-beam scanning proton therapy via an optimization approach. Methods: We modeled a real pencil beam with energy and spatial spread following Gaussian distributions. Mean energy, energy spread, and spatial spread are the model parameters. To commission against a real beam, we first performed MC simulations to calculate dose distributions for a set of ideal (monoenergetic, zero-size) pencil beams. The dose distribution for a real pencil beam is hence a linear superposition of the doses for those ideal pencil beams, with weights in the Gaussian form. We formulated the commissioning task as an optimization problem, such that the calculated central-axis depth dose and lateral profiles at several depths match the corresponding measurements. An iterative algorithm combining a conjugate gradient method and parameter fitting was employed to solve the optimization problem. We validated our method in simulation studies. Results: We calculated dose distributions for three real pencil beams with nominal energies of 83, 147 and 199 MeV using realistic beam parameters. These data were regarded as measurements and used for commissioning. After commissioning, the average differences in energy and beam spread between determined values and ground truth were 4.6% and 0.2%. With the commissioned model, we recomputed dose. Mean dose differences from measurements were 0.64%, 0.20% and 0.25%.
Conclusion: The developed automatic MC beam-model commissioning method for pencil-beam scanning proton therapy can determine beam model parameters with satisfactory accuracy.
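The superposition idea behind the commissioning can be sketched in miniature. This is a hedged toy, not the authors' implementation: the "ideal pencil beam" depth-dose curves below are Gaussian stand-ins for Bragg peaks with an invented range-energy relation, and a coarse grid search replaces the paper's conjugate-gradient fitting.

```python
import math

def gauss(x, mu, sigma):
    """Normalized Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Hypothetical library of ideal (monoenergetic) pencil-beam depth doses; a
# Gaussian bump stands in for the Bragg peak, with a toy range-energy relation
# (peak depth in mm == energy in MeV). Real curves would come from MC runs.
ENERGIES = [float(e) for e in range(140, 161)]        # MeV grid
DEPTHS = [float(d) for d in range(100, 181, 2)]       # mm
LIBRARY = {e: [gauss(d, e, 4.0) for d in DEPTHS] for e in ENERGIES}

def real_beam_depth_dose(mean_e, spread):
    """Real beam = superposition of ideal beams with Gaussian energy weights."""
    w = [gauss(e, mean_e, spread) for e in ENERGIES]
    s = sum(w)
    w = [wi / s for wi in w]
    return [sum(wi * LIBRARY[e][k] for wi, e in zip(w, ENERGIES))
            for k in range(len(DEPTHS))]

def commission(measured):
    """Grid search (a stand-in for the paper's iterative optimizer) for the
    mean energy and energy spread minimizing the squared mismatch with the
    measured central-axis depth dose."""
    best = None
    for mu10 in range(1480, 1521):         # 148.0 .. 152.0 MeV, 0.1 MeV steps
        for s10 in range(10, 26):          # 1.0 .. 2.5 MeV, 0.1 MeV steps
            mu, sg = mu10 / 10.0, s10 / 10.0
            model = real_beam_depth_dose(mu, sg)
            err = sum((m - d) ** 2 for m, d in zip(measured, model))
            if best is None or err < best[0]:
                best = (err, mu, sg)
    return best[1], best[2]

measured = real_beam_depth_dose(150.3, 1.7)   # synthetic "measurement"
mu_fit, spread_fit = commission(measured)
```

Because the synthetic measurement is generated by the same forward model, the search recovers the true parameters exactly; with real measurements the residual would instead quantify model adequacy.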
NASA Astrophysics Data System (ADS)
Gherghel-Lascu, A.; Apel, W. D.; Arteaga-Velázquez, J. C.; Bekk, K.; Bertania, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; de Souza, V.; Di Pierro, F.; Doll, P.; Engel, R.; Fuhrmann, D.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huber, D.; Huege, T.; Kampert, K.-H.; Kang, D.; Klages, H. O.; Link, K.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Oehlschläger, J.; Ostapchenko, S.; Palmieri, N.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schoo, S.; Schröder, F. G.; Sima, O.; Toma, G.; Trinchero, G. C.; Ulrich, H.; Weindl, A.; Wochele, J.; Zabierowski, J.
2017-06-01
The charged particle densities obtained from CORSIKA-simulated EAS using the QGSJet-II.04 hadronic interaction model are used for primary energy reconstruction. Simulated data are reconstructed using Lateral Energy Correction Functions computed with a new realistic model of the Grande stations implemented in Geant4.10.
Electrical Power Distribution and Control Modeling and Analysis
NASA Technical Reports Server (NTRS)
Fu, Johnny S.; Liffring, Mark; Mehdi, Ishaque S.
2001-01-01
This slide presentation reviews the use of Electrical Power Distribution and Control (EPD&C) Modeling and how modeling can support analysis. The presentation discusses using the EASY5 model to simulate and analyze the Space Shuttle Electric Auxiliary Power Unit. Diagrams of the model schematics are included, as well as graphs of the battery cell impedance, hydraulic load dynamics, and EPD&C response to hydraulic load variations.
SAMICS marketing and distribution model
NASA Technical Reports Server (NTRS)
1978-01-01
SAMICS (Solar Array Manufacturing Industry Costing Standards) was formulated as a computer simulation model. Given a proper description of the manufacturing technology as input, this model computes the manufacturing price of solar arrays for a broad range of production levels. This report presents a model for computing the associated marketing and distribution costs, the end point of the model being the loading dock of the final manufacturer.
NASA Astrophysics Data System (ADS)
Revuelto, J.; Dumont, M.; Tuzet, F.; Vionnet, V.; Lafaysse, M.; Lecourt, G.; Vernay, M.; Morin, S.; Cosme, E.; Six, D.; Rabatel, A.
2017-12-01
Snowpack models nowadays show a good capability for simulating the evolution of snow in mountain areas. However, singular deviations in the meteorological forcing and shortcomings in the modelling of snow physical processes, when accumulated over a snow season, can produce large deviations from the real snowpack state. These deviations are usually evaluated with on-site observations from automatic weather stations. Nevertheless, the location of these stations can strongly influence the results of such evaluations, since local topography may have a marked influence on snowpack evolution. Although evaluations of snowpack models against automatic weather stations usually reveal good results, large-scale evaluations of simulation results over heterogeneous alpine terrain subject to local topographic effects are lacking. This work firstly presents a complete evaluation of the detailed snowpack model Crocus over an extended mountain area, the upper Arve catchment (western European Alps). This catchment has a wide elevation range, with a large area above 2000 m a.s.l. and/or glaciated. The evaluation compares results obtained with distributed and semi-distributed simulations (the latter currently used in operational forecasting). Daily observations of the snow-covered area from the MODIS satellite sensor, the seasonal glacier surface mass balance measured at more than 65 locations, and the glacier annual equilibrium-line altitudes from Landsat/SPOT/ASTER satellites have been used for model evaluation. Additionally, the latest advances in producing ensemble snowpack simulations for assimilating satellite reflectance data over extended areas will be presented. These advances comprise the generation of an ensemble of downscaled high-resolution meteorological forcings from meso-scale meteorological models and the application of a particle filter scheme for assimilating satellite observations.
Although the results are preliminary, they show good potential for improving snowpack forecasting capabilities.
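The particle filter scheme mentioned for assimilating satellite observations can be sketched in its simplest bootstrap form. This is a hedged, scalar-state illustration, not the operational scheme: the ensemble here is a list of snow water equivalent values with invented statistics, and the observation error is an assumed Gaussian.

```python
import math, random
random.seed(0)

def particle_filter_step(particles, obs, obs_err):
    """One assimilation cycle of a bootstrap particle filter: weight each
    ensemble member by the Gaussian likelihood of the observation, then
    apply systematic resampling to refocus the ensemble."""
    w = [math.exp(-0.5 * ((p - obs) / obs_err) ** 2) for p in particles]
    total = sum(w)
    w = [wi / total for wi in w]
    n = len(particles)
    c, acc = [], 0.0            # cumulative weights
    for wi in w:
        acc += wi
        c.append(acc)
    u = random.random() / n     # single random offset (systematic resampling)
    out, i = [], 0
    for k in range(n):
        pos = u + k / n
        while i < n - 1 and c[i] < pos:
            i += 1
        out.append(particles[i])
    return out

# Prior ensemble of simulated snow water equivalent (mm); purely illustrative
# numbers, not from the study.
prior = [random.gauss(250.0, 40.0) for _ in range(500)]
posterior = particle_filter_step(prior, obs=300.0, obs_err=15.0)
```

After one cycle the ensemble mean moves toward the observation and its spread contracts, which is the mechanism by which accumulated model deviations are corrected mid-season.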
NASA Astrophysics Data System (ADS)
Ravi, Koustuban; Wang, Qian; Ho, Seng-Tiong
2015-08-01
We report a new computational model for simulations of electromagnetic interactions with semiconductor quantum well(s) (SQW) in complex electromagnetic geometries using the finite-difference time-domain method. The presented model is based on an approach of spanning a large number of electron transverse momentum states in each SQW sub-band (multi-band) with a small number of discrete multi-electron states (multi-level, multi-electron). This enables accurate and efficient two-dimensional (2-D) and three-dimensional (3-D) simulations of nanophotonic devices with SQW active media. The model includes the following features: (1) Optically induced interband transitions between various SQW conduction and heavy-hole or light-hole sub-bands are considered. (2) Novel intra sub-band and inter sub-band transition terms are derived to thermalize the electron and hole occupational distributions to the correct Fermi-Dirac distributions. (3) The terms in (2) result in an explicit update scheme which circumvents numerically cumbersome iterative procedures. This significantly augments computational efficiency. (4) Explicit update terms to account for carrier leakage to unconfined states are derived, which thermalize the bulk and SQW populations to a common quasi-equilibrium Fermi-Dirac distribution. (5) Auger recombination and intervalence band absorption are included. The model is validated by comparisons to analytic band-filling calculations, simulations of SQW optical gain spectra, and photonic crystal lasers.
Simulating statistics of lightning-induced and man-made fires
NASA Astrophysics Data System (ADS)
Krenn, R.; Hergarten, S.
2009-04-01
The frequency-area distributions of forest fires show power-law behavior with scaling exponents α in a quite narrow range, relating wildfire research to the theoretical framework of self-organized criticality. Examples of self-organized critical behavior can be found in computer simulations of simple cellular automata. The established self-organized critical Drossel-Schwabl forest fire model (DS-FFM) is one of the most widespread models in this context. Despite its qualitative agreement with event-size statistics from nature, its applicability is still questioned. Apart from general concerns that the DS-FFM apparently oversimplifies the complex nature of forest dynamics, it significantly overestimates the frequency of large fires. We present a straightforward modification of the model rules that increases the scaling exponent α by approximately 1/3 and brings the simulated event-size statistics close to those observed in nature. In addition, combined simulations of both the original and the modified model predict a dependence of the overall distribution on the ratio of lightning-induced and man-made fires as well as a difference between their respective event-size statistics. The increase of the scaling exponent with decreasing lightning probability as well as the splitting of the partial distributions are confirmed by analysis of the Canadian Large Fire Database. As a consequence, lightning-induced and man-made forest fires cannot be treated separately in wildfire modeling, hazard assessment, and forest management.
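The classic DS-FFM rules are compact enough to sketch directly. This is the standard (unmodified) model, not the authors' rule change, and the grid size and probabilities below are illustrative choices for a quick run rather than values from the paper.

```python
import random
random.seed(1)

def dsffm(size=32, growth=0.2, lightning=0.02, steps=60000):
    """Minimal Drossel-Schwabl forest-fire automaton: pick a random cell;
    an empty cell grows a tree with probability `growth`, and a struck tree
    (probability `lightning`) burns its whole connected cluster at once.
    Returns the list of fire sizes (burned cluster areas)."""
    EMPTY, TREE = 0, 1
    grid = [[EMPTY] * size for _ in range(size)]
    fires = []
    for _ in range(steps):
        x, y = random.randrange(size), random.randrange(size)
        if grid[x][y] == EMPTY:
            if random.random() < growth:
                grid[x][y] = TREE
        elif random.random() < lightning:
            # burn the connected cluster via flood fill (periodic boundaries)
            stack, burned = [(x, y)], 0
            grid[x][y] = EMPTY
            while stack:
                i, j = stack.pop()
                burned += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = (i + di) % size, (j + dj) % size
                    if grid[ni][nj] == TREE:
                        grid[ni][nj] = EMPTY
                        stack.append((ni, nj))
            fires.append(burned)
    return fires

fire_sizes = dsffm()
```

Collecting `fire_sizes` over long runs and binning them yields the event-size statistics whose power-law tail is the quantity discussed in the abstract; the self-organized critical regime strictly requires growth and lightning rates much smaller than used in this quick demonstration.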
Three-moment representation of rain in a cloud microphysics model
NASA Astrophysics Data System (ADS)
Paukert, M.; Fan, J.; Rasch, P. J.; Morrison, H.; Milbrandt, J.; Khain, A.; Shpund, J.
2017-12-01
Two-moment microphysics schemes have been commonly used for cloud simulation in models across different scales, from large-eddy simulations to global climate models. These schemes have yielded valuable insights into cloud and precipitation processes; however, the size distributions are limited to two degrees of freedom, and thus the shape parameter is typically fixed or diagnosed. We have developed a three-moment approach for the rain category in order to provide an additional degree of freedom to the size distribution and thereby improve the cloud microphysics representation for more accurate weather and climate simulations. The approach is applied to the Predicted Particle Properties (P3) scheme. In addition to the rain number and mass mixing ratios predicted in the two-moment P3, we now include prognostic equations for the sixth moment of the size distribution (radar reflectivity), thus allowing the shape parameter to evolve freely. We employ the spectral bin microphysics (SBM) model to formulate the three-moment process rates in P3 for drop collisions and breakup. We first test the three-moment scheme with a maritime stratocumulus case from the VOCALS field campaign, and compare the model results with respect to cloud and precipitation properties from the new P3 scheme, the original two-moment P3 scheme, the SBM, and in situ aircraft measurements. The improved simulation results of the new P3 scheme will be discussed and physically explained.
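How a third predicted moment frees the shape parameter can be shown with the standard gamma size distribution used in bulk schemes. This is a hedged sketch of the generic moment-inversion step, not the P3 implementation; the moment triple (0th, 3rd, 6th) and the round-trip numbers are illustrative.

```python
import math

def gamma_moment_ratio(mu):
    """G(mu) = M3**2 / (M0 * M6) for a gamma size distribution
    N(D) = N0 * D**mu * exp(-lam * D). The n-th moment is
    M_n = N0 * Gamma(mu + n + 1) / lam**(mu + n + 1), so N0 and lam cancel
    in G, which depends only on the shape parameter mu."""
    lg = math.lgamma
    return math.exp(2 * lg(mu + 4) - lg(mu + 1) - lg(mu + 7))

def shape_from_moments(m0, m3, m6, lo=-0.99, hi=60.0):
    """Recover mu from three predicted moments by bisection; G is
    monotonically increasing in mu, so the root is unique."""
    target = m3 * m3 / (m0 * m6)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if gamma_moment_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip with arbitrary illustrative parameters: build the three moments
# from a known shape, then recover it as a prognostic scheme would each step.
mu_true, lam, n0 = 2.5, 2.0e3, 8.0e6
def moment(n):
    return n0 * math.gamma(mu_true + n + 1) / lam ** (mu_true + n + 1)
mu_est = shape_from_moments(moment(0), moment(3), moment(6))
```

With only two prognostic moments the ratio G is not available, which is exactly why two-moment schemes must fix or diagnose mu.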
Electro-osmotic flow of a model electrolyte
NASA Astrophysics Data System (ADS)
Zhu, Wei; Singer, Sherwin J.; Zheng, Zhi; Conlisk, A. T.
2005-04-01
Electro-osmotic flow is studied by nonequilibrium molecular dynamics simulations in a model system chosen to elucidate various factors affecting the velocity profile and facilitate comparison with existing continuum theories. The model system consists of spherical ions and solvent, with stationary, uniformly charged walls that make a channel with a height of 20 particle diameters. We find that hydrodynamic theory adequately describes simple pressure-driven (Poiseuille) flow in this model. However, Poisson-Boltzmann theory fails to describe the ion distribution in important situations, and therefore continuum fluid dynamics based on the Poisson-Boltzmann ion distribution disagrees with simulation results in those situations. The failure of Poisson-Boltzmann theory is traced to the exclusion of ions near the channel walls resulting from reduced solvation of the ions in that region. When a corrected ion distribution is used as input for hydrodynamic theory, agreement with numerical simulations is restored. An analytic theory is presented that demonstrates that repulsion of the ions from the channel walls increases the flow rate, and attraction to the walls has the opposite effect. A recent numerical study of electro-osmotic flow is reanalyzed in the light of our findings, and the results conform well to our conclusions for the model system.
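The continuum baseline that the simulations interrogate can be sketched in its simplest (linearized) form. This is a hedged illustration of the Debye-Hückel potential between two like-charged walls and the Boltzmann ion distribution it implies, precisely the mean-field picture the paper shows breaking down near the walls; the channel dimensions and potentials are invented round numbers.

```python
import math

def dh_potential(z, psi_s, kappa, h):
    """Linearized Poisson-Boltzmann (Debye-Hueckel) potential between two
    identically charged walls at z = +-h/2; psi_s is the wall potential and
    1/kappa the Debye screening length."""
    return psi_s * math.cosh(kappa * z) / math.cosh(kappa * h / 2.0)

def boltzmann_density(psi, n0, valence, kT_over_e=0.0257):
    """Ion number density from the Boltzmann distribution,
    n(z) = n0 * exp(-valence * psi / (kT/e)), with kT/e in volts
    (~25.7 mV at room temperature)."""
    return n0 * math.exp(-valence * psi / kT_over_e)

# Illustrative channel: negatively charged walls (psi_s = -50 mV), Debye
# length 1 nm, wall separation 6 nm. Monovalent counterions (valence +1)
# pile up at the walls; co-ions are depleted there.
psi_wall = dh_potential(3.0, -0.05, 1.0, 6.0)   # equals psi_s at the wall
psi_mid = dh_potential(0.0, -0.05, 1.0, 6.0)
n_counter_wall = boltzmann_density(psi_wall, 1.0, +1)
n_counter_mid = boltzmann_density(psi_mid, 1.0, +1)
```

The paper's finding is that the molecular simulations deviate from this near-wall enrichment because reduced ion solvation excludes ions from the wall region, so the corrected ion profile, not this one, must feed the hydrodynamic calculation.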
Space evolution model and empirical analysis of an urban public transport network
NASA Astrophysics Data System (ADS)
Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing
2012-07-01
This study explores the space evolution of an urban public transport network, using empirical evidence and a simulation model validated on those data. Public transport patterns primarily depend on the spatial distribution of traffic, the demands of passengers, and the expected utility of investors. Evolution is an iterative process of satisfying the needs of passengers and investors given a traffic spatial distribution. The temporal change of the urban public transport network is evaluated using both topological and spatial measures. The simulation model is validated using empirical data from nine big cities in China. Statistical analyses of topological and spatial attributes suggest that an evolved network whose traffic demands follow a power-law distribution arranged in concentric circles agrees well with these nine cities.
Water Network Tool for Resilience v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-12-09
WNTR is a Python package designed to simulate and analyze resilience of water distribution networks. The software includes:
- Pressure-driven and demand-driven hydraulic simulation
- Water quality simulation to track concentration, trace, and water age
- Conditional controls to simulate power outages
- Models to simulate pipe breaks
- A wide range of resilience metrics
- Analysis and visualization tools
NASA Astrophysics Data System (ADS)
Lee, Taesam
2018-05-01
Multisite stochastic simulations of daily precipitation have been widely employed in hydrologic analyses for climate change assessment and as agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of this copula modeling. The results indicate a significant underestimation of the correlation in the simulated data compared to the observed data. We therefore proposed an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations, using the full relationship between the correlation of the observed data and that of the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, the method was not reliable in application. We therefore further improved a simulation-based method (SBM) that was originally developed to model multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well, reproducing cross-correlations around 0.2 closer to the observations than the direct method and around 0.1 closer than the indirect method. The three models were applied to stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlations. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
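The correlation underestimation of the direct approach is easy to reproduce. This hedged two-site sketch imposes a cross-correlation in the normal (copula) domain and measures what survives after the gamma marginal transform; the shape-1 gamma (exponential) is used only because its inverse CDF is closed-form, and rho = 0.8 is an arbitrary choice.

```python
import math, random
random.seed(42)

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def corr(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (sx * sy)

rho = 0.8            # cross-correlation imposed in the normal (copula) domain
site1, site2 = [], []
for _ in range(20000):
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * random.gauss(0.0, 1.0)
    # Gaussian copula pushed through a gamma marginal (shape 1, so the
    # inverse CDF is -log(1 - u)); clamp u to avoid log(0) at extreme draws.
    u1 = min(norm_cdf(z1), 1.0 - 1e-12)
    u2 = min(norm_cdf(z2), 1.0 - 1e-12)
    site1.append(-math.log(1.0 - u1))
    site2.append(-math.log(1.0 - u2))

r_sim = corr(site1, site2)   # correlation in the precipitation-amount domain
```

The correlation recovered in the amount domain falls below the imposed 0.8, which is the attenuation the indirect method compensates for by inflating the normal-domain correlation before simulation.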
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Pratt, Annabelle; Bialek, Tom
2016-11-21
This paper reports on tools and methodologies developed to study the impact of adding rooftop photovoltaic (PV) systems, with and without the ability to provide voltage support, on the voltage profile of distribution feeders. Simulation results are provided from a study of a specific utility feeder. The simulation model of the utility distribution feeder was built in OpenDSS and verified by comparing the simulated voltages to field measurements. First, we set all PV systems to operate at unity power factor and analyzed the impact on feeder voltages. Then we conducted multiple simulations with voltage support activated for all the smart PV inverters, including different constant power factor settings and volt/VAR controls.
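The volt/VAR control mentioned above is typically a piecewise-linear droop. This is a hedged generic sketch in the style of IEEE 1547 volt-var curves, not the settings used in the study; the breakpoints and reactive-power limit below are illustrative defaults.

```python
def volt_var(v_pu, v_low=0.95, v_dead_lo=0.98, v_dead_hi=1.02, v_high=1.05,
             q_max=0.44):
    """Piecewise-linear volt/VAR droop of the kind used by smart PV
    inverters: inject reactive power (q > 0) when voltage is low, absorb
    (q < 0) when high, with a deadband around nominal. Voltage and q are in
    per-unit; q_max is per-unit of rated apparent power."""
    if v_pu <= v_low:
        return q_max
    if v_pu < v_dead_lo:
        return q_max * (v_dead_lo - v_pu) / (v_dead_lo - v_low)
    if v_pu <= v_dead_hi:
        return 0.0                       # deadband: no reactive support
    if v_pu < v_high:
        return -q_max * (v_pu - v_dead_hi) / (v_high - v_dead_hi)
    return -q_max
```

Sweeping feeder voltages through such a curve, node by node, is how the activated-voltage-support scenarios differ from the unity-power-factor baseline.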
Remillard, J.; Fridlind, Ann M.; Ackerman, A. S.; ...
2017-09-20
Here, a case study of persistent stratocumulus over the Azores is simulated using two independent large-eddy simulation (LES) models with bin microphysics, and forward-simulated cloud radar Doppler moments and spectra are compared with observations. Neither model is able to reproduce the monotonic increase of downward mean Doppler velocity with increasing reflectivity that is observed under a variety of conditions, but for differing reasons. To a varying degree, both models also exhibit a tendency to produce too many of the largest droplets, leading to excessive skewness in Doppler velocity distributions, especially below cloud base. Excessive skewness appears to be associated with an insufficiently sharp reduction in droplet number concentration at diameters larger than ~200 μm, where a pronounced shoulder is found for in situ observations and a sharp reduction in reflectivity size distribution is associated with relatively narrow observed Doppler spectra. Effectively using LES with bin microphysics to study drizzle formation and evolution in cloud Doppler radar data evidently requires reducing numerical diffusivity in the treatment of the stochastic collection equation; if that is accomplished sufficiently to reproduce typical spectra, progress toward understanding drizzle processes is likely.
NASA Astrophysics Data System (ADS)
Green, David M.; Dallaire, Joel D.; Reaper, Jerome H.
2004-08-01
The Joint Battlespace Infosphere (JBI) program is performing a technology investigation into global communications, data mining and warehousing, and data fusion technologies by focusing on techniques and methodologies that support twenty-first-century military distributed collaboration. Advancement of these technologies is vitally important if military decision makers are to have the right data, in the right format, at the right time and place to support making the right decisions within available timelines. A quantitative understanding of individual and combinational effects arising from the application of technologies within a framework is presently far too complex to evaluate at more than a cursory depth. In order to facilitate quantitative analysis under these circumstances, the Distributed Information Enterprise Modeling and Simulation (DIEMS) team was formed to apply modeling and simulation (M&S) techniques to help address JBI analysis challenges. The DIEMS team has been tasked with utilizing collaborative distributed M&S architectures to quantitatively evaluate JBI technologies and tradeoffs. This paper first presents a high-level view of the DIEMS project. Once this approach has been established, a more concentrated view of the detailed communications simulation techniques used in generating the underlying support data sets is presented.
Numerical Simulation Of Silicon-Ribbon Growth
NASA Technical Reports Server (NTRS)
Woda, Ben K.; Kuo, Chin-Po; Utku, Senol; Ray, Sujit Kumar
1987-01-01
Mathematical model, now in development, includes nonlinear effects and simulates growth of silicon ribbon from melt. Takes account of entire temperature and stress history of ribbon. Numerical simulations performed with new model help in search for temperature distribution, pulling speed, and other conditions favoring growth of wide, flat, relatively defect-free silicon ribbons for solar photovoltaic cells at economically attractive, high production rates. Also applicable to materials other than silicon.
NASA Technical Reports Server (NTRS)
Gibson, Jim; Jordan, Joe; Grant, Terry
1990-01-01
Local Area Network Extensible Simulator (LANES) computer program provides method for simulating performance of high-speed local-area-network (LAN) technology. Developed as design and analysis software tool for networking computers on board proposed Space Station. Load, network, link, and physical layers of layered network architecture all modeled. Mathematically models according to different lower-layer protocols: Fiber Distributed Data Interface (FDDI) and Star*Bus. Written in FORTRAN 77.
Simulating historical variability in the amount of old forests in the Oregon Coast Range.
M.C. Wimberly; T.M. Spies; C.J. Long; C. Whitlock
2000-01-01
We developed the landscape age-class demographics simulator (LADS) to model historical variability in the amount of old-growth and late-successional forest in the Oregon Coast Range over the past 3,000 years. The model simulated temporal and spatial patterns of forest fires along with the resulting fluctuations in the distribution of forest age classes across the...
Medium Fidelity Simulation of Oxygen Tank Venting
NASA Technical Reports Server (NTRS)
Sweet, Adam; Kurien, James; Lau, Sonie (Technical Monitor)
2001-01-01
The item to be cleared is a medium-fidelity software simulation model of a vented cryogenic tank. Such tanks are commonly used to transport cryogenic liquids such as liquid oxygen via truck, and have appeared on liquid-fueled rockets for decades. This simulation model works with the HCC simulation system that was developed by Xerox PARC and NASA Ames Research Center. HCC has been previously cleared for distribution. When used with the HCC software, the model generates simulated readings for the tank pressure and temperature as the simulated cryogenic liquid boils off and is vented. Failures (such as a broken vent valve) can be injected into the simulation to produce readings corresponding to the failure. Release of this simulation will allow researchers to test their software diagnosis systems by attempting to diagnose the simulated failure from the simulated readings. This model does not contain any encryption software nor can it perform any control tasks that might be export controlled.
Hurtado Rúa, Sandra M; Mazumdar, Madhu; Strawderman, Robert L
2015-12-30
Bayesian meta-analysis is an increasingly important component of clinical research, with multivariate meta-analysis a promising tool for studies with multiple endpoints. Model assumptions, including the choice of priors, are crucial aspects of multivariate Bayesian meta-analysis (MBMA) models. In a given model, two different prior distributions can lead to different inferences about a particular parameter. A simulation study was performed in which the impact of families of prior distributions for the covariance matrix of a multivariate normal random effects MBMA model was analyzed. Inferences about effect sizes were not particularly sensitive to prior choice, but the related covariance estimates were. A few families of prior distributions with small relative biases, tight mean squared errors, and close to nominal coverage for the effect size estimates were identified. Our results demonstrate the need for sensitivity analysis and suggest some guidelines for choosing prior distributions in this class of problems. The MBMA models proposed here are illustrated in a small meta-analysis example from the periodontal field and a medium meta-analysis from the study of stroke. Copyright © 2015 John Wiley & Sons, Ltd.
[Effects of sampling plot number on tree species distribution prediction under climate change].
Liang, Yu; He, Hong-Shi; Wu, Zhi-Wei; Li, Xiao-Na; Luo, Xu
2013-05-01
Based on neutral landscapes with different degrees of landscape fragmentation, this paper studied the effects of sampling plot number on the prediction of tree species distribution at the landscape scale under climate change. The tree species distribution was predicted by a coupled modeling approach that linked an ecosystem process model with a forest landscape model, and three contingent scenarios and one reference scenario of sampling plot numbers were assumed. The differences between the three scenarios and the reference scenario under different degrees of landscape fragmentation were tested. The results indicated that the effects of sampling plot number on the prediction of tree species distribution depended on the tree species' life-history attributes. For generalist species, predicting their distribution at the landscape scale needed more plots. Except for the extreme specialist, the degree of landscape fragmentation also affected the effects of sampling plot number on the prediction. As the simulation period increased, the effects of sampling plot number on the prediction of tree species distribution at the landscape scale could change. For generalist species, more plots are needed for long-term simulation.
Park, Yoon Soo; Lee, Young-Sun; Xing, Kuan
2016-01-01
This study investigates the impact of item parameter drift (IPD) on parameter and ability estimation when the underlying measurement model fits a mixture distribution, thereby violating the item invariance property of unidimensional item response theory (IRT) models. An empirical study was conducted to demonstrate the occurrence of both IPD and an underlying mixture distribution using real-world data. Twenty-one trended anchor items from the 1999, 2003, and 2007 administrations of Trends in International Mathematics and Science Study (TIMSS) were analyzed using unidimensional and mixture IRT models. TIMSS treats trended anchor items as invariant over testing administrations and uses pre-calibrated item parameters based on unidimensional IRT. However, empirical results showed evidence of two latent subgroups with IPD. Results also showed changes in the distribution of examinee ability between latent classes over the three administrations. A simulation study was conducted to examine the impact of IPD on the estimation of ability and item parameters, when data have underlying mixture distributions. Simulations used data generated from a mixture IRT model and estimated using unidimensional IRT. Results showed that data reflecting IPD using mixture IRT model led to IPD in the unidimensional IRT model. Changes in the distribution of examinee ability also affected item parameters. Moreover, drift with respect to item discrimination and distribution of examinee ability affected estimates of examinee ability. These findings demonstrate the need to caution and evaluate IPD using a mixture IRT framework to understand its effects on item parameters and examinee ability.
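Item parameter drift of the kind analyzed above is easy to picture with the two-parameter logistic (2PL) response function that underlies unidimensional IRT. The sketch below uses illustrative parameter values (not TIMSS estimates) to show how a drift in difficulty changes the probability of a correct response for the same examinee ability:

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: probability that an examinee of ability theta answers
    an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Item parameter drift modeled as a shift in difficulty between two
# administrations (parameter values are illustrative, not TIMSS estimates).
theta = 0.0
p_early = p_correct(theta, a=1.2, b=-0.5)   # item as originally calibrated
p_late = p_correct(theta, a=1.2, b=0.3)     # item has drifted harder
```

If pre-calibrated parameters from the early administration are reused, the model overpredicts success on the drifted item, which is exactly the mechanism by which IPD biases ability estimates.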
Manual for the Jet Event and Background Simulation Library (JEBSimLib)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinz, Matthias; Soltz, Ron; Angerami, Aaron
Jets are the collimated streams of particles resulting from hard scattering in the initial state of high-energy collisions. In heavy-ion collisions, jets interact with the quark-gluon plasma (QGP) before freezeout, providing a probe into the internal structure and properties of the QGP. In order to study jets, background must be subtracted from the measured event, potentially introducing a bias. We aim to understand and quantify this subtraction bias. PYTHIA, a library to simulate pure jet events, is used to simulate a model for a signature with one pure jet (a photon) and one quenched jet, where all quenched particle momenta are reduced by a user-defined constant fraction. Background for the event is simulated using multiplicity values generated by the TRENTO initial-state model of heavy-ion collisions fed into a thermal model consisting of a 3-dimensional Boltzmann distribution for particle types and momenta. Data from the simulated events are used to train a statistical model, which computes a posterior distribution of the quench factor for a data set. The model was tested first on pure jet events and then on full events including the background. This model will allow for a quantitative determination of biases induced by various methods of background subtraction.
Xue, Lianqing; Yang, Fan; Yang, Changbing; Wei, Guanghui; Li, Wenqian; He, Xinlin
2018-01-11
Understanding the mechanisms of complicated hydrological processes is important for sustainable management of water resources in an arid area. This paper simulates water movement in the Manas River Basin (MRB) using an improved semi-distributed topographic hydrologic model (TOPMODEL) with a snowmelt model and a topographic index algorithm. A new algorithm is proposed to calculate the topographic index curve using an internal tangent circle on a conical surface. Building on the traditional model, an improved temperature index that accounts for solar radiation is used to calculate the amount of snowmelt. The uncertainty of the TOPMODEL parameters was analyzed using the generalized likelihood uncertainty estimation (GLUE) method. The proposed model shows that the distribution of the topographic index is concentrated in the high mountains, and that the accuracy of the runoff simulation improves when radiation is considered. Our results revealed that the performance of the improved TOPMODEL is acceptable for runoff simulation in the MRB. The uncertainty of the simulations stems from model parameters and structure as well as from climatic and anthropogenic factors. This study is expected to serve as a valuable complement to the wider application of TOPMODEL and to identifying the mechanisms of hydrological processes in arid areas.
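The topographic index at the heart of TOPMODEL is the wetness measure ln(a / tan β). As a minimal sketch (with illustrative cell values, not MRB data, and without the tangent-circle refinement proposed in the paper), it can be computed per grid cell as:

```python
import numpy as np

def topographic_index(upslope_area, slope_deg):
    """TOPMODEL wetness index ln(a / tan(beta)), where a is the upslope
    contributing area per unit contour length and beta the local slope."""
    return np.log(upslope_area / np.tan(np.radians(slope_deg)))

# Flat valley-bottom cells accumulate large index values (wet); steep
# ridge cells get small ones (dry). Values here are illustrative.
valley = topographic_index(upslope_area=5000.0, slope_deg=2.0)
ridge = topographic_index(upslope_area=50.0, slope_deg=30.0)
```

Cells sharing the same index value are assumed hydrologically similar, which is what makes the semi-distributed formulation computationally cheap.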
An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures
NASA Astrophysics Data System (ADS)
Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.
2009-07-01
A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.
NASA Astrophysics Data System (ADS)
Shellito, Cindy J.; Sloan, Lisa C.
2006-02-01
This study utilizes the NCAR Land Surface Model (LSM1.2) integrated with dynamic global vegetation to recreate the early Paleogene global distribution of vegetation and to examine the response of the vegetation distribution to changes in climate at the Paleocene-Eocene boundary (~55 Ma). We run two simulations with Eocene geography driven by climatologies generated in two atmosphere global modeling experiments: one with atmospheric pCO2 at 560 ppm, and another at 1120 ppm. In both scenarios, the model produces the best match with fossil flora in the low latitudes. A comparison of model output from the two scenarios suggests that the greatest impact of climate on vegetation will occur in the high latitudes, in the Arctic Circle and in Antarctica. In these regions, greater accumulated summertime warmth in the 1120 ppm simulation allows temperate plant functional types to expand further poleward. Additionally, the high pCO2 scenario produces a greater abundance of trees over grass at these high latitudes. In the middle and low latitudes, the general distribution of plant functional types is similar in both pCO2 scenarios. Likely, a greater increment of greenhouse gases is necessary to produce the type of change evident in the mid-latitude paleobotanical record. Overall, differences between model output and fossil flora are greatest at high latitudes.
Attitude Estimation for Unresolved Agile Space Objects with Shape Model Uncertainty
2012-09-01
Simulated lightcurve data using the Cook-Torrance [8] Bidirectional Reflectivity Distribution Function (BRDF) model was first applied in a batch estimation framework to ellipsoidal SO models in geostationary orbits [9]. The Ashikhmin-Shirley [10] BRDF has also been used to study estimation of specular...non-convex 300 facet model and simulated lightcurves using a combination of Lambertian and Cook-Torrance (specular) BRDF models with an Unscented
[Simulation and data analysis of stereological modeling based on virtual slices].
Wang, Hao; Shen, Hong; Bai, Xiao-yan
2008-05-01
To establish a computer-assisted stereological model for simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was formulated mathematically and implemented as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate an effectively unlimited number of sections and to analyze the data derived from the model. The linearity of the model fit was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters showed very high rates (>94.5% and 92%) in homogeneity and independence tests. The density, shape, and size data of the sections conformed to a normal distribution. The output of the model and that from the image analysis system showed statistical correlation and consistency. The algorithm described here can be used for evaluating the stereological parameters of the structure of tissue slices.
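The core of such a virtual-slice simulation is the classic corpuscle problem: a random plane cutting a sphere of radius R at distance d from its centre produces a circular profile of radius sqrt(R² − d²). A minimal Monte Carlo sketch of this step (not the paper's Win32 implementation) is:

```python
import math
import random

def section_radii(sphere_r, n, rng):
    """Profile radii of random planar sections through equal spheres
    (the corpuscle problem; only planes that intersect are sampled)."""
    out = []
    for _ in range(n):
        d = rng.uniform(0.0, sphere_r)            # plane distance from centre
        out.append(math.sqrt(sphere_r ** 2 - d ** 2))
    return out

rng = random.Random(1)
radii = section_radii(1.0, 20000, rng)
mean_profile = sum(radii) / len(radii)            # analytic value: pi/4
```

The observed profile-size distribution is thus systematically smaller than the true particle-size distribution, which is exactly the bias a stereological model must correct for.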
IS THE SIZE DISTRIBUTION OF URBAN AEROSOLS DETERMINED BY THERMODYNAMIC EQUILIBRIUM? (R826371C005)
A size-resolved equilibrium model, SELIQUID, is presented and used to simulate the size–composition distribution of semi-volatile inorganic aerosol in an urban environment. The model uses the efflorescence branch of aerosol behavior to predict the equilibrium partitioni...
DOT National Transportation Integrated Search
2014-06-01
In June 2012, the Environmental Protection Agency (EPA) released the Operating Mode Distribution Generator (OMDG), a tool for developing an operating mode distribution as an input to the Motor Vehicle Emissions Simulator model (MOVES). The t...
The paper discusses the simulation of the effects of changes to particle loading, particle size distribution, and electrostatic precipitator (ESP) operating temperatures using ESP models. It also illustrates the usefulness of modern ESP models for this type of analysis. Increasin...
Using WNTR to Model Water Distribution System Resilience
The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate resilience of water distribution systems. WNTR can be used to simulate a wide range of di...
Lumped versus distributed thermoregulatory control: results from a three-dimensional dynamic model.
Werner, J; Buse, M; Foegen, A
1989-01-01
In this study we use a three-dimensional model of the human thermal system, the spatial grid of which is 0.5 ... 1.0 cm. The model is based on well-known physical heat-transfer equations, and all parameters of the passive system have definite physical values. According to the number of substantially different areas and organs, 54 spatially different values are attributed to each physical parameter. Compatibility of simulation and experiment was achieved solely on the basis of physical considerations and physiological basic data. The equations were solved using a modification of the alternating direction implicit method. On the basis of this complex description of the passive system close to reality, various lumped and distributed parameter control equations were tested for control of metabolic heat production, blood flow and sweat production. The simplest control equations delivering results on closed-loop control compatible with experimental evidence were determined. It was concluded that it is essential to take into account the spatial distribution of heat production, blood flow and sweat production, and that at least for control of shivering, distributed controller gains different from the pattern of distribution of muscle tissue are required. For sweat production this is not so obvious, so that for simulation of sweating control after homogeneous heat load a lumped parameter control may be justified. Based on these conclusions three-dimensional temperature profiles for cold and heat load and the dynamics for changes of the environmental conditions were computed. In view of the exact simulation of the passive system and the compatibility with experimentally attainable variables there is good evidence that those values extrapolated by the simulation are adequately determined. The model may be used both for further analysis of the real thermoregulatory mechanisms and for special applications in environmental and clinical health care.
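A lumped-parameter control equation of the kind tested in this study drives an effector (here shivering heat production) from deviations of core and mean skin temperature relative to their set points. The sketch below is illustrative only; the gains and set points are assumptions, not the values identified in the paper:

```python
def shivering_rate(t_core, t_skin, t_core_set=36.8, t_skin_set=34.0,
                   g_core=50.0, g_skin=5.0):
    """Lumped-parameter control sketch: metabolic heat production (W)
    driven by cold deviations of core and mean skin temperature from
    their set points. Gains and set points are illustrative."""
    error_core = max(t_core_set - t_core, 0.0)
    error_skin = max(t_skin_set - t_skin, 0.0)
    return g_core * error_core + g_skin * error_skin

at_set_point = shivering_rate(36.8, 34.0)   # no cold error, no shivering
cold_stress = shivering_rate(36.3, 33.0)    # both errors contribute
```

The paper's finding is that such a single lumped equation suffices for sweating after homogeneous heat load, whereas shivering requires distributed controller gains that do not simply follow the distribution of muscle tissue.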
Jämbeck, Joakim P M; Eriksson, Emma S E; Laaksonen, Aatto; Lyubartsev, Alexander P; Eriksson, Leif A
2014-01-14
Liposomes are proposed as drug delivery systems and can in principle be designed so as to cohere with specific tissue types or local environments. However, little detail is known about the exact mechanisms for drug delivery and the distributions of drug molecules inside the lipid carrier. In the current work, a coarse-grained (CG) liposome model is developed, consisting of over 2500 lipids, with varying degrees of drug loading. For the drug molecule, we chose hypericin, a natural compound proposed for use in photodynamic therapy, for which a CG model was derived and benchmarked against corresponding atomistic membrane bilayer model simulations. Liposomes with 21-84 hypericin molecules were generated and subjected to 10 microsecond simulations. Distribution of the hypericins, their orientations within the lipid bilayer, and the potential of mean force for transferring a hypericin molecule from the interior aqueous "droplet" through the liposome bilayer are reported herein.
Determination of the oil distribution in a hermetic compressor using numerical simulation
NASA Astrophysics Data System (ADS)
Posch, S.; Hopfgartner, J.; Berger, E.; Zuber, B.; Almbauer, R.; Schöllauf, P.
2017-08-01
In addition to reducing friction, the oil in a hermetic compressor is very important for transferring heat from hot parts to the compressor shell. The simulation of the oil distribution in a hermetic reciprocating compressor for refrigeration applications is shown in the present work. Using the commercial Computational Fluid Dynamics (CFD) software ANSYS Fluent, the oil flow inside the compressor shell from the oil pump outlet to the oil sump is calculated. A comprehensive overview of the models used and the boundary conditions is given. After steady-state conditions are reached, the oil-covered surfaces are analysed with respect to heat transfer coefficients. The resulting heat transfer coefficients are used as input parameters for a thermal model of a hermetic compressor. Model validation with experimental data shows that the thermal model is more accurate with the simulated heat transfer coefficients than with values from the literature.
A Numerical Model Study of Nocturnal Drainage Flows with Strong Wind and Temperature Gradients.
NASA Astrophysics Data System (ADS)
Yamada, T.; Bunker, S.
1989-07-01
A second-moment turbulence-closure model described in Yamada and Bunker is used to simulate nocturnal drainage flows observed during the 1984 ASCOT field expedition in Brush Creek, Colorado. In order to simulate the observed strong wind directional shear and temperature gradients, two modifications are added to the model. The strong wind directional shear was maintained by introducing a `nudging' term in the equation of motion to guide the modeled winds in the layers above the ridge top toward the observed wind direction. The second modification was accomplished by reformulating the conservation equation for the potential temperature in such a way that only the deviation from the horizontally averaged value was prognostically computed.The vegetation distribution used in this study is undoubtedly crude. Nevertheless, the present simulation suggests that tall tree canopy can play an important role in producing inhomogeneous wind distribution, particularly in the levels below the canopy top.
NASA Astrophysics Data System (ADS)
Cowton, L. R.; Neufeld, J. A.; Bickle, M.; White, N.; White, J.; Chadwick, A.
2017-12-01
Vertically-integrated gravity current models enable computationally efficient simulations of CO2 flow in sub-surface reservoirs. These simulations can be used to investigate the properties of reservoirs by minimizing differences between observed and modeled CO2 distributions. At the Sleipner project, about 1 Mt yr-1 of supercritical CO2 is injected at a depth of 1 km into a pristine saline aquifer with a thick shale caprock. Analysis of time-lapse seismic reflection surveys shows that CO2 is distributed within 9 discrete layers. The trapping mechanism comprises a stacked series of 1 m thick, impermeable shale horizons that are spaced at 30 m intervals through the reservoir. Within the stratigraphically highest reservoir layer, Layer 9, a submarine channel deposit has been mapped on the pre-injection seismic survey. Detailed measurements of the three-dimensional CO2 distribution within Layer 9 have been made using seven time-lapse surveys, providing a useful benchmark against which numerical flow simulations can be tested. Previous simulations have, in general, been largely unsuccessful in matching the migration rate of CO2 in this layer. Here, CO2 flow within Layer 9 is modeled as a vertically-integrated gravity current that spreads beneath a structurally complex caprock using a two-dimensional grid, considerably increasing computational efficiency compared to conventional three-dimensional simulators. This flow model is inverted to find the optimal reservoir permeability in Layer 9 by minimizing the difference between observed and predicted distributions of CO2 as a function of space and time. A three parameter inverse model, comprising reservoir permeability, channel permeability and channel width, is investigated by grid search. The best-fitting reservoir permeability is 3 Darcys, which is consistent with measurements made on core material from the reservoir. Best-fitting channel permeability is 26 Darcys. 
Finally, the ability of this simplified numerical model to forecast CO2 flow within Layer 9 is tested. Permeability recovered by modeling a suite of early seismic surveys is used to predict the CO2 distribution for a suite of later seismic surveys with a considerable degree of success. Forecasts have also been carried out that can be tested using future seismic surveys.
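The inversion strategy described above, minimizing the misfit between observed and modeled CO2 distributions over a grid of trial permeabilities, can be sketched with a deliberately simplified one-parameter toy forward model (a stand-in for the vertically integrated gravity-current simulation, not the authors' code):

```python
import numpy as np

def forward_extent(perm, t):
    """Toy forward model: current extent grows as sqrt(perm * t).
    A stand-in for the vertically integrated CO2 flow simulation."""
    return np.sqrt(perm * t)

t_obs = np.array([1.0, 2.0, 4.0])
extent_obs = forward_extent(3.0, t_obs)          # synthetic "observations"

perms = np.linspace(0.5, 6.0, 56)                # grid of trial permeabilities
misfit = [np.sum((forward_extent(k, t_obs) - extent_obs) ** 2) for k in perms]
best = float(perms[int(np.argmin(misfit))])      # recovers the true value, 3.0
```

The paper's three-parameter search (reservoir permeability, channel permeability, channel width) follows the same pattern, just over a three-dimensional grid of candidate models.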
Lipid droplets fusion in adipocyte differentiated 3T3-L1 cells: A Monte Carlo simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boschi, Federico, E-mail: federico.boschi@univr.it; Department of Computer Science, University of Verona, Strada Le Grazie 15, 37134 Verona; Rizzatti, Vanni
Several widespread human diseases, such as obesity, type 2 diabetes, hepatic steatosis, atherosclerosis and other metabolic pathologies, are related to the excessive accumulation of lipids in cells. Lipids accumulate in spherical cellular inclusions called lipid droplets (LDs), whose sizes in adipocytes range from a fraction of a micrometer to one hundred micrometers. It has been suggested that LDs can grow in size through a fusion process by which a larger LD is obtained with spherical shape and volume equal to the sum of the progenitors'. In this study, the size distribution of two populations of LDs was analyzed in immature and mature (5-day differentiated) 3T3-L1 adipocytes (first and second populations, respectively) after Oil Red O staining. A Monte Carlo simulation of interaction between LDs has been developed in order to quantify the size distribution and the number of fusion events needed to obtain the second population's size distribution starting from the first. Four models are presented here based on different kinds of interaction: a surface-weighted interaction (R2 Model), a volume-weighted interaction (R3 Model), a random interaction (Random Model) and an interaction related to the place where the LDs are born (Nearest Model). The last two models mimic quite well the behavior found in the experimental data. This work represents a first step in developing numerical simulations of the LD growth process. Due to the complex phenomena involving LDs (absorption, growth through additional neutral lipid deposition in existing droplets, de novo formation and catabolism), the study focuses on the fusion process. The results suggest that, to obtain the observed size distribution, a number of fusion events comparable with the number of LDs themselves is needed. Moreover, the MC approach proves a powerful tool for investigating the LD growth process. Highlights:
• We evaluated the role of the fusion process in the synthesis of the lipid droplets.
• We compared the size distribution of the lipid droplets in immature and mature cells.
• We used the Monte Carlo simulation approach, simulating 10 thousand fusion events.
• Four different interaction models between the lipid droplets were tested.
• The best model which mimics the experimental measures was selected.
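The elementary step shared by all four interaction models is a volume-conserving merge: the fused droplet's volume equals the sum of the progenitors'. The sketch below implements the simplest variant (uniform partner choice, corresponding to the Random Model; the weighting schemes of the other models are omitted):

```python
import random

def fuse_once(radii, rng):
    """One fusion event: two randomly chosen droplets merge into one
    whose volume is the sum of the two (the 'Random Model' choice rule)."""
    i, j = rng.sample(range(len(radii)), 2)        # two distinct droplets
    merged = (radii[i] ** 3 + radii[j] ** 3) ** (1.0 / 3.0)
    radii = [r for k, r in enumerate(radii) if k not in (i, j)]
    radii.append(merged)
    return radii

rng = random.Random(42)
drops = [1.0] * 100                 # idealized immature population
for _ in range(50):                 # number of fusion events
    drops = fuse_once(drops, rng)

total_volume = sum(r ** 3 for r in drops)   # conserved by construction
```

Repeating many such events and comparing the resulting radius distribution with the mature-cell measurements is what lets the study estimate how many fusions separate the two populations.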
The development of a simulation model of primary prevention strategies for coronary heart disease.
Babad, Hannah; Sanderson, Colin; Naidoo, Bhash; White, Ian; Wang, Duolao
2002-11-01
This paper describes the present state of development of a discrete-event micro-simulation model for coronary heart disease prevention. The model is intended to support health policy makers in assessing the impacts on health care resources of different primary prevention strategies. For each person, a set of times to disease events, conditional on the individual's risk factor profile, is sampled from a set of probability distributions that are derived from a new analysis of the Framingham cohort study on coronary heart disease. Methods used to model changes in behavioural and physiological risk factors are discussed and a description of the simulation logic is given. The model incorporates POST (Patient Oriented Simulation Technique) simulation routines.
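In a discrete-event micro-simulation of this kind, each simulated person receives sampled times to competing events, and the earliest sampled time determines what happens next. The sketch below uses exponential waiting times with made-up hazards as a stand-in for the Framingham-derived risk equations:

```python
import random

def time_to_event(hazard, rng):
    """Sample a waiting time (years) from an exponential distribution;
    a stand-in for the Framingham-derived, risk-conditional distributions."""
    return rng.expovariate(hazard)

rng = random.Random(7)
# Competing events for one simulated person (hazards per year, illustrative):
events = {"MI": 0.010, "stroke": 0.006, "other_death": 0.012}
times = {name: time_to_event(h, rng) for name, h in events.items()}
first_event = min(times, key=times.get)   # the earliest-scheduled event occurs
```

Re-sampling after risk-factor changes (e.g. a prevention strategy lowering a hazard) is how such a model propagates interventions into downstream resource use.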
NASA Technical Reports Server (NTRS)
Yeh, Hwa-Young M.; Prasad, N.; Mack, Robert A.; Adler, Robert F.
1990-01-01
In this June 29, 1986 case study, a radiative transfer model is used to simulate the aircraft multichannel microwave brightness temperatures presented in the Adler et al. (1990) paper and to study the convective storm structure. Ground-based radar data are used to derive hydrometeor profiles of the storm, from which the microwave upwelling brightness temperatures are calculated. Various vertical hydrometeor phase profiles and the Marshall and Palmer (M-P, 1948) and Sekhon and Srivastava (S-S, 1970) ice particle size distributions are tested in the model. The results are compared with the aircraft radiometric data. The comparison reveals that the M-P distribution represents the ice particle size distribution well, especially in the upper tropospheric portion of the cloud; the S-S distribution appears to better simulate the ice particle size at the lower portion of the cloud, which has a greater effect on the low-frequency microwave upwelling brightness temperatures; and that, in deep convective regions, significant supercooled liquid water (about 0.5 g/cu m) may be present up to the -30 C layer, while in less convective areas, frozen hydrometeors are predominant above the -10 C level.
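The Marshall-Palmer form referenced here is the exponential size distribution N(D) = N0 exp(−ΛD), originally fit to raindrops, with N0 = 8000 m⁻³ mm⁻¹ and slope Λ = 4.1 R⁻⁰·²¹ mm⁻¹ for rain rate R in mm/h. A minimal sketch (illustrative inputs, not this study's retrieved profiles):

```python
import math

def marshall_palmer(d_mm, rain_rate_mm_h):
    """Marshall-Palmer (1948) size distribution N(D) = N0 * exp(-lam * D),
    with N0 = 8000 m^-3 mm^-1 and lam = 4.1 * R**-0.21 mm^-1 (R in mm/h)."""
    n0 = 8000.0
    lam = 4.1 * rain_rate_mm_h ** -0.21
    return n0 * math.exp(-lam * d_mm)

# Larger particles are exponentially rarer; heavier rain flattens the slope.
n_small = marshall_palmer(0.5, rain_rate_mm_h=5.0)
n_large = marshall_palmer(3.0, rain_rate_mm_h=5.0)
```

Swapping this slope-intercept pair for the Sekhon-Srivastava parameters is what changes the simulated brightness temperatures in the lower, large-particle portion of the cloud.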
NASA Astrophysics Data System (ADS)
Fairchild, A. J.; Chirayath, V. A.; Gladen, R. W.; Chrysler, M. D.; Koymen, A. R.; Weiss, A. H.
2017-01-01
In this paper, we present results of numerical modelling of the University of Texas at Arlington's time of flight positron annihilation induced Auger electron spectrometer (UTA TOF-PAES) using the SIMION® 8.1 Ion and Electron Optics Simulator. The time of flight (TOF) spectrometer measures the energy of electrons emitted from the surface of a sample as a result of the interaction of low energy positrons with the sample surface. We have used SIMION® 8.1 to calculate the time-of-flight spectra of electrons leaving the sample surface with energies and angles dispersed according to distribution functions chosen to model the positron-induced electron emission process, and have thus obtained an estimate of the true electron energy distribution. The simulated TOF distribution was convolved with a Gaussian timing resolution function and compared to the experimental distribution. The broadening observed in the simulated TOF spectra was found to be consistent with that observed in the experimental secondary electron spectra of Cu generated by positrons incident with energies from 1.5 eV to 901 eV, when a timing resolution of 2.3 ns was assumed.
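The final step described above, convolving a simulated TOF spectrum with a Gaussian timing-response function, can be sketched as follows (the binning and the sharp test spectrum are illustrative; only the 2.3 ns FWHM comes from the paper):

```python
import numpy as np

def gaussian_kernel(dt, fwhm):
    """Discrete, normalized Gaussian timing-response kernel (FWHM in ns)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
    half = int(np.ceil(4.0 * sigma / dt))
    t = np.arange(-half, half + 1) * dt                 # symmetric support
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

dt = 0.1                                   # ns per time bin (assumed)
tof = np.zeros(400)
tof[100] = 1.0                             # idealized sharp TOF feature
broadened = np.convolve(tof, gaussian_kernel(dt, fwhm=2.3), mode="same")
```

Because the kernel is normalized and symmetric, the convolution preserves total counts and peak position while spreading each feature over the instrument's timing resolution.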
USDA-ARS?s Scientific Manuscript database
Hydrological interaction between surface and subsurface water systems has a significant impact on water quality, ecosystems and biogeochemistry cycling of both systems. Distributed models have been developed to simulate this function, but they require detailed spatial inputs and extensive computati...
Factors affecting species distribution predictions: A simulation modeling experiment
Gordon C. Reese; Kenneth R. Wilson; Jennifer A. Hoeting; Curtis H. Flather
2005-01-01
Geospatial species sample data (e.g., records with location information from natural history museums or annual surveys) are rarely collected optimally, yet are increasingly used for decisions concerning our biological heritage. Using computer simulations, we examined factors that could affect the performance of autologistic regression (ALR) models that predict species...
The adaptation of the Community Multiscale Air Quality (CMAQ) modeling system to simulate O3, particulate matter, and related precursor distributions over the northern hemisphere is presented. Hemispheric simulations with CMAQ and the Weather Research and Forecasting (...
Global Atmospheric Aerosol Modeling
NASA Technical Reports Server (NTRS)
Hendricks, Johannes; Aquila, Valentina; Righi, Mattia
2012-01-01
Global aerosol models are used to study the distribution and properties of atmospheric aerosol particles as well as their effects on clouds, atmospheric chemistry, radiation, and climate. The present article provides an overview of the basic concepts of global atmospheric aerosol modeling and shows some examples from a global aerosol simulation. Particular emphasis is placed on the simulation of aerosol particles and their effects within global climate models.