Sample records for model grid size

  1. Predicting grid-size-dependent fracture strains of DP980 with a microstructure-based post-necking model

    DOE PAGES

    Cheng, G.; Hu, X. H.; Choi, K. S.; ...

    2017-07-08

    Ductile fracture is a local phenomenon, and it is well established that fracture strain levels depend on both stress triaxiality and the resolution (grid size) of strain measurements. Two-dimensional plane strain post-necking models with different model sizes are used in this paper to predict the grid-size-dependent fracture strain of a commercial dual-phase steel, DP980. The models are generated from the actual microstructures, and the individual phase flow properties and literature-based individual phase damage parameters for the Johnson–Cook model are used for ferrite and martensite. A monotonic relationship is predicted: the smaller the model size, the higher the fracture strain. Thus, a general framework is developed to quantify the grid-size-dependent fracture strains for multiphase materials. In addition to the grid-size dependency, the influences of intrinsic microstructure features, i.e., the flow curve and fracture strains of the two constituent phases, on the predicted fracture strains are also examined. Finally, application of the derived fracture strain versus model size relationship is demonstrated with large clearance trimming simulations with different element sizes.
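
    The Johnson–Cook fracture locus referenced in this abstract can be sketched numerically. The damage parameters below are purely illustrative placeholders, not the literature values used in the paper for DP980's ferrite and martensite phases:

```python
import math

def jc_fracture_strain(eta, d1, d2, d3):
    """Quasi-static, isothermal Johnson-Cook fracture locus:
    eps_f = D1 + D2 * exp(D3 * eta), where eta is the stress triaxiality."""
    return d1 + d2 * math.exp(d3 * eta)

# Hypothetical damage parameters for illustration only (not DP980 fits)
ferrite = dict(d1=0.10, d2=0.80, d3=-1.5)

for eta in (1.0 / 3.0, 0.577, 1.0):   # uniaxial tension, plane strain, high triaxiality
    print(f"eta = {eta:.3f}  ->  fracture strain = {jc_fracture_strain(eta, **ferrite):.3f}")
```

    With D3 < 0 the locus reproduces the usual trend of fracture strain decreasing as triaxiality increases; a study like the one above evaluates such a locus element by element in the post-necking model.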

  2. Predicting grid-size-dependent fracture strains of DP980 with a microstructure-based post-necking model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, G.; Hu, X. H.; Choi, K. S.

    Ductile fracture is a local phenomenon, and it is well established that fracture strain levels depend on both stress triaxiality and the resolution (grid size) of strain measurements. Two-dimensional plane strain post-necking models with different model sizes are used in this paper to predict the grid-size-dependent fracture strain of a commercial dual-phase steel, DP980. The models are generated from the actual microstructures, and the individual phase flow properties and literature-based individual phase damage parameters for the Johnson–Cook model are used for ferrite and martensite. A monotonic relationship is predicted: the smaller the model size, the higher the fracture strain. Thus, a general framework is developed to quantify the grid-size-dependent fracture strains for multiphase materials. In addition to the grid-size dependency, the influences of intrinsic microstructure features, i.e., the flow curve and fracture strains of the two constituent phases, on the predicted fracture strains are also examined. Finally, application of the derived fracture strain versus model size relationship is demonstrated with large clearance trimming simulations with different element sizes.

  3. Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids

    NASA Astrophysics Data System (ADS)

    Abbà, Antonella; Campaniello, Dario; Nini, Michele

    2017-06-01

    The definition of the characteristic filter size to be used for subgrid-scale models in large eddy simulation on irregular grids is still an open problem. We investigate several different approaches to defining the filter length for anisotropic subgrid-scale models and propose a tensorial formulation based on the inertial ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropy of the grid is explicitly taken into account with the tensorial filter size.
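
    The tensorial filter-size idea can be illustrated with a small sketch: build the second-moment tensor of a hexahedral cell from its vertices and read directional filter widths off its eigenvalues. This is a simplified stand-in for the paper's inertial-ellipsoid formulation, not its exact definition:

```python
import numpy as np

def anisotropic_filter_sizes(vertices):
    # vertices: (8, 3) corner coordinates of a hexahedral cell
    c = vertices.mean(axis=0)
    x = vertices - c
    m = (x[:, :, None] * x[:, None, :]).mean(axis=0)   # second-moment tensor
    evals, evecs = np.linalg.eigh(m)                   # ascending eigenvalues
    # principal half-axes -> directional filter widths along the eigenvectors
    return 2.0 * np.sqrt(evals), evecs

# axis-aligned 2 x 1 x 4 cell: the recovered widths are its edge lengths
box = np.array([[sx * 1.0, sy * 0.5, sz * 2.0]
                for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
widths, _ = anisotropic_filter_sizes(box)
print(np.sort(widths))   # -> [1. 2. 4.]
```

    For a rotated cell the eigenvectors recover the principal directions, which is what an anisotropic subgrid model needs from the grid geometry.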

  4. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influences of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper developed a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid-size and variable time-step. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
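
    A transition between regions of different grid-size needs difference stencils that tolerate unequal spacing. A minimal sketch of the standard three-point second-derivative stencil on a nonuniform grid (not the paper's staggered poroelastic scheme):

```python
def d2_nonuniform(u_left, u_mid, u_right, h1, h2):
    # Second derivative at x_i on a nonuniform grid, where
    # h1 = x_i - x_{i-1} and h2 = x_{i+1} - x_i.
    return 2.0 * (u_left / (h1 * (h1 + h2))
                  - u_mid / (h1 * h2)
                  + u_right / (h2 * (h1 + h2)))

# exact for quadratics: u = x^2 has u'' = 2 everywhere
x, h1, h2 = 1.0, 0.1, 0.3
print(d2_nonuniform((x - h1) ** 2, x ** 2, (x + h2) ** 2, h1, h2))  # ~2 up to rounding
```

    On smooth fields the stencil is first-order accurate where h1 != h2, which is why schemes like the one above add interpolation layers at grid-size transitions.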

  5. Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2010-01-01

    This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed; instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differed by as much as 122% as the grid was refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite-difference or finite-element methods cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
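
    For reference, the conventional streambed conductance the paper revisits is usually written C = K·L·W/b, with leakage Q = C·(h_stream − h_aquifer). A toy sketch with hypothetical property values shows that refinement alone leaves the total conductance unchanged, so the grid dependence reported above must come from how refined grids resolve head gradients:

```python
def conductance(k_bed, reach_length, width, bed_thickness):
    # Cauchy-type streambed conductance, C = K * L * W / b
    return k_bed * reach_length * width / bed_thickness

def leakage(c, h_stream, h_aquifer):
    # Flow across the streambed for a given head difference
    return c * (h_stream - h_aquifer)

# one 100 m reach vs. the same reach split over four 25 m cells
coarse_c = conductance(0.5, 100.0, 10.0, 1.0)
fine_c = sum(conductance(0.5, 25.0, 10.0, 1.0) for _ in range(4))
print(coarse_c, fine_c)               # identical totals: 500.0 500.0
print(leakage(coarse_c, 12.0, 11.5))  # 250.0
```

    Because the formula scales linearly with reach length, calibrated conductances that differ across resolutions (as found above) signal a genuine grid artifact rather than a bookkeeping error.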

  6. Research on Grid Size Suitability of Gridded Population Distribution in Urban Area: A Case Study in Urban Area of Xuanzhou District, China.

    PubMed

    Dong, Nan; Yang, Xiaohuan; Cai, Hongyan; Xu, Fengjiao

    2017-01-01

    Research on grid size suitability is important for improving the accuracy of gridded population distributions, as it helps reveal the actual spatial distribution of population. However, little research has been done in this area to date. Many well-modeled gridded population datasets are built at a single grid scale, and if the grid cell size is not appropriate, the result is spatial information loss or data redundancy. Therefore, in order to capture the desired spatial variation of population within the area of interest, it is necessary to study grid size suitability. This study summarized three expression levels for analyzing grid size suitability: the location expression level, the numeric information expression level, and the spatial relationship expression level. It also elaborated the reasons for choosing five indexes to explore expression suitability: consistency measure, shape index rate, standard deviation of population density, patch diversity index, and average local variance. The suitable grid size was determined by constructing grid size-indicator value curves and a suitable grid size scheme. Results revealed that all three expression levels are satisfied at the 10 m grid scale, and that population distribution raster data with a 10 m grid size provide excellent accuracy without information loss. The 10 m grid size is therefore recommended as the appropriate scale for generating a high-quality gridded population distribution in our study area. This preliminary study indicates that the five indexes are mutually consistent and are reasonable and effective for assessing grid size suitability. We also suggest choosing these five indexes across the three expression levels when carrying out research on grid size suitability of gridded population distribution.

  7. Research on Grid Size Suitability of Gridded Population Distribution in Urban Area: A Case Study in Urban Area of Xuanzhou District, China

    PubMed Central

    Dong, Nan; Yang, Xiaohuan; Cai, Hongyan; Xu, Fengjiao

    2017-01-01

    Research on grid size suitability is important for improving the accuracy of gridded population distributions, as it helps reveal the actual spatial distribution of population. However, little research has been done in this area to date. Many well-modeled gridded population datasets are built at a single grid scale, and if the grid cell size is not appropriate, the result is spatial information loss or data redundancy. Therefore, in order to capture the desired spatial variation of population within the area of interest, it is necessary to study grid size suitability. This study summarized three expression levels for analyzing grid size suitability: the location expression level, the numeric information expression level, and the spatial relationship expression level. It also elaborated the reasons for choosing five indexes to explore expression suitability: consistency measure, shape index rate, standard deviation of population density, patch diversity index, and average local variance. The suitable grid size was determined by constructing grid size-indicator value curves and a suitable grid size scheme. Results revealed that all three expression levels are satisfied at the 10 m grid scale, and that population distribution raster data with a 10 m grid size provide excellent accuracy without information loss. The 10 m grid size is therefore recommended as the appropriate scale for generating a high-quality gridded population distribution in our study area. This preliminary study indicates that the five indexes are mutually consistent and are reasonable and effective for assessing grid size suitability. We also suggest choosing these five indexes across the three expression levels when carrying out research on grid size suitability of gridded population distribution. PMID:28122050

  8. Convergence experiments with a hydrodynamic model of Port Royal Sound, South Carolina

    USGS Publications Warehouse

    Lee, J.K.; Schaffranek, R.W.; Baltzer, R.A.

    1989-01-01

    A two-dimensional, depth-averaged, finite-difference flow/transport model, SIM2D, is being used to simulate tidal circulation and transport in the Port Royal Sound, South Carolina, estuarine system. Models of a subregion of the Port Royal Sound system have been derived from an earlier-developed model of the entire system having a grid size of 600 ft. The submodels were implemented with grid sizes of 600, 300, and 150 ft in order to determine the effects of changes in grid size on computed flows in the subregion, which is characterized by narrow channels and extensive tidal flats that flood and dewater with each rise and fall of the tide. Tidal amplitudes changed by less than 5 percent as the grid size was decreased. Simulations were performed with the 300-foot submodel for time steps of 60, 30, and 15 s. Study results are discussed.

  9. Impact of dose size in single fraction spatially fractionated (grid) radiotherapy for melanoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu, E-mail: hualinzhang@yahoo.com; Zhong, Hualiang; Barth, Rolf F.

    2014-02-15

    Purpose: To evaluate the impact of dose size in single fraction, spatially fractionated (grid) radiotherapy for selectively killing infiltrated melanoma cancer cells of different tumor sizes, using different radiobiological models. Methods: A Monte Carlo technique was employed to calculate the 3D dose distribution of a commercially available megavoltage grid collimator in a 6 MV beam. The linear-quadratic (LQ) and modified linear quadratic (MLQ) models were used separately to evaluate the therapeutic outcome of a series of single fraction regimens that employed grid therapy to treat both acute and late responding melanomas of varying sizes. The dose prescription point was at the center of the tumor volume. Dose sizes ranging from 1 to 30 Gy at 100% dose line were modeled. Tumors were either touching the skin surface or having their centers at a depth of 3 cm. The equivalent uniform dose (EUD) to the melanoma cells and the therapeutic ratio (TR) were defined by comparing grid therapy with the traditional open debulking field. The clinical outcomes from recent reports were used to verify the authors’ model. Results: Dose profiles at different depths and 3D dose distributions in a series of 3D melanomas treated with grid therapy were obtained. The EUDs and TRs for all sizes of 3D tumors involved at different doses were derived through the LQ and MLQ models, and a practical equation was derived. The EUD was only one fifth of the prescribed dose. The TR was dependent on the prescribed dose and on the LQ parameters of both the interspersed cancer and normal tissue cells. The results from the LQ model were consistent with those of the MLQ model. At 20 Gy, the EUD and TR by the LQ model were 2.8% higher and 1% lower than by the MLQ, while at 10 Gy, the EUD and TR as defined by the LQ model were only 1.4% higher and 0.8% lower, respectively. 
The dose volume histograms of grid therapy for a 10 cm tumor showed different dosimetric characteristics from those of conventional radiotherapy. A significant portion of the tumor volume received a very large dose in grid therapy, which ensures significant tumor cell killing in these regions. Conversely, some areas received a relatively small dose, thereby sparing interspersed normal cells and increasing radiation tolerance. The radiobiology modeling results indicated that grid therapy could be useful for treating acutely responding melanomas infiltrating radiosensitive normal tissues. The theoretical model predictions were supported by the clinical outcomes. Conclusions: Grid therapy functions by selectively killing infiltrating tumor cells and concomitantly sparing interspersed normal cells. The TR depends on the radiosensitivity of the cell population, dose, tumor size, and location. Because the volumes of very high dose regions are small, the LQ model can be used safely to predict the clinical outcomes of grid therapy. When treating melanomas with a dose of 15 Gy or higher, single fraction grid therapy is clearly advantageous for sparing interspersed normal cells. The existence of a threshold fraction dose, which was found in the authors’ theoretical simulations, was confirmed by clinical observations.
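
    The equivalent uniform dose used in the abstract can be sketched under the LQ model: average the voxel survival and invert the LQ relation for the uniform dose that gives the same survival. The α and β values below are illustrative, not the melanoma parameters of the study:

```python
import math

def eud_lq(doses, alpha=0.3, beta=0.03):
    # Equivalent uniform dose under the linear-quadratic model: average the
    # voxel survival, then invert alpha*E + beta*E^2 = -ln(S_mean).
    # alpha (Gy^-1) and beta (Gy^-2) are illustrative, not melanoma fits.
    s_mean = sum(math.exp(-(alpha * d + beta * d * d)) for d in doses) / len(doses)
    ln_s = math.log(s_mean)
    return (-alpha + math.sqrt(alpha * alpha - 4.0 * beta * ln_s)) / (2.0 * beta)

# grid therapy delivers a high dose to a few regions and a low dose elsewhere
grid_doses = [20.0, 20.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]
print(eud_lq(grid_doses))   # far below the 20 Gy peaks and below the mean dose
```

    Because survival is dominated by the least-irradiated voxels, this EUD falls well below the mean dose for heterogeneous grid-therapy dose patterns, consistent with the abstract's finding that the EUD was only about one fifth of the prescription.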

  10. Simulating ground water-lake interactions: Approaches and insights

    USGS Publications Warehouse

    Hunt, R.J.; Haitjema, H.M.; Krohelski, J.T.; Feinstein, D.T.

    2003-01-01

    Approaches for modeling lake-ground water interactions have evolved significantly from early simulations that used fixed lake stages specified as constant head to sophisticated LAK packages for MODFLOW. Although model input can be complex, the LAK package capabilities and output are superior to methods that rely on a fixed lake stage and compare well to other simple methods where lake stage can be calculated. Regardless of the approach, guidelines presented here for model grid size, location of three-dimensional flow, and extent of vertical capture can facilitate the construction of appropriately detailed models that simulate important lake-ground water interactions without adding unnecessary complexity. In addition to MODFLOW approaches, lake simulation has been formulated in terms of analytic elements. The analytic element lake package had acceptable agreement with a published LAK1 problem, even though there were differences in the total lake conductance and number of layers used in the two models. The grid size used in the original LAK1 problem, however, violated a grid size guideline presented in this paper. Grid sensitivity analyses demonstrated that an appreciable discrepancy in the distribution of stream and lake flux was related to the large grid size used in the original LAK1 problem. This artifact is expected regardless of which MODFLOW LAK package is used. When the grid size was reduced, the finite-difference formulation approached the analytic element results. These insights and guidelines can help ensure that the proper lake simulation tool is being selected and applied.

  11. PARADIGM USING JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY STOCHASTIC DESCRIPTION AS A TEMPLATE FOR MODEL EVALUATION

    EPA Science Inventory

    The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...

  12. RCS of fundamental scatterers in the HF band by wire-grid modelling

    NASA Astrophysics Data System (ADS)

    Trueman, C. W.; Kubina, S. J.

    To extract the maximum information from the return of a radar target such as an aircraft, the target's scattering properties must be well known. Wire grid modeling allows a detailed representation of the surface of a complex scatterer such as an aircraft, in the frequency range where the aircraft size is comparable to a wavelength. A moment method analysis determines the currents on the wires of the grid including the interactions between all parts of the structure. Wire grid models of fundamental scatterers (plates, strips, cubes, and spheres) of sizes comparable to the wavelength in the 2-30 MHz range are analyzed. The study of the radar cross section (RCS) of wire grids in comparison with measured RCS data helps to establish guidelines for building wire grid models, specifying such parameters as where to locate wires, how short the segments must be, and what radius to use. The guidelines so developed can then be applied to build wire grid models of much more complex bodies such as aircraft with much greater confidence.

  13. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag correlation based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but at a large increase in computational cost.
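
    The bubble-detection step mentioned in the abstract can be sketched as a plain connected-component labeling pass over a thresholded voidage field. The field values and the 0.8 threshold below are toy choices, not the authors' implementation:

```python
import numpy as np
from collections import deque

def label_bubbles(voidage, threshold=0.8):
    # Group 4-connected cells whose void fraction exceeds `threshold`
    # into numbered bubbles via breadth-first flood fill.
    mask = voidage > threshold
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                while q:
                    a, b = q.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        x, y = a + da, b + db
                        if (0 <= x < mask.shape[0] and 0 <= y < mask.shape[1]
                                and mask[x, y] and labels[x, y] == 0):
                            labels[x, y] = current
                            q.append((x, y))
    return labels, current

voidage = np.array([[0.9, 0.9, 0.4, 0.4],
                    [0.9, 0.9, 0.4, 0.9],
                    [0.4, 0.4, 0.4, 0.9],
                    [0.4, 0.9, 0.4, 0.9]])
labels, n = label_bubbles(voidage)
print(n)   # 3 bubbles in this toy voidage field
```

    Each labeled component's cell count gives a bubble size, which is how simulated bubble-size distributions can then be compared across drag models and grid resolutions.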

  14. Effect of particle size distribution on the hydrodynamics of dense CFB risers

    NASA Astrophysics Data System (ADS)

    Bakshi, Akhilesh; Khanna, Samir; Venuturumilli, Raj; Altantzis, Christos; Ghoniem, Ahmed

    2015-11-01

    Circulating Fluidized Beds (CFB) are favored in the energy and chemical industries due to their high efficiency. While accurate hydrodynamic modeling is essential for optimizing performance, most CFB riser simulations are performed assuming equally-sized solid particles, owing to limited computational resources. Even though this approach yields reasonable predictions, it neglects commonly observed experimental findings suggesting a strong effect of the particle size distribution (PSD) on hydrodynamics and chemical conversion. Thus, this study focuses on the inclusion of discrete particle sizes to represent the PSD and its effect on fluidization via 2D numerical simulations. The particle sizes and corresponding mass fluxes are obtained using experimental data from a dense CFB riser, while the modeling framework is described in Bakshi et al. (2015). Simulations are conducted at two scales: (a) a fine grid to resolve heterogeneous structures and (b) a coarse grid using EMMS sub-grid modifications. Using suitable metrics that capture bed dynamics, this study provides insights into segregation and mixing of particles and highlights the need for improved sub-grid models.

  15. DPW-VI Results Using FUN3D with Focus on k-kL-MEAH2015 (k-kL) Turbulence Model

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, K. S.; Carlson, Jan-Renee; Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Park, Michael A.

    2017-01-01

    The Common Research Model wing-body configuration is investigated with the k-kL-MEAH2015 turbulence model implemented in FUN3D. This includes results presented at the Sixth Drag Prediction Workshop and additional results generated after the workshop with a nonlinear Quadratic Constitutive Relation (QCR) variant of the same turbulence model. The workshop-provided grids are used, and a uniform grid refinement study is performed at the design condition. A large variation between results with and without a reconstruction limiter is exhibited on "medium" grid sizes, indicating that the medium grid size is too coarse for drawing conclusions in comparison with experiment. This variation is reduced with grid refinement. At a fixed angle of attack near design conditions, the QCR variant yielded decreased lift and drag compared with the linear eddy-viscosity model by an amount that was approximately constant with grid refinement. The k-kL-MEAH2015 turbulence model produced wing-root junction flow behavior consistent with wind tunnel observations.

  16. The influence of model resolution on ozone in industrial volatile organic compound plumes.

    PubMed

    Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G

    2010-09-01

    Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than typical urban conditions. Modeling studies found that simulating such high O3 production requires a superfine (1-km) horizontal grid cell size. Compared with fine modeling (4-km), the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) at both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration by 3% but decreases the maximum Ox concentration by only 0.2%. The two grid sizes produce approximately equal Ox mass but by different reaction pathways. Derived sensitivities suggest resolution-specific responses to oxides of nitrogen (NOx) and VOC emissions. Different sensitivities to emissions will result in different O3 responses to subsequently encountered emissions (within the city or downwind), and also in different simulated O3 responses to the same control strategies. The sensitivity of O3 to NOx and VOC emission changes is attributed to the finer-resolved Eulerian grid and finer-resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with Plume-in-Grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dynamics of VOC plume dependence on grid size is the first step toward providing more detailed guidance for resolution. These results underscore VOC and NOx resolution interdependencies best addressed by finer resolution. 
On the basis of these results, the authors suggest a need for quantitative metrics for horizontal grid resolution in future model guidance.
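
    The resolution effect described here has a simple mechanical core: a grid model reports volume averages, so a concentrated plume is diluted over a coarse cell. A toy illustration in arbitrary units (not CAMx/CMAQ output):

```python
import numpy as np

# an 8 x 8 field of 1-km cells with a narrow plume confined to one cell
fine = np.zeros((8, 8))
fine[3, 3] = 100.0               # peak excess concentration, arbitrary units

# regrid to 4-km cells by block-averaging 4 x 4 groups of fine cells
coarse = fine.reshape(2, 4, 2, 4).mean(axis=(1, 3))

print(fine.max(), coarse.max())  # 100.0 6.25: the coarse peak is diluted 16x
```

    Because O3 chemistry is nonlinear in precursor concentrations, this dilution changes not just the peak but the simulated chemical regime, which is the mechanism behind the resolution-specific NOx/VOC sensitivities reported above.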

  17. Sedimentary Geothermal Feasibility Study: October 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Augustine, Chad; Zerpa, Luis

    The objective of this project is to analyze the feasibility of commercial geothermal projects using numerical reservoir simulation, considering a sedimentary reservoir with low permeability that requires productivity enhancement. A commercial thermal reservoir simulator (STARS, from Computer Modeling Group, CMG) is used in this work for numerical modeling. In the first stage of this project (FY14), a hypothetical numerical reservoir model was developed, and validated against an analytical solution. The following model parameters were considered to obtain an acceptable match between the numerical and analytical solutions: grid block size, time step and reservoir areal dimensions; the latter related to boundary effects on the numerical solution. Systematic model runs showed that insufficient grid sizing generates numerical dispersion that causes the numerical model to underestimate the thermal breakthrough time compared to the analytic model. As grid sizing is decreased, the model results converge on a solution. Likewise, insufficient reservoir model area introduces boundary effects in the numerical solution that cause the model results to differ from the analytical solution.
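
    The numerical-dispersion effect reported above can be reproduced with a one-dimensional toy: first-order upwind advection of a thermal front smears the front over coarse cells, so a breakthrough threshold at the far boundary is crossed early. This is a schematic analogue, not the STARS reservoir model:

```python
import numpy as np

def breakthrough_time(n_cells, length=100.0, v=1.0, threshold=0.05, t_end=150.0):
    # Advect a unit front with first-order upwind; numerical diffusion
    # (which shrinks with dx) makes the leading edge arrive early.
    dx = length / n_cells
    dt = 0.4 * dx / v                  # CFL-stable time step
    u = np.zeros(n_cells)
    t = 0.0
    while t < t_end:
        u[1:] -= v * dt / dx * (u[1:] - u[:-1])
        u[0] = 1.0                     # injection boundary
        t += dt
        if u[-1] >= threshold:         # breakthrough at the outlet
            return t
    return t_end

coarse, fine = breakthrough_time(25), breakthrough_time(400)
print(coarse, fine)   # the coarse grid underestimates the ~100 s breakthrough
```

    Refining the grid moves the computed breakthrough toward the sharp-front answer, the same convergence behavior the project observed when decreasing grid block size.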

  18. Importance of Grid Center Arrangement

    NASA Astrophysics Data System (ADS)

    Pasaogullari, O.; Usul, N.

    2012-12-01

    In digital elevation modeling, grid size is accepted to be the most important parameter. Despite the point density and/or scale of the source data, it is freely decided by the user. Most of the time, the arrangement of the grid centers is ignored; most GIS packages even omit the choice of grid center coordinates. In our study, the importance of the arrangement of grid centers is investigated. Using the analogy between a raster grid DEM and a bitmap image, the importance of the placement of grid centers in DEMs is measured. The study has been conducted on four different grid DEMs obtained from a half ellipsoid, constructed so that they are half a grid size apart from each other. The resulting grid DEMs are compared through similarity measures. Image processing scientists use different measures to investigate the dis/similarity between images and the amount of distinct information they carry. The grid DEMs are projected to a finer grid in order to co-center them, and the similarity measures are then applied to each grid DEM pair. These similarity measures are adapted to DEMs with band reduction and real-number operations. One of the measures yields a function graph and the others yield measure matrices. Application of the similarity measures to the six grid DEM pairs shows interesting results. Although these four grid DEMs are created with the same method for the same area, 13 out of 14 measures state that the half-grid-size-apart grid DEMs differ from each other. The results indicate that although the grid DEMs carry mutual information, they also hold additional individual information. In other words, grid DEMs constructed half a grid size apart contain non-redundant information.
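
    The half-grid-shift experiment is easy to reproduce in miniature: sample an analytic half ellipsoid on two grids whose centers are offset by half a cell and compare the samples. The surface dimensions here are arbitrary stand-ins for the study's DEMs:

```python
import numpy as np

def sample_half_ellipsoid(x0, y0, n=50, cell=4.0):
    # Heights of a half ellipsoid (centre (100, 100), semi-axes 100, 100, 50)
    # sampled on an n x n grid whose first cell centre is at (x0, y0).
    xs = x0 + cell * np.arange(n)
    ys = y0 + cell * np.arange(n)
    X, Y = np.meshgrid(xs, ys)
    r2 = ((X - 100.0) / 100.0) ** 2 + ((Y - 100.0) / 100.0) ** 2
    return 50.0 * np.sqrt(np.clip(1.0 - r2, 0.0, None))

a = sample_half_ellipsoid(0.0, 0.0)
b = sample_half_ellipsoid(2.0, 2.0)   # grid centres shifted by half a cell
rms = float(np.sqrt(np.mean((a - b) ** 2)))
print(rms)   # nonzero: the shifted grids carry non-identical information
```

    A nonzero residual between the shifted samplings is the elementary version of the study's finding that half-grid-size-apart DEMs hold non-redundant information.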

  19. Photochemical grid model performance with varying horizontal grid resolution and sub-grid plume treatment for the Martins Creek near-field SO2 study

    NASA Astrophysics Data System (ADS)

    Baker, Kirk R.; Hawkins, Andy; Kelly, James T.

    2014-12-01

    Near source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increase. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. 
However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.

  20. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    NASA Astrophysics Data System (ADS)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in the areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid was tested by comparing it to uniform 1 km and 5 km grids. These represent an accurate and computationally efficient grid respectively. 
For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August, which represents a 2- to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further, to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid retains the accuracy of noise level predictions while improving the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References: Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine Systems, 139: 217-226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.
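The suitability criterion described above (a cell size is acceptable once propagation loss is nearly uniform across the cell) can be sketched under a simple spherical-spreading assumption; the 0.5 dB tolerance and the candidate cell sizes are illustrative choices, not the authors' exact procedure:

```python
import math

def min_range_for_cell(cell_km, tol_db=0.5):
    """Smallest receiver range (km) at which spherical-spreading loss
    (TL = 20*log10(r)) varies by less than tol_db across a cell of
    width cell_km. Illustrative criterion assuming spherical spreading."""
    ratio = 10.0 ** (tol_db / 20.0)  # allowed (r + c/2) / (r - c/2)
    c = cell_km
    return (c / 2.0) * (ratio + 1.0) / (ratio - 1.0)

for c in (1.0, 2.0, 5.0):
    print(f"{c:.0f} km cells acceptable beyond ~{min_range_for_cell(c):.1f} km")
```

Cells far from the receiver can then be grouped at a coarser size while near-field cells stay fine, which is what yields the reported reduction in the number of model executions.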

  1. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.

  2. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations.
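The core idea, deriving finite difference weights directly on nonuniform spacing, can be illustrated with the simplest explicit case: a 3-point second-derivative stencil from Taylor expansions. The paper itself derives wider implicit operators; this sketch only demonstrates the nonuniform-weight derivation:

```python
def second_derivative_weights(h1, h2):
    """Weights for f''(x) from values at x - h1, x, x + h2 (nonuniform
    spacing), obtained by matching Taylor expansions; exact for quadratics."""
    w_left = 2.0 / (h1 * (h1 + h2))
    w_mid = -2.0 / (h1 * h2)
    w_right = 2.0 / (h2 * (h1 + h2))
    return w_left, w_mid, w_right

# Check on f(x) = x**2, whose second derivative is 2 everywhere
h1, h2, x = 0.1, 0.3, 1.0
wl, wm, wr = second_derivative_weights(h1, h2)
approx = wl * (x - h1) ** 2 + wm * x ** 2 + wr * (x + h2) ** 2
print(approx)  # ~2.0 up to rounding
```

For uniform spacing (h1 = h2 = h) the weights reduce to the familiar (1, -2, 1)/h² stencil.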

  3. Regional photochemical air quality modeling in the Mexico-US border area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza, A.; Russell, A.G.; Mejia, G.M.

    1998-12-31

The Mexico-United States border area has become an increasingly important region due to its commercial, industrial and urban growth. As a result, environmental concerns have risen. Treaties like the North American Free Trade Agreement (NAFTA) have further motivated the development of environmental impact assessment in the area. Of particular concern is air quality, and how the activities on both sides of the border contribute to its degradation. This paper presents results of applying a three-dimensional photochemical airshed model to study air pollution dynamics along the Mexico-United States border. In addition, studies were conducted to assess how grid-size resolution impacts the model performance. The model performed within acceptable statistical limits using 12.5 × 12.5 km² grid cells, and the benefits of using finer grids were limited. Results were further used to assess the influence of grid-cell size on the modeling of control strategies, where coarser grids lead to a significant loss of information.

  4. Influence of Terraced area DEM Resolution on RUSLE LS Factor

    NASA Astrophysics Data System (ADS)

    Zhang, Hongming; Baartman, Jantiene E. M.; Yang, Xiaomei; Gai, Lingtong; Geissen, Violette

    2017-04-01

    Topography has a large impact on the erosion of soil by water. Slope steepness and slope length are combined (the LS factor) in the universal soil-loss equation (USLE) and its revised version (RUSLE) for predicting soil erosion. The LS factor is usually extracted from a digital elevation model (DEM). The grid size of the DEM will thus influence the LS factor and the subsequent calculation of soil loss. Terracing is considered as a support practice factor (P) in the USLE/RUSLE equations, which is multiplied with the other USLE/RUSLE factors. However, as terraces change the slope length and steepness, they also affect the LS factor. The effect of DEM grid size on the LS factor has not been investigated for a terraced area. We obtained a high-resolution DEM by unmanned aerial vehicle (UAV) photogrammetry, from which the slope steepness, slope length, and LS factor were extracted. The changes in these parameters at various DEM resolutions were then analysed. The DEM produced detailed LS-factor maps, particularly for low LS factors. High (small valleys, gullies, and terrace ridges) and low (flats and terrace fields) spatial frequencies were both sensitive to changes in resolution, so the areas of higher and lower slope steepness both decreased with increasing grid size. Average slope steepness decreased and average slope length increased with grid size. Slope length, however, had a larger effect than slope steepness on the LS factor as the grid size varied. The LS factor increased when the grid size increased from 0.5 to 30 m and increased significantly at grid sizes >5 m. The LS factor was increasingly overestimated as grid size decreased. The LS factor decreased from grid sizes of 30 to 100 m, because the details of the terraced terrain were gradually lost, but the factor was still overestimated.
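For reference, the LS computation that the grid-dependent slope length and steepness feed into can be sketched with one common RUSLE formulation (after McCool et al.); the coefficients below come from that general formulation, not from the paper:

```python
import math

def rusle_ls(slope_length_m, slope_deg):
    """LS factor from slope length (m) and steepness (degrees) using a
    common RUSLE formulation; illustrative, not the authors' exact code."""
    theta = math.radians(slope_deg)
    beta = (math.sin(theta) / 0.0896) / (3.0 * math.sin(theta) ** 0.8 + 0.56)
    m = beta / (1.0 + beta)              # slope-length exponent
    L = (slope_length_m / 22.13) ** m    # 22.13 m is the RUSLE unit-plot length
    if math.tan(theta) < 0.09:           # slope below 9 %
        S = 10.8 * math.sin(theta) + 0.03
    else:
        S = 16.8 * math.sin(theta) - 0.50
    return L * S

# Coarser DEMs tend to lengthen slopes and flatten steepness:
print(rusle_ls(22.13, 10.0))   # unit-plot slope length
print(rusle_ls(100.0, 8.0))    # longer, gentler slope
```

Because L grows with slope length while S falls with steepness, the two grid-size effects described above partially compensate, and the net LS bias depends on which dominates at a given resolution.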

  5. Influence of model grid size on the simulation of PM2.5 and the related excess mortality in Japan

    NASA Astrophysics Data System (ADS)

    Goto, D.; Ueda, K.; Ng, C. F.; Takami, A.; Ariga, T.; Matsuhashi, K.; Nakajima, T.

    2016-12-01

    Aerosols, especially PM2.5, can affect air pollution, climate change, and human health. The estimation of health impacts due to PM2.5 is often performed using global and regional aerosol transport models with various horizontal resolutions. To investigate the dependence of the simulated PM2.5 on model grid size, we executed two simulations using a high-resolution model (approximately 10 km; HRM) and a low-resolution model (approximately 100 km; LRM, a typical resolution for general circulation models). In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan, with a stretched grid system in the HRM and a uniform grid system in the LRM, for the present (year 2000) and the future (year 2030, as prescribed by the Representative Concentration Pathway 4.5, RCP4.5). These calculations were performed by nudging meteorological fields obtained from an atmosphere-ocean coupled model and using the emission inventories from the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 for the elderly. Compared to the HRM results, the LRM underestimated PM2.5 concentrations in 2000 and 2030 by approximately 30%, excess mortality in 2000 by approximately 60%, and excess mortality in 2030 by approximately 90%. The estimation of excess mortality therefore performed better with high-resolution grid sizes. In addition, we found that our nesting method can be a useful tool to obtain better estimates.
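Excess-mortality estimates of this kind typically apply a log-linear concentration-response function to the bias-corrected PM2.5 fields; a minimal sketch with hypothetical inputs (the relative risk, baseline deaths, and exposure increment below are illustrative, not values from the study):

```python
import math

def excess_mortality(baseline_deaths, beta, delta_pm25):
    """Attributable deaths under a log-linear concentration-response
    function: baseline * (1 - exp(-beta * delta_C))."""
    return baseline_deaths * (1.0 - math.exp(-beta * delta_pm25))

# Hypothetical inputs: 6 % mortality increase per 10 ug/m3 of PM2.5,
# 100000 baseline deaths, 12 ug/m3 above the counterfactual level
beta = math.log(1.06) / 10.0
print(round(excess_mortality(100000, beta, 12.0)))
```

Because the response is nonlinear in concentration, a resolution-dependent bias in simulated PM2.5 translates into an amplified bias in estimated excess mortality, consistent with the roughly 30% concentration gap growing to 60-90% mortality gaps reported above.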

  6. Power Grid Construction Project Portfolio Optimization Based on Bi-level programming model

    NASA Astrophysics Data System (ADS)

    Zhao, Erdong; Li, Shangqi

    2017-08-01

    As the main body of power grid operation, county-level power supply enterprises undertake an important mission to guarantee the security of power grid operation and to safeguard the social order of electricity use. Optimizing the portfolio of grid construction projects is a key issue for the power supply capacity and service level of grid enterprises. Based on the actual situation of power grid construction project optimization at county-level power enterprises, and on a qualitative analysis of the projects, this paper builds a bi-level programming model grounded in quantitative analysis. The upper layer of the model captures the target constraints of the optimal portfolio; the lower layer captures the enterprise's financial restrictions on the size of the project portfolio. Finally, a real example illustrates the operation and the optimization results of the model. Through combined qualitative and quantitative analysis, the bi-level programming model improves the accuracy and standardization of power grid enterprises' project selection.
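A portfolio selection of this kind can be caricatured in a few lines. In this sketch the lower-level financial restriction is reduced to a simple feasibility check and the project data are hypothetical; a real bi-level model has an optimizing lower level rather than a pass/fail screen:

```python
from itertools import combinations

# Hypothetical project data: name -> (benefit score, cost); not from the paper
projects = {"A": (8, 5), "B": (6, 4), "C": (5, 3), "D": (4, 2), "E": (3, 2)}
budget = 9  # stand-in for the lower-level financial restriction

def best_portfolio(projects, budget):
    """Brute-force stand-in for the bi-level model: the upper level ranks
    portfolios by total benefit, the lower level rejects any portfolio
    violating the financial constraint."""
    names = list(projects)
    best, best_score = (), 0
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(projects[p][1] for p in combo)
            if cost > budget:          # lower-level feasibility check
                continue
            score = sum(projects[p][0] for p in combo)
            if score > best_score:
                best, best_score = combo, score
    return best, best_score

print(best_portfolio(projects, budget))
```

Brute force is only viable for a handful of candidate projects; the paper's bi-level programming formulation is what makes larger, nested upper/lower optimizations tractable.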

  7. Convergence of the Bouguer-Beer law for radiation extinction in particulate media

    NASA Astrophysics Data System (ADS)

    Frankel, A.; Iaccarino, G.; Mani, A.

    2016-10-01

    Radiation transport in particulate media is a common physical phenomenon in natural and industrial processes. Developing predictive models of these processes requires a detailed model of the interaction between the radiation and the particles. Resolving the interaction between the radiation and the individual particles in a very large system is impractical, whereas continuum-based representations of the particle field lend themselves to efficient numerical techniques based on the solution of the radiative transfer equation. We investigate radiation transport through discrete and continuum-based representations of a particle field. Exact solutions for radiation extinction are developed using a Monte Carlo model in different particle distributions. The particle distributions are then projected onto a concentration field with varying grid sizes, and the Bouguer-Beer law is applied by marching across the grid. We show that the continuum-based solution approaches the Monte Carlo solution under grid refinement, but quickly diverges as the grid size approaches the particle diameter. This divergence is attributed to the homogenization error of an individual particle across a whole grid cell. We remark that the concentration energy spectrum of a point-particle field does not approach zero, and thus the concentration variance must also diverge under infinite grid refinement, meaning that no grid-converged solution of the radiation transport is possible.
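The divergence as the grid size approaches the particle diameter can be illustrated with a toy 2-D calculation: one opaque particle homogenised over a single cell, comparing the exact ray-averaged transmittance with the Bouguer-Beer value. This is an illustrative construction, not the authors' Monte Carlo setup:

```python
import math

def exact_vs_continuum(d, cell):
    """One opaque particle of width d homogenised over a 2-D cell of size
    'cell': exact ray-averaged transmittance vs the Bouguer-Beer value."""
    t_exact = 1.0 - d / cell   # rays either hit the particle or miss it
    tau = d / cell             # optical depth of the homogenised cell
    return t_exact, math.exp(-tau)

for cell in (10.0, 2.0, 1.0):  # cell sizes for a particle of diameter d = 1
    te, tb = exact_vs_continuum(1.0, cell)
    print(f"cell={cell:5.1f}: exact={te:.3f}  Bouguer-Beer={tb:.3f}")
```

For cells much larger than the particle the two agree closely (0.900 vs 0.905 here), but as the cell size approaches the particle diameter the homogenised solution diverges from the exact one (0.000 vs 0.368), which is the homogenization error described above.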

  8. Predicting grid-size-dependent fracture strains of DP980 with a microstructure-based post-necking model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, G.; Hu, X. H.; Choi, K. S.

    Ductile fracture is a local phenomenon, and it is well established that fracture strain levels depend on both stress triaxiality and the resolution (grid size) of strain measurements. Two-dimensional plane strain post-necking models with different representative volume element (RVE) sizes are used to predict the size-dependent fracture strain of a commercial dual-phase steel, DP980. The models are generated from the actual microstructures, and the individual phase flow properties and literature-based individual phase damage parameters for the Johnson-Cook model are used for ferrite and martensite. A monotonic relationship is predicted: the smaller the model size, the higher the fracture strain. Thus, a general framework is developed to quantify the size-dependent fracture strains for multiphase materials. In addition to the RVE sizes, the influences of intrinsic microstructure features, i.e., the flow curve and fracture strains of the two constituent phases, on the predicted fracture strains are also examined. Application of the derived fracture strain versus RVE size relationship is demonstrated with large clearance trimming simulations with different element sizes.

  9. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast-growing Eucalyptus forest plantation using airborne LiDAR data.

    PubMed

    Silva, Carlos Alberto; Hudak, Andrew Thomas; Klauberg, Carine; Vierling, Lee Alexandre; Gonzalez-Benecke, Carlos; de Padua Chaves Carvalho, Samuel; Rodriguez, Luiz Carlos Estraviz; Cardil, Adrián

    2017-12-01

    LiDAR remote sensing is a rapidly evolving technology for quantifying a variety of forest attributes, including aboveground carbon (AGC). Pulse density influences the acquisition cost of LiDAR, and grid cell size influences AGC prediction using plot-based methods; however, little work has evaluated the effects of LiDAR pulse density and cell size for predicting and mapping AGC in fast-growing Eucalyptus forest plantations. The aim of this study was to evaluate the effect of LiDAR pulse density and grid cell size on AGC prediction accuracy at plot and stand levels using airborne LiDAR and field data. We used the Random Forest (RF) machine learning algorithm to model AGC using LiDAR-derived metrics from LiDAR collections of 5 and 10 pulses m⁻² (RF5 and RF10) and grid cell sizes of 5, 10, 15 and 20 m. The results show that a LiDAR pulse density of 5 pulses m⁻² provides metrics with prediction accuracy for AGC similar to that of a dataset with 10 pulses m⁻² in these fast-growing plantations. Relative root mean square errors (RMSEs) for RF5 and RF10 were 6.14 and 6.01%, respectively. Equivalence tests showed that the predicted AGC from the training and validation models was equivalent to the observed AGC measurements. Grid cell sizes for mapping ranging from 5 to 20 m also did not significantly affect the prediction accuracy of AGC at the stand level in this system. LiDAR measurements can be used to predict and map AGC across variable-age Eucalyptus plantations with adequate levels of precision and accuracy using 5 pulses m⁻² and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates with varying LiDAR sampling densities for Eucalyptus plantations and assist in decision making towards more cost-effective and efficient forest inventory.
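The relative RMSE used to compare the RF5 and RF10 models is simply the RMSE normalised by the mean observation; a short sketch with hypothetical plot values (not data from the study):

```python
import math

def relative_rmse(observed, predicted):
    """Relative RMSE (%): RMSE divided by the mean observation, the metric
    used to compare the RF5 and RF10 model accuracies."""
    n = len(observed)
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)
    return 100.0 * rmse / (sum(observed) / n)

# Hypothetical plot-level AGC values (Mg C per ha), not data from the study
obs = [42.0, 55.0, 61.0, 48.0, 70.0]
pred = [44.0, 52.0, 63.0, 47.0, 67.0]
print(f"relative RMSE = {relative_rmse(obs, pred):.2f} %")
```

Normalising by the mean makes errors comparable across plots and stands with different absolute carbon stocks.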

  10. A scale-invariant cellular-automata model for distributed seismicity

    NASA Technical Reports Server (NTRS)

    Barriere, Benoit; Turcotte, Donald L.

    1991-01-01

    In the standard cellular-automata model for a fault, an element of stress is randomly added to a grid of boxes until a box has four elements; these are then redistributed to the adjacent boxes on the grid. The redistribution can result in one or more of these boxes having four or more elements, in which case further redistributions are required. On average, added elements are lost from the edges of the grid. The model is modified so that the boxes have a scale-invariant distribution of sizes. The objective is to model a scale-invariant distribution of fault sizes. When a redistribution from a box occurs, it is equivalent to a characteristic earthquake on the fault. A redistribution from a small box (a foreshock) can trigger an instability in a large box (the main shock). A redistribution from a large box always triggers many instabilities in the smaller boxes (aftershocks). The frequency-size statistics for both main shocks and aftershocks satisfy the Gutenberg-Richter relation with b = 0.835 for main shocks and b = 0.635 for aftershocks. Model foreshocks occur 28 percent of the time.
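The uniform-box version of this cellular automaton (before the scale-invariant box-size modification) can be sketched directly; the grid size, seed, and iteration count are arbitrary choices:

```python
import random

def add_and_topple(grid):
    """Standard cellular-automata fault model: add one stress element to a
    random box; any box reaching 4 elements redistributes one element to
    each of its 4 neighbours (elements falling off the edge are lost).
    Returns the number of topplings (the 'earthquake' size)."""
    n = len(grid)
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    topplings = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        a, b = unstable.pop()
        if grid[a][b] < 4:          # may already have toppled
            continue
        grid[a][b] -= 4
        topplings += 1
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = a + da, b + db
            if 0 <= x < n and 0 <= y < n:   # off-grid elements are lost
                grid[x][y] += 1
                if grid[x][y] >= 4:
                    unstable.append((x, y))
    return topplings

random.seed(1)
grid = [[0] * 20 for _ in range(20)]
sizes = [add_and_topple(grid) for _ in range(20000)]
events = [s for s in sizes if s > 0]
print(f"{len(events)} events, largest avalanche: {max(events)}")
```

Counting event frequency against size in such a run yields the power-law (Gutenberg-Richter-like) statistics that the modified, scale-invariant-box version then refines.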

  11. Water equivalent thickness of immobilization devices in proton therapy planning - Modelling at treatment planning and validation by measurements with a multi-layer ionization chamber.

    PubMed

    Fellin, Francesco; Righetto, Roberto; Fava, Giovanni; Trevisan, Diego; Amelio, Dante; Farace, Paolo

    2017-03-01

    To investigate the range errors made in treatment planning due to the presence of immobilization devices along the proton beam path. The water equivalent thickness (WET) of selected devices was measured with a high-energy spot and a multi-layer ionization chamber and compared with that predicted by the treatment planning system (TPS). Two treatment couches, two thermoplastic masks (both un-stretched and stretched) and one headrest were selected. At the TPS, every immobilization device was modelled as being part of the patient. The following parameters were assessed: CT acquisition protocol, dose-calculation grid sizes (1.5 and 3.0 mm) and beam entrance with respect to the devices (coplanar and non-coplanar). Finally, the potential errors produced by a wrong manual separation between the treatment couch and the CT table (not present during treatment) were investigated. In the thermoplastic mask, there was a clear effect due to beam entrance, a moderate effect due to the CT protocols and almost no effect due to TPS grid size, with 1 mm errors observed only when thick un-stretched portions were crossed by non-coplanar beams. In the treatment couches the WET errors were negligible (<0.3 mm) regardless of the grid size and CT protocol. The potential range errors produced by the manual separation between treatment couch and CT table were small with a 1.5 mm grid size, but could be >0.5 mm with a 3.0 mm grid size. In the headrest, WET errors were negligible (0.2 mm). With only one exception (un-stretched mask, non-coplanar beams), the WET of all the immobilization devices was properly modelled by the TPS. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  12. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, the underlying network model, the interactions and relationships among components, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the cascading failure effect and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.

  13. MODFLOW-LGR: Practical application to a large regional dataset

    NASA Astrophysics Data System (ADS)

    Barnes, D.; Coulibaly, K. M.

    2011-12-01

    In many areas of the US, including southwest Florida, large regional-scale groundwater models have been developed to aid in decision making and water resources management. These models are subsequently used as a basis for site-specific investigations. Because the large scale of these regional models is not appropriate for local application, refinement is necessary to analyze the local effects of pumping wells and groundwater-related projects at specific sites. The most commonly used approach to date is Telescopic Mesh Refinement, or TMR. It allows the extraction of a subset of the large regional model with boundary conditions derived from the regional model results. The extracted model is then updated and refined for local use using a variable-sized grid focused on the area of interest. MODFLOW-LGR (Local Grid Refinement) is an alternative approach which allows model discretization at a finer resolution in areas of interest and provides coupling between the larger "parent" model and the locally refined "child." In the present work, these two approaches are tested on a mining impact assessment case in southwest Florida using a large regional dataset (the Lower West Coast Surficial Aquifer System Model). Various metrics for performance are considered, including computation time, water balance (as compared to the variable-sized grid), calibration, implementation effort, and application advantages and limitations. The results indicate that MODFLOW-LGR is a useful tool to improve the local resolution of regional-scale models. While performance metrics such as computation time are case-dependent (model size, refinement level, stresses involved), implementation effort, particularly when regional models of suitable scale are available, can be minimized. The creation of multiple child models within a larger-scale parent model makes it possible to reuse the same calibrated regional dataset with minimal modification.
In cases similar to the Lower West Coast model, where a model is larger than optimal for direct application as a parent grid, a combination of TMR and LGR approaches should be used to develop a suitable parent grid.

  14. Diffraction Analysis of Antennas With Mesh Surfaces

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Yahya

    1987-01-01

    Strip-aperture model replaces wire-grid model. Far-field radiation pattern of antenna with mesh reflector calculated more accurately with new strip-aperture model than with wire-grid model of reflector surface. More adaptable than wire-grid model to variety of practical configurations and decidedly superior for reflectors in which mesh-cell width exceeds mesh thickness. Satisfies reciprocity theorem. Applied where mesh cells are no larger than tenth of wavelength. Small cell size permits use of simplifying approximation that reflector-surface current induced by electromagnetic field is present even in apertures. Approximation useful in calculating far field.

  15. Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms

    NASA Technical Reports Server (NTRS)

    Heidmann, James D.; Hunter, Scott D.

    2001-01-01

    The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.

  16. The equal load-sharing model of cascade failures in power grids

    NASA Astrophysics Data System (ADS)

    Scala, Antonio; De Sanctis Lucentini, Pier Giorgio

    2016-11-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing power demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement of the systemic risk of failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".
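The equal load-sharing mechanism can be sketched with a minimal fibre-bundle-style model; the uniform random thresholds are an assumption for illustration (the classic result for this choice is a collapse at a load of 1/4 per element), not the authors' grid model:

```python
import random

def surviving_fraction(total_load, thresholds):
    """Equal load-sharing cascade: the load is shared equally by surviving
    elements; when the weakest survivor's threshold is exceeded it fails
    and its share is redistributed, possibly triggering further failures."""
    alive = sorted(thresholds)
    n, k = len(alive), 0          # k = number of failed elements
    while k < n and total_load / (n - k) > alive[k]:
        k += 1                    # weakest survivor fails; load redistributes
    return (n - k) / n

random.seed(2)
thresholds = [random.random() for _ in range(10000)]  # uniform thresholds
for load in (0.10, 0.20, 0.24, 0.26, 0.30):           # load per element
    print(f"load {load:.2f} -> surviving fraction "
          f"{surviving_fraction(load * len(thresholds), thresholds):.2f}")
```

Below the critical load a finite fraction of elements survives; just above it the cascade runs to total collapse, the abrupt, first-order-like breakdown the abstract describes for large power grids.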

  17. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimates derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured-grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
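Extrapolation from a base grid to an infinite-size grid is conventionally done with Richardson extrapolation over systematically refined grids; a minimal sketch (the refinement ratio and test function are illustrative, not the Ares I procedure):

```python
import math

def richardson_extrapolate(f_fine, f_med, f_coarse, r=2.0):
    """Estimate the infinite-grid value of a computed quantity from three
    systematically refined grids (refinement ratio r), via the observed
    order of convergence p."""
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    f_inf = f_fine + (f_fine - f_med) / (r ** p - 1.0)
    return f_inf, p

# Toy check with a known limit: f(h) = 1.0 + 0.3 * h**2 tends to 1.0 as h -> 0
f = lambda h: 1.0 + 0.3 * h ** 2
f_inf, p = richardson_extrapolate(f(0.25), f(0.5), f(1.0))
print(f_inf, p)  # -> 1.0, 2.0 (second-order convergence)
```

The gap between the base-grid value and f_inf then serves as the discretization-error estimate, which is how the 23% vs 16% (and 8% vs 5%) deviations above can be interpreted.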

  18. Development of a plume-in-grid model for industrial point and volume sources: application to power plant and refinery sources in the Paris region

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Seigneur, C.; Duclaux, O.

    2014-04-01

    Plume-in-grid (PinG) models incorporating a host Eulerian model and a subgrid-scale model (usually a Gaussian plume or puff model) have been used for the simulations of stack emissions (e.g., fossil fuel-fired power plants and cement plants) for gaseous and particulate species such as nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM) and mercury (Hg). Here, we describe the extension of a PinG model to study the impact of an oil refinery where volatile organic compound (VOC) emissions can be important. The model is based on a reactive PinG model for ozone (O3), which incorporates a three-dimensional (3-D) Eulerian model and a Gaussian puff model. The model is extended to treat PM, with treatments of aerosol chemistry, particle size distribution, and the formation of secondary aerosols, which are consistent in both the 3-D Eulerian host model and the Gaussian puff model. Furthermore, the PinG model is extended to include the treatment of volume sources to simulate fugitive VOC emissions. The new PinG model is evaluated over Greater Paris during July 2009. Model performance is satisfactory for O3, PM2.5 and most PM2.5 components. Two industrial sources, a coal-fired power plant and an oil refinery, are simulated with the PinG model. The characteristics of the sources (stack height and diameter, exhaust temperature and velocity) govern the surface concentrations of primary pollutants (NOx, SO2 and VOC). O3 concentrations are impacted differently near the power plant than near the refinery, because of the presence of VOC emissions at the latter. The formation of sulfate is influenced by both the dispersion of SO2 and the oxidant concentration; however, the former tends to dominate in the simulations presented here. The impact of PinG modeling on the formation of secondary organic aerosol (SOA) is small and results mostly from the effect of different oxidant concentrations on biogenic SOA formation. 
The investigation of the criteria for injecting plumes into the host model (fixed travel time and/or puff size) shows that a size-based criterion is recommended to treat the formation of secondary aerosols (sulfate, nitrate, and ammonium), in particular, farther downwind of the sources (beyond about 15 km). The impacts of PinG modeling are less significant in a simulation with a coarse grid size (10 km) than with a fine grid size (2 km), because the concentrations of the species emitted from the PinG sources are relatively less important compared to background concentrations when injected into the host model with a coarser grid size.

  19. Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components of a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size, or simulations must be run at different resolutions, a scale-aware parameterization is desirable so that model parameters do not have to be tuned to a particular grid size. The subgrid variability of cloud hydrometeors is known to affect microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due both to the smaller grid size in high latitudes and the larger grid size in low latitudes in the longitude-latitude grid setting of CESM, and to the variation of the stability of the atmosphere. The single column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from biases in both the Pacific double-ITCZ precipitation and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. 
We show that MJO simulation is sensitive to entrainment rate specification. We found that shallow plumes can generate and sustain the MJO propagation in the model.

  20. Monitoring and Modeling Performance of Communications in Computational Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Le, Thuy T.

    2003-01-01

Computational grids may include many machines located at a number of sites. For efficient use of the grid, we need the ability to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know the exact parameters of the communication hardware and the current communication traffic, so we must rely on a model of network performance to estimate the message delivery time. Our approach to constructing such a model is based on observing message delivery times for various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show the presence of multiple bands in the logarithm of the message delivery times. These bands represent the multiple paths messages travel between the grid machines and are incorporated in our multiband model.
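The banding idea can be sketched in a few lines: collect (size, delivery time) observations, take the logarithm of the times, and group readings into bands by proximity to a running band center. This is a hypothetical illustration of the approach, not the authors' code; the band-width threshold and the clustering rule are assumptions.

```python
import math
from collections import defaultdict

def band_model(observations, band_width=0.5):
    """Group observed delivery times (seconds) by message size, then cluster
    log10(time) into bands: a reading joins the first band whose center is
    within `band_width`, otherwise it starts a new band."""
    bands = defaultdict(list)          # size -> list of [center, count]
    for size, t in observations:
        logt = math.log10(t)
        for band in bands[size]:
            if abs(logt - band[0]) < band_width:
                # running mean keeps the band center stable
                band[0] += (logt - band[0]) / (band[1] + 1)
                band[1] += 1
                break
        else:
            bands[size].append([logt, 1])
    return {size: sorted(b[0] for b in bs) for size, bs in bands.items()}

# Two distinct network paths show up as two log-time bands for one size.
obs = [(1024, 0.010), (1024, 0.011), (1024, 0.100), (1024, 0.098)]
centers = band_model(obs)
print(len(centers[1024]))   # two bands detected
```

A message-delivery-time estimate would then pick the band a route is currently in, rather than a single mean that falls between bands.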

  1. From grid cells to place cells with realistic field sizes

    PubMed Central

    2017-01-01

    While grid cells in the medial entorhinal cortex (MEC) of rodents have multiple, regularly arranged firing fields, place cells in the cornu ammonis (CA) regions of the hippocampus mostly have single spatial firing fields. Since there are extensive projections from MEC to the CA regions, many models have suggested that a feedforward network can transform grid cell firing into robust place cell firing. However, these models generate place fields that are consistently too small compared to those recorded in experiments. Here, we argue that it is implausible that grid cell activity alone can be transformed into place cells with robust place fields of realistic size in a feedforward network. We propose two solutions to this problem. Firstly, weakly spatially modulated cells, which are abundant throughout EC, provide input to downstream place cells along with grid cells. This simple model reproduces many place cell characteristics as well as results from lesion studies. Secondly, the recurrent connections between place cells in the CA3 network generate robust and realistic place fields. Both mechanisms could work in parallel in the hippocampal formation and this redundancy might account for the robustness of place cell responses to a range of disruptions of the hippocampal circuitry. PMID:28750005
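A toy 1-D version of the feedforward summation such models assume can make the paper's point concrete: summing grid inputs of several spacings and applying a firing threshold yields a single but narrow place field. All functions and constants below are illustrative assumptions, not the authors' model.

```python
import math

def grid_cell(x, spacing, phase):
    """Periodic 1-D firing-rate proxy for a grid cell (illustrative only)."""
    return max(0.0, math.cos(2 * math.pi * (x - phase) / spacing))

def place_response(x, modules, threshold=2.5):
    """Feedforward sum of grid inputs followed by a threshold nonlinearity."""
    drive = sum(grid_cell(x, s, p) for s, p in modules)
    return max(0.0, drive - threshold)

# Grid modules with different spacings all peak at x = 0, so the summed
# drive exceeds threshold only near x = 0 -> a single, narrow place field.
modules = [(30.0, 0.0), (42.0, 0.0), (59.0, 0.0)]
field = [x for x in range(-100, 101) if place_response(x, modules) > 0]
print(min(field), max(field))
```

The resulting field spans only a few units around the common peak, echoing the argument that grid input alone tends to produce place fields that are too small without additional weakly modulated input or recurrence.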

  2. Optimizing dynamic downscaling in one-way nesting using a regional ocean model

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun

    2016-10-01

Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational forecasting of regional-scale sea weather and for projecting future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results of Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it can address each error source separately, without combining the contributions of errors from different sources. Here, we focus on errors resulting from differences in spatial grids, updating intervals and domain sizes. After running the BBE separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
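The statistics behind a Taylor diagram are straightforward to compute; the sketch below shows the three quantities it plots (standard deviations, correlation, centered RMS difference) and the law-of-cosines identity that ties them together. The sample arrays are made up for illustration.

```python
import math

def taylor_stats(ref, model):
    """Statistics underlying a Taylor diagram: standard deviations of the
    reference and model fields, their correlation, and the centered RMS
    difference between them (population definitions, for illustration)."""
    n = len(ref)
    mr, mm = sum(ref) / n, sum(model) / n
    sr = math.sqrt(sum((r - mr) ** 2 for r in ref) / n)
    sm = math.sqrt(sum((m - mm) ** 2 for m in model) / n)
    corr = sum((r - mr) * (m - mm) for r, m in zip(ref, model)) / (n * sr * sm)
    crmsd = math.sqrt(sum(((m - mm) - (r - mr)) ** 2
                          for r, m in zip(ref, model)) / n)
    return sr, sm, corr, crmsd

ref = [1.0, 2.0, 3.0, 4.0]   # e.g. Big-Brother reference field samples
mod = [1.1, 2.0, 2.9, 4.2]   # nested-model samples at the same points
sr, sm, corr, crmsd = taylor_stats(ref, mod)
# Law of cosines linking the three diagram quantities:
assert abs(crmsd**2 - (sr**2 + sm**2 - 2 * sr * sm * corr)) < 1e-12
print(round(corr, 3))
```

Each candidate combination of grid size, updating period and domain size yields one point on the diagram; the best setup is the one closest to the reference point.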

  3. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of the grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. Factors considered in strategizing topological constructs and blocking structures include the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, the physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special topics in grid generation strategies are also addressed, including the modeling of control-surface deflections and material mapping.

  4. A Variable Resolution Atmospheric General Circulation Model for a Megasite at the North Slope of Alaska

    NASA Astrophysics Data System (ADS)

    Dennis, L.; Roesler, E. L.; Guba, O.; Hillman, B. R.; McChesney, M.

    2016-12-01

The Atmospheric Radiation Measurement (ARM) climate research facility has three sites located on the North Slope of Alaska (NSA): Barrow, Oliktok, and Atqasuk. These sites, in combination with one other at Toolik Lake, have the potential to become a "megasite" which would combine observational data and high-resolution modeling to produce high-resolution data products for the climate community. Such a data product requires high-resolution modeling over the area of the megasite. We present three variable-resolution atmospheric general circulation model (AGCM) configurations as potential alternatives to stand-alone high-resolution regional models. Each configuration is based on a global cubed-sphere grid with an effective resolution of 1 degree, refined down to 1/8 degree over an area surrounding the ARM megasite. The three grids vary in the size of the refined area, with 13k, 9k, and 7k elements. SquadGen, NCL, and GIMP are used to create the grids. Grids vary based upon the selection of areas of refinement which capture climate and weather processes that may affect a proposed NSA megasite. A smaller area of high resolution may not fully resolve climate and weather processes before they reach the NSA; however, grids with smaller areas of refinement have a significantly reduced computational cost compared with grids with larger areas of refinement. The optimal size and shape of the area of refinement for a variable-resolution model at the NSA are investigated.

  5. Optimal Grid Size for Inter-Comparability of MODIS And VIIRS Vegetation Indices at Level 2G or Higher

    NASA Astrophysics Data System (ADS)

    Campagnolo, M.; Schaaf, C.

    2016-12-01

Due to the necessity of time compositing and other user requirements, vegetation indices, like many other EOS-derived products, are distributed in a gridded format (level L2G or higher) on an equal-area sinusoidal grid, at grid sizes of 232 m, 463 m or 926 m. In this process, the actual surface signal suffers some degradation, caused both by the sensor's point spread function and by the resampling from swath to the regular grid. The magnitude of that degradation depends on a number of factors, such as surface heterogeneity, band nominal resolution, observation geometry and grid size. In this research, the effect of grid size is quantified for MODIS and VIIRS (at five EOS validation sites with distinct land covers), for the full range of view zenith angles, and at grid sizes of 232 m, 253 m, 309 m, 371 m, 397 m and 463 m. This allows us to compare MODIS and VIIRS gridded products for the same scenes, and to determine the grid size at which these products are most similar. Towards that end, simulated MODIS and VIIRS bands are generated from Landsat 8 surface reflectance images at each site, and gridded products are then derived using maximum-obscov resampling. Then, for every grid size, the original Landsat 8 NDVI and the derived MODIS and VIIRS NDVI products are compared. This methodology can be applied to other bands and products to determine which spatial aggregation is best suited overall for EOS to S-NPP product continuity. Results for the MODIS (250 m bands) and VIIRS (375 m bands) NDVI products show that finer grid sizes tend to be better at preserving the original signal. Significant degradation of gridded NDVI occurs when the grid size is larger than 253 m (MODIS) or 371 m (VIIRS). Our results suggest that the current MODIS "500 m" (actually 463 m) grid size is best for product continuity. 
Note, however, that up to that grid size, MODIS gridded products are somewhat better at preserving the surface signal than VIIRS, except at very high VZA.
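The grid-size effect can be demonstrated with a toy aggregation experiment: block-average a fine NDVI field to coarser grids and measure how far the coarse field departs from the original. This is a simplified stand-in for the swath-to-grid resampling studied above (no point spread function, no obscov weighting); the 4x4 field is invented for illustration.

```python
def block_mean(grid, f):
    """Aggregate a square 2-D list to an f-times coarser grid by block
    averaging (a stand-in for swath-to-grid resampling)."""
    n = len(grid)
    return [[sum(grid[i*f + a][j*f + b] for a in range(f) for b in range(f))
             / (f * f) for j in range(n // f)] for i in range(n // f)]

def rmse_vs_fine(fine, coarse, f):
    """RMS difference between the fine field and the coarse field
    broadcast back onto the fine cells."""
    n, total = len(fine), 0.0
    for i in range(n):
        for j in range(n):
            total += (fine[i][j] - coarse[i // f][j // f]) ** 2
    return (total / (n * n)) ** 0.5

# A heterogeneous NDVI field: degradation grows with grid size.
ndvi = [[0.1, 0.1, 0.8, 0.8],
        [0.1, 0.1, 0.8, 0.8],
        [0.8, 0.8, 0.1, 0.1],
        [0.8, 0.8, 0.1, 0.1]]
e2 = rmse_vs_fine(ndvi, block_mean(ndvi, 2), 2)
e4 = rmse_vs_fine(ndvi, block_mean(ndvi, 4), 4)
print(e2 < e4)   # coarser grid, larger departure from the original signal
```

Here the 2x aggregation happens to align with the surface patches and loses nothing, while the 4x aggregation mixes them; real imagery degrades more gradually, but the monotone trend with grid size is the same.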

  6. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

In this paper, we present an approach that uses a combination of cell-block and cell-face averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy and reducing model run-times, and to quantify how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
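The distinction between cell-block and cell-face treatment can be sketched as follows: block elevations come from averaging all fine cells inside a coarse cell, while face elevations are taken from the fine cells along the shared interface. Using the minimum along the face (one plausible variant, assumed here for illustration; the paper's exact averaging is not reproduced) lets a narrow channel keep connecting the coarse cells.

```python
def upscale(elev, f):
    """Upscale a square fine-grid elevation field by a factor f.
    Returns cell-block mean elevations plus cell-face elevations taken as
    the minimum along each vertical interface, so a channel crossing the
    face still connects the coarse cells (hedged sketch of the idea)."""
    n = len(elev)
    m = n // f
    block = [[sum(elev[i*f + a][j*f + b] for a in range(f) for b in range(f))
              / (f * f) for j in range(m)] for i in range(m)]
    # faces between coarse cell (i, j) and (i, j+1): scan the fine column
    # immediately left of the interface
    face = [[min(elev[i*f + a][(j + 1) * f - 1] for a in range(f))
             for j in range(m - 1)] for i in range(m)]
    return block, face

# A one-cell-wide channel (elevation 1) crossing between two coarse cells:
elev = [[5, 5, 5, 5],
        [5, 1, 1, 5],
        [5, 5, 5, 5],
        [5, 5, 5, 5]]
block, face = upscale(elev, 2)
print(block[0][0], face[0][0])   # block mean stays high; face keeps the channel
```

Block averaging alone would bury the channel at elevation 4; the face value of 1 preserves the preferential connection that the abstract describes.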

  7. 3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.

    2016-03-15

We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model developed to study planet–plasma interactions. The model is based on the hybrid formalism: ions are treated kinetically whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particle shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at the interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping-grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
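The particle-splitting step can be sketched in a few lines: a coarse macro-particle entering a region refined by a factor `ratio` per dimension becomes `ratio**3` children, each keeping the parent velocity (so the velocity distribution is untouched) and an equal share of the statistical weight. The data layout and child placement below are illustrative assumptions, not the model's actual implementation.

```python
def split_particle(particle, ratio):
    """Split a coarse-grid macro-particle entering a region refined by
    `ratio` per dimension into ratio**3 refined particles placed at the
    refined-cell centers around the parent position (hypothetical sketch)."""
    x, y, z = particle["pos"]
    w = particle["weight"] / ratio**3
    h = 1.0 / ratio          # refined cell size in coarse-cell units
    children = []
    for i in range(ratio):
        for j in range(ratio):
            for k in range(ratio):
                children.append({
                    "pos": (x + (i + 0.5) * h - 0.5,
                            y + (j + 0.5) * h - 0.5,
                            z + (k + 0.5) * h - 0.5),
                    "vel": particle["vel"],      # unchanged: no coalescing
                    "weight": w,
                })
    return children

p = {"pos": (0.0, 0.0, 0.0), "vel": (400.0, 0.0, 0.0), "weight": 8.0}
kids = split_particle(p, 2)
print(len(kids), sum(c["weight"] for c in kids))  # 8 children, weight conserved
```

Because every child inherits the parent velocity exactly, grid moments accumulated from the children reproduce those of the parent, which is the conservation property the abstract emphasizes.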

  8. JIGSAW-GEO (1.0): Locally Orthogonal Staggered Unstructured Grid Generation for General Circulation Modelling on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2017-01-01

An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  9. JIGSAW-GEO (1.0): locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere

    NASA Astrophysics Data System (ADS)

    Engwirda, Darren

    2017-06-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  10. Application of FUN3D Solver for Aeroacoustics Simulation of a Nose Landing Gear Configuration

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Lockard, David P.; Khorrami, Mehdi R.

    2011-01-01

    Numerical simulations have been performed for a nose landing gear configuration corresponding to the experimental tests conducted in the Basic Aerodynamic Research Tunnel at NASA Langley Research Center. A widely used unstructured grid code, FUN3D, is examined for solving the unsteady flow field associated with this configuration. A series of successively finer unstructured grids has been generated to assess the effect of grid refinement. Solutions have been obtained on purely tetrahedral grids as well as mixed element grids using hybrid RANS/LES turbulence models. The agreement of FUN3D solutions with experimental data on the same size mesh is better on mixed element grids compared to pure tetrahedral grids, and in general improves with grid refinement.

  11. GSRP/David Marshall: Fully Automated Cartesian Grid CFD Application for MDO in High Speed Flows

    NASA Technical Reports Server (NTRS)

    2003-01-01

    With the renewed interest in Cartesian gridding methodologies for the ease and speed of gridding complex geometries in addition to the simplicity of the control volumes used in the computations, it has become important to investigate ways of extending the existing Cartesian grid solver functionalities. This includes developing methods of modeling the viscous effects in order to utilize Cartesian grids solvers for accurate drag predictions and addressing the issues related to the distributed memory parallelization of Cartesian solvers. This research presents advances in two areas of interest in Cartesian grid solvers, viscous effects modeling and MPI parallelization. The development of viscous effects modeling using solely Cartesian grids has been hampered by the widely varying control volume sizes associated with the mesh refinement and the cut cells associated with the solid surface. This problem is being addressed by using physically based modeling techniques to update the state vectors of the cut cells and removing them from the finite volume integration scheme. This work is performed on a new Cartesian grid solver, NASCART-GT, with modifications to its cut cell functionality. The development of MPI parallelization addresses issues associated with utilizing Cartesian solvers on distributed memory parallel environments. This work is performed on an existing Cartesian grid solver, CART3D, with modifications to its parallelization methodology.

  12. ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities

    NASA Astrophysics Data System (ADS)

    Neggers, R.

    2014-12-01

Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple-plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, are creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi-plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme. 
The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.
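The idea of building a plume ensemble in "size-space" can be sketched as follows: discretize a size density, give each plume a size-dependent entrainment rate, and sum the per-size contributions to obtain the bulk mass flux. The power-law density and the epsilon ~ 1/size rule are common assumptions used purely for illustration, not constants from ED(MF)n.

```python
def plume_ensemble(n_plumes, l_max=1000.0, m0=0.1):
    """Discretize a plume size density in 'size-space': smaller plumes are
    more numerous but entrain more and so detrain lower (hedged sketch;
    all constants are illustrative)."""
    plumes = []
    for i in range(1, n_plumes + 1):
        size = l_max * i / n_plumes          # plume radius proxy [m]
        number = 1.0 / size                  # power-law-like size density
        entrainment = 1.0 / size             # epsilon ~ 1/size [1/m]
        mass_flux = m0 * number * size**2    # contribution to bulk flux
        plumes.append((size, entrainment, mass_flux))
    return plumes

ensemble = plume_ensemble(4)
bulk_flux = sum(p[2] for p in ensemble)
# Smaller plumes entrain more; larger plumes dominate the bulk mass flux.
assert ensemble[0][1] > ensemble[-1][1]
assert ensemble[-1][2] == max(p[2] for p in ensemble)
```

Scale-awareness then amounts to truncating the ensemble at the grid-box size: a smaller grid box admits only the smaller plumes, and the parameterized flux shrinks automatically.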

  13. The Effect of DEM Source and Grid Size on the Index of Connectivity in Savanna Catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Ben; Sidle, Roy; Bartley, Rebecca; Roth, Christian

    2017-04-01

The term "hydrological connectivity" is increasingly used instead of sediment delivery ratio to describe the linkage between the sources of water and sediment within a catchment to the catchment outlet. Sediment delivery ratio is an empirical parameter that is highly site-specific and tends to lump all processes, whilst hydrological connectivity focuses on the spatially-explicit hydrologic drivers of surficial processes. Detailed topographic information plays a fundamental role in geomorphological interpretations as well as quantitative modelling of sediment fluxes and connectivity. Geomorphometric analysis permits a detailed characterization of drainage area and drainage pattern together with the possibility of characterizing surface roughness. High resolution topographic data (i.e., LiDAR) are not available for all areas; however, remotely sensed topographic data from multiple sources with different grid sizes are used to undertake geomorphologic analysis in data-sparse regions. The Index of Connectivity (IC), a geomorphometric model based only on DEM data, is applied in two small savanna catchments in Queensland, Australia. The influence of the scale of the topographic data is explored by using DEMs from LiDAR (1 m), WorldDEM (10 m), and raw and hydrologically corrected SRTM-derived data (30 m) to calculate the index of connectivity. The effect of the grid size is also investigated by resampling the high resolution LiDAR DEM to multiple grid sizes (e.g. 5, 10, 20 m) and comparing the extracted IC.
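The resampling experiment can be sketched with a toy DEM: block-average a fine grid to coarser grid sizes and watch a slope-based quantity (slope is a core ingredient of the IC, which combines upslope and downslope components as IC = log10(Dup/Ddn)) smooth out. The DEM and the slope proxy below are illustrative, not the study's data or the full IC formulation.

```python
def resample(dem, f):
    """Block-mean resampling of a square DEM to an f-times coarser grid."""
    n = len(dem)
    return [[sum(dem[i*f + a][j*f + b] for a in range(f) for b in range(f))
             / (f * f) for j in range(n // f)] for i in range(n // f)]

def mean_abs_slope(dem, cell):
    """Mean absolute slope between horizontally adjacent cells, as a
    simple proxy for the terrain attributes feeding the IC."""
    n, total, count = len(dem), 0.0, 0
    for i in range(n):
        for j in range(n - 1):
            total += abs(dem[i][j + 1] - dem[i][j]) / cell
            count += 1
    return total / count

# Fine-scale micro-topography disappears under coarse resampling.
dem = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
s1 = mean_abs_slope(dem, 1.0)
s2 = mean_abs_slope(resample(dem, 2), 2.0)
print(s1 > s2)   # coarser grid -> smoother surface -> lower slopes
```

This is why IC values extracted from 30 m SRTM data cannot simply be compared with those from 1 m LiDAR: the coarse grid has already filtered out the roughness that controls connectivity.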

  14. Microgrid Design Toolkit (MDT) User Guide Software v1.2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eddy, John P.

    2017-08-01

The Microgrid Design Toolkit (MDT) supports decision analysis for new ("greenfield") microgrid designs as well as microgrids with existing infrastructure. The current version of MDT includes two main capabilities. The first capability, the Microgrid Sizing Capability (MSC), is used to determine the size and composition of a new, grid-connected microgrid in the early stages of the design process. MSC is focused on developing a microgrid that is economically viable when connected to the grid. The second capability is focused on designing a microgrid for operation in islanded mode. This second capability relies on two models: the Technology Management Optimization (TMO) model and the Performance Reliability Model (PRM).

  15. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
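One coupling pass of the shared-node scheme can be illustrated with a 1-D toy: solve the parent grid, hand interpolated heads to the child boundary (the 3-D analogue of this step is the cage-shell interpolation), solve the refined child, and compute the boundary flux that would feed back to the parent. Uniform conductivity, a single pass rather than the full iteration, and the Jacobi solver are all simplifying assumptions for illustration.

```python
def jacobi_laplace(hl, hr, n, iters=2000):
    """Heads at n interior nodes of a uniform 1-D aquifer with fixed
    boundary heads (Jacobi sweeps on the discrete Laplace equation)."""
    h = [0.0] * n
    for _ in range(iters):
        h = [(([hl] + h)[i] + (h + [hr])[i + 1]) / 2 for i in range(n)]
    return h

# Parent grid: coarse solution over [0, 1] with h(0)=0, h(1)=1.
parent = [0.0] + jacobi_laplace(0.0, 1.0, 3) + [1.0]   # nodes every 0.25
# Child grid refines [0.25, 0.75]; its boundary heads are taken from the
# parent at shared nodes.
hl, hr = parent[1], parent[3]
child = jacobi_laplace(hl, hr, 3)                      # nodes every 0.125
# Darcy flux across the child boundary (unit conductivity) would be fed
# back to the parent as a specified-flux condition on the next iteration.
flux = (child[0] - hl) / 0.125
print(round(hl, 6), round(child[1], 6), round(flux, 6))
```

In this homogeneous toy the head field is linear, so one pass already has heads and fluxes in equilibrium; with heterogeneity, the paper's results show the iteration is essential for the child's extra accuracy to be realized.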

  16. LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods

    NASA Technical Reports Server (NTRS)

    Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.

    2017-01-01

One of the earliest approaches in gain-scheduling control is the gridding-based approach, in which a set of local linear time-invariant models is obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of the s-shifted H2 and H-infinity norms are introduced and used as a metric to measure model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric to align all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. A reduced-order model is then constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purposes, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.

  17. Spiking Neurons in a Hierarchical Self-Organizing Map Model Can Learn to Develop Spatial and Temporal Properties of Entorhinal Grid Cells and Hippocampal Place Cells

    PubMed Central

    Pilly, Praveen K.; Grossberg, Stephen

    2013-01-01

    Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enable them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. 
The spiking model also opens a path for implementation in brain-emulating nanochips comprised of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous adaptive robots capable of spatial navigation. PMID:23577130

  18. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

Large point sources in the Chungnam area received nation-wide attention in South Korea because the area lies southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevalent summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during the KORUS-AQ 2016 campaign, including its aircraft measurements. In general, the horizontal grid resolution of an Eulerian photochemical model has a profound effect on estimated air pollutant concentrations. This is due to the formulation of grid models: emissions in a grid cell are assumed to be well mixed below the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with extensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observations as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.

  19. Combined effect of pulse density and grid cell size on predicting and mapping aboveground carbon in fast‑growing Eucalyptus forest plantation using airborne LiDAR data

    Treesearch

    Carlos Alberto Silva; Andrew Thomas Hudak; Carine Klauberg; Lee Alexandre Vierling; Carlos Gonzalez‑Benecke; Samuel de Padua Chaves Carvalho; Luiz Carlos Estraviz Rodriguez; Adrian Cardil

    2017-01-01

LiDAR measurements can be used to predict and map aboveground carbon (AGC) across variable-age Eucalyptus plantations with adequate precision and accuracy using 5 pulses m−2 and a grid cell size of 5 m. The promising results for AGC modeling in this study will allow for greater confidence in comparing AGC estimates obtained with varying LiDAR sampling densities for Eucalyptus plantations...

  20. Comparison of Models for Spacer Grid Pressure Loss in Nuclear Fuel Bundles for One and Two-Phase Flows

    NASA Astrophysics Data System (ADS)

    Maskal, Alan B.

Spacer grids maintain the structural integrity of the fuel rods within the fuel bundles of nuclear power plants. They can also improve flow characteristics within the nuclear reactor core. However, spacer grids add reactor coolant pressure losses, which must be estimated and accounted for in the design. Several mathematical models and computer codes have been developed over the decades to predict spacer grid pressure loss. Most models use generalized characteristics, measured by older, less precise equipment. The OECD/US-NRC BWR Full-Size Fine-Mesh Bundle Test (BFBT) benchmark provides updated and detailed experimental single- and two-phase results, using technically advanced flow measurements over a wide range of boundary conditions. This thesis compares the predictions of the mathematical models to the BFBT experimental data using statistical measures of accuracy and precision. This thesis also analyzes the effects of BFBT flow characteristics on spacer grids. No single model has been identified as valid for all flow conditions. However, some models' predictions perform better than others within a range of flow conditions, based on the accuracy and precision of the models' predictions. This study also demonstrates that pressure and flow quality have a significant effect on the biases of two-phase flow spacer grid models.
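Many of the spacer grid loss models compared in studies like this reduce to a form loss coefficient applied to the dynamic head. A minimal sketch, assuming a Rehme-type coefficient K = Cv·ε² (the Cv value and flow conditions below are illustrative assumptions, not BFBT data or the thesis's models):

```python
def spacer_pressure_drop(rho_kg_m3, v_m_s, blockage_ratio, cv=7.0):
    """Single-phase spacer pressure drop (Pa) from a Rehme-type loss
    coefficient K = Cv * eps^2 applied to the dynamic head. Cv ~ 6-7 is an
    assumed modified drag coefficient; eps is the ratio of the grid's
    projected area to the bundle flow area."""
    k = cv * blockage_ratio ** 2
    return 0.5 * k * rho_kg_m3 * v_m_s ** 2

# Illustrative BWR-like conditions.
dp = spacer_pressure_drop(750.0, 4.0, 0.3)
```

The quadratic dependence on both velocity and blockage ratio is what makes the loss coefficient so sensitive to grid geometry.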

  1. HOMER: The Micropower Optimization Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2004-03-01

HOMER, the micropower optimization model, helps users design micropower systems for off-grid and grid-connected power applications. HOMER models micropower systems with one or more power sources, including wind turbines, photovoltaics, biomass power, hydropower, diesel engines, cogeneration, batteries, fuel cells, and electrolyzers. Users can explore a range of design questions, such as which technologies are most cost-effective, how large components should be, how project economics are affected by changes in loads or costs, and whether the renewable resource is adequate.

  2. Impact of grid size on uniform scanning and IMPT plans in XiO treatment planning system for brain cancer

    PubMed Central

    Zheng, Yuanshui

    2015-01-01

The main purposes of this study are to: 1) evaluate the accuracy of the XiO treatment planning system (TPS) for different dose calculation grid sizes based on head phantom measurements in uniform scanning proton therapy (USPT); and 2) compare the dosimetric results for various dose calculation grid sizes based on a real computed tomography (CT) dataset of pediatric brain cancer treatment plans generated by USPT and intensity-modulated proton therapy (IMPT) techniques. For the phantom study, we utilized the anthropomorphic head proton phantom provided by the Imaging and Radiation Oncology Core (IROC). The imaging, treatment planning, and beam delivery were carried out following the guidelines provided by the IROC. The USPT plan was generated in the XiO TPS, and dose calculations were performed for grid sizes ranging from 1 to 3 mm. The phantom, containing thermoluminescent dosimeters (TLDs) and films, was irradiated using a uniform scanning proton beam. The irradiated TLDs were read by the IROC. The calculated doses from XiO for different grid sizes were compared to the measured TLD doses provided by the IROC. Gamma evaluation was done by comparing the calculated planar dose distribution at 3 mm grid size with the measured planar dose distribution. Additionally, an IMPT plan was generated based on the same CT dataset of the IROC phantom, and IMPT dose calculations were performed for grid sizes ranging from 1 to 3 mm. For comparative purposes, additional gamma analyses were done by comparing the planar dose distributions of the standard grid size (3 mm) with those of the other grid sizes (1, 1.5, 2, and 2.5 mm) for both the USPT and IMPT plans. For the patient study, USPT plans of three pediatric brain cancer cases were selected, and IMPT plans were generated for each of the three cases. All patient treatment plans (USPT and IMPT) were generated in the XiO TPS for a total dose of 54 Gy (relative biological effectiveness [RBE]).
Treatment plans (USPT and IMPT) of each case were recalculated for grid sizes of 1, 1.5, 2, and 2.5 mm; these dosimetric results were then compared with those of the 3 mm grid size. Phantom study results: There was no distinct trend showing a dependence of dose calculation accuracy on grid size when the calculated point doses for different grid sizes were compared to the measured point (TLD) doses. On average, the calculated point dose was higher than the measured dose by 1.49% and 2.63% for the right and left TLDs, respectively. The gamma analysis showed very minimal differences among the planar dose distributions of the various grid sizes, with the percentage of points meeting the 1%/1 mm gamma index criteria ranging from 97.92% to 99.97%. The gamma evaluation using 2%/2 mm criteria showed that both the IMPT and USPT plans had 100% of points meeting the criteria. Patient study results: In USPT, there was no distinct relationship between the absolute difference in mean planning target volume (PTV) dose and grid size, whereas in IMPT, decreasing the grid size slightly increased the PTV maximum dose and decreased the PTV mean dose and PTV D50%. For the PTV doses, the average differences were up to 0.35 Gy (RBE) and 1.47 Gy (RBE) in the USPT and IMPT plans, respectively. Dependence on grid size was not clear for the organs at risk (OARs), with average differences ranging from −0.61 Gy (RBE) to 0.53 Gy (RBE) in the USPT plans and from −0.83 Gy (RBE) to 1.39 Gy (RBE) in the IMPT plans. In conclusion, the difference in the calculated point dose between the smallest grid size (1 mm) and the largest grid size (3 mm) in the phantom for USPT was typically less than 0.1%. The patient study showed that decreasing the grid size slightly increased the PTV maximum dose in both the USPT and IMPT plans. However, no distinct trend was observed between the absolute differences in dosimetric parameters and the dose calculation grid size for the OARs.
Grid size has a large effect on dose calculation efficiency, and use of a grid size of 2 mm or less can increase the dose calculation time significantly. A grid size of either 2.5 or 3 mm is recommended for dose calculations of pediatric brain cancer plans generated by the USPT and IMPT techniques in the XiO TPS. PACS numbers: 87.55.D‐, 87.55.ne, 87.55.dk PMID:26699310
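The gamma evaluations described above combine a dose tolerance with a distance-to-agreement tolerance. A minimal 1-D global gamma computation, as a sketch of the criterion itself (not the XiO/IROC implementation; brute-force search, global dose normalization):

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.01, dist_tol_mm=1.0):
    """Global 1-D gamma index: for each reference point, the minimum over all
    evaluated points of sqrt((dD/dose_tol)^2 + (dx/dist_tol)^2), with the
    dose tolerance taken as a fraction of the reference maximum."""
    d_max = d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dd = (d_eval - dr) / (dose_tol * d_max)
        dx = (x_eval - xr) / dist_tol_mm
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(gammas)

# Identical distributions pass everywhere (gamma == 0 at every point).
x = np.linspace(0.0, 50.0, 101)
d = np.exp(-((x - 25.0) / 10.0) ** 2)
pass_rate = (gamma_1d(x, d, x, d) <= 1.0).mean()
```

A point passes when its gamma value is at most 1; the quoted pass rates are the fraction of points meeting that condition.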

  3. SU-E-T-374: Evaluation and Verification of Dose Calculation Accuracy with Different Dose Grid Sizes for Intracranial Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C; Schultheiss, T

Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate volume dose distribution with dose grid size ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution for different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to underestimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.

  4. National Assessment of Energy Storage for Grid Balancing and Arbitrage: Phase 1, WECC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kintner-Meyer, Michael CW; Balducci, Patrick J.; Colella, Whitney G.

    2012-06-01

To examine the role that energy storage could play in mitigating the impacts of the stochastic variability of wind generation on regional grid operation, the Pacific Northwest National Laboratory (PNNL) examined a hypothetical 2020 grid scenario in which additional wind generation capacity is built to meet renewable portfolio standard targets in the Western Interconnection. PNNL developed a stochastic model for estimating the balancing requirements using historical wind statistics and forecasting error; a detailed engineering model of the dispatch of energy storage and fast-ramping generation devices, for estimating the size requirements of energy storage and generation systems that meet the new balancing requirements; and financial models for estimating the life-cycle cost of storage and generation systems in addressing the future balancing requirements for sub-regions of the Western Interconnection. Evaluated technologies include combustion turbines, sodium-sulfur (Na-S) batteries, lithium-ion batteries, pumped-hydro energy storage, compressed air energy storage, flywheels, redox flow batteries, and demand response. Distinct power and energy capacity requirements were estimated for each technology option, and battery size was optimized to minimize costs. Modeling results indicate that in a future power grid with high penetration of renewables, the most cost-competitive technologies for meeting balancing requirements include Na-S batteries and flywheels.
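The distinct power and energy capacity requirements mentioned above can be sketched from a balancing time series: the power rating comes from the peak instantaneous requirement, the energy rating from the swing of the cumulative energy delivered. A deliberately simplified illustration that ignores losses, reserve margins, and the report's cost optimization:

```python
import numpy as np

def storage_requirements(balance_mw, dt_h=1.0):
    """Power rating (MW) and energy rating (MWh) needed to follow a
    balancing signal (positive = discharge, negative = charge)."""
    power_mw = float(np.abs(balance_mw).max())
    energy = np.cumsum(balance_mw) * dt_h            # cumulative energy delivered
    energy_mwh = float(energy.max() - energy.min())  # peak-to-trough swing
    return power_mw, energy_mwh

# Hypothetical 4-hour balancing signal in MW.
p, e = storage_requirements(np.array([2.0, -1.0, -1.0, 2.0]))
```

The two ratings are decoupled: a signal that oscillates quickly needs high power but little energy, while a sustained imbalance needs the reverse, which is why the report sizes them separately per technology.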

  5. A GRID OF THREE-DIMENSIONAL STELLAR ATMOSPHERE MODELS OF SOLAR METALLICITY. I. GENERAL PROPERTIES, GRANULATION, AND ATMOSPHERIC EXPANSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trampedach, Regner; Asplund, Martin; Collet, Remo

    2013-05-20

Present grids of stellar atmosphere models are the workhorses in interpreting stellar observations and determining their fundamental parameters. These models rely on greatly simplified treatments of convection, however, which limits their predictive power for late-type stars. We present a grid of improved and more reliable stellar atmosphere models of late-type stars, based on deep, three-dimensional (3D), convective stellar atmosphere simulations. This grid is intended for general use in interpreting observations and improving stellar and asteroseismic modeling. We solve the Navier-Stokes equations in 3D, concurrently with the radiative transfer equation, for a range of atmospheric parameters covering most of stellar evolution with convection at the surface. We emphasize the use of the best available atomic physics for quantitative predictions and comparisons with observations. We present granulation size, convective expansion of the acoustic cavity, and the asymptotic adiabat as functions of atmospheric parameters.

  6. Evaluation of simplified stream-aquifer depletion models for water rights administration

    USGS Publications Warehouse

    Sophocleous, Marios; Koussis, Antonis; Martin, J.L.; Perkins, S.P.

    1995-01-01

We assess the predictive accuracy of Glover's (1974) stream-aquifer analytical solutions, which are commonly used in administering water rights, and evaluate the impact of the assumed idealizations on administrative and management decisions. To achieve these objectives, we evaluate the predictive capabilities of the Glover stream-aquifer depletion model against the MODFLOW numerical standard, which, unlike the analytical model, can handle increasing hydrogeologic complexity. We rank-order and quantify the relative importance of the various assumptions on which the analytical model is based, the three most important being: (1) streambed clogging, as quantified by the streambed-aquifer hydraulic conductivity contrast; (2) the degree of stream partial penetration; and (3) aquifer heterogeneity. These three factors relate directly to the multidimensional nature of the aquifer flow conditions. It follows that future efforts to reduce the uncertainty in stream depletion-related administrative decisions should primarily address these three factors in characterizing the stream-aquifer process. We also investigate the impact of progressively coarser model grid sizes on numerically estimated stream leakage and conclude that grid size effects are relatively minor. Therefore, when modeling is required, coarser model grids could be used, thus minimizing the input data requirements.
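The Glover analytical solution evaluated above has a closed form for the stream depletion fraction. A minimal sketch, assuming the standard Glover-Balmer expression for a fully penetrating stream in a homogeneous aquifer (the parameter values are illustrative):

```python
from math import erfc, sqrt

def glover_depletion_fraction(d_m, transmissivity_m2_d, storativity, t_days):
    """Glover-Balmer solution: fraction of the well pumping rate supplied by
    stream depletion at time t, for a well at distance d from a fully
    penetrating stream in a homogeneous aquifer."""
    return erfc(sqrt(storativity * d_m ** 2 / (4.0 * transmissivity_m2_d * t_days)))

# Illustrative values: the depletion fraction grows toward 1 with time.
f_30d = glover_depletion_fraction(300.0, 500.0, 0.2, 30.0)
f_3000d = glover_depletion_fraction(300.0, 500.0, 0.2, 3000.0)
```

The three idealizations ranked above (no streambed clogging, full penetration, homogeneity) are exactly the assumptions baked into this one-line formula.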

  7. Visual Analytics for Power Grid Contingency Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.; Huang, Zhenyu; Chen, Yousu

    2014-01-20

Contingency analysis is the process of employing different measures to model scenarios, analyze them, and then derive the best response to remove the threats. This application paper focuses on a class of contingency analysis problems found in the power grid management system. A power grid is a geographically distributed interconnected transmission network that transmits and delivers electricity from generators to end users. The power grid contingency analysis problem is increasingly important because of both the growing size of the underlying raw data that need to be analyzed and the urgency to deliver working solutions in an aggressive timeframe. Failure to do so may bring significant financial, economic, and security impacts to all parties involved and society at large. The paper presents a scalable visual analytics pipeline that transforms about 100 million contingency scenarios into a manageable size and form for grid operators to examine different scenarios and come up with preventive or mitigation strategies to address the problems in a predictive and timely manner. Great attention is given to the computational scalability, information scalability, visual scalability, and display scalability issues surrounding the data analytics pipeline. Most of the large-scale computations of our work are conducted on a Cray XMT multi-threaded parallel computer. The paper demonstrates a number of examples using western North American power grid models and data.

  8. Numerical modelling of needle-grid electrodes for negative surface corona charging system

    NASA Astrophysics Data System (ADS)

    Zhuang, Y.; Chen, G.; Rotaru, M.

    2011-08-01

Surface potential decay measurement is a simple and low-cost tool for examining the electrical properties of insulation materials. During the corona charging stage, a needle-grid electrode system is often used to achieve uniform charge distribution on the surface of the sample. In this paper, a model using COMSOL Multiphysics has been developed to simulate the gas discharge. A well-known hydrodynamic drift-diffusion model was used. The model consists of a set of continuity equations accounting for the movement, generation and loss of charge carriers (electrons, positive and negative ions), coupled with Poisson's equation to take into account the effect of space and surface charges on the electric field. Four models with the grid electrode in different positions, and several mesh sizes, are compared with a model that has only the needle electrode. The results for impulse current and surface charge density on the sample clearly show the effect of the extra grid electrode at various positions.

  9. Three-dimensional hydrodynamic Bondi-Hoyle accretion. 2: Homogeneous medium at Mach 3 with gamma = 5/3

    NASA Technical Reports Server (NTRS)

    Ruffert, Maximilian; Arnett, David

    1994-01-01

We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. Totally absorbing spheres of varying sizes (from 10 down to 0.01 accretion radii) move at Mach 3 relative to a homogeneous and slightly perturbed medium, which is taken to be an ideal gas (gamma = 5/3). To accommodate the long-range gravitational forces, the extent of the computational volume is 32³ accretion radii. We examine the influence of numerical procedure on physical behavior. The hydrodynamics is modeled by the 'piecewise parabolic method.' No energy sources (nuclear burning) or sinks (radiation, conduction) are included. The resolution in the vicinity of the accretor is increased by multiply nesting several (5-10) grids around the sphere, each finer grid being a factor of 2 smaller in zone dimension than the next coarser grid. The largest dynamic range (ratio of the size of the largest grid to the size of the finest zone) is 16,384. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid, while at the same time evolving the gas on the coarser grids. Initially (at time t = 0-10), a shock front is set up, a Mach cone develops, and the accretion column is observable. Eventually the flow becomes unstable, destroying axisymmetry. This happens approximately when the mass accretion rate reaches the values (±10%) predicted by the Bondi-Hoyle accretion formula (factor of 2 included). However, our three-dimensional models do not show the highly dynamic flip-flop flow so prominent in two-dimensional calculations performed by other authors. The flow, and thus the accretion rate of all quantities, shows quasi-periodic (P ≈ 5) cycles between quiescent and active states. The interpolation formula proposed in an accompanying paper is found to follow the collected numerical data to within approximately 30%.
The specific angular momentum accreted is of the same order of magnitude as the values previously found for two-dimensional flows.
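The Bondi-Hoyle accretion formula against which the simulated rates are compared can be evaluated directly. A minimal sketch of the standard Bondi-Hoyle-Lyttleton interpolation rate (the mass and density values are illustrative assumptions; this is not the paper's refined interpolation formula):

```python
from math import pi

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bhl_rate(mass_kg, rho_kg_m3, c_sound, v_inf):
    """Bondi-Hoyle-Lyttleton interpolation: accretion rate (kg/s) of a point
    mass moving at v_inf through gas of density rho and sound speed c_sound."""
    return 4.0 * pi * G ** 2 * mass_kg ** 2 * rho_kg_m3 / (c_sound ** 2 + v_inf ** 2) ** 1.5

# Mach 3 flow, as in the simulations; mass and density are illustrative.
rate = bhl_rate(2.0e30, 1.0e-18, 3.0e3, 9.0e3)
```

The rate scales as the square of the accretor mass and falls off as the relative speed cubed in the supersonic limit, which is the behavior the numerical runs are checked against.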

  10. Abruptness of Cascade Failures in Power Grids

    NASA Astrophysics Data System (ADS)

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-01

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into ``super-grids''.
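The abruptness of breakdown as capacity margins shrink can be illustrated with a toy load-redistribution cascade (a deliberately simplified model for intuition only, not the statistical-physics mapping used in the paper):

```python
def cascade_failures(loads, capacities):
    """Toy overload cascade: any line loaded above its capacity fails, and
    its load is redistributed equally over the surviving lines; repeat until
    no line is overloaded. Returns the indices of failed lines."""
    loads = list(loads)
    alive = set(range(len(loads)))
    while True:
        newly_failed = {i for i in alive if loads[i] > capacities[i]}
        if not newly_failed:
            return sorted(set(range(len(loads))) - alive)
        shed = sum(loads[i] for i in newly_failed)
        alive -= newly_failed
        if not alive:
            return sorted(range(len(loads)))
        for i in alive:
            loads[i] += shed / len(alive)

# A slightly tighter capacity margin turns one failure into a total blackout.
partial = cascade_failures([1.3, 0.9, 0.9], [1.2, 2.0, 2.0])
total = cascade_failures([1.3, 0.9, 0.9], [1.2, 1.5, 1.5])
```

The discontinuous jump from one failed line to a system-wide collapse under a small parameter change is the qualitative signature of the first-order transition discussed above.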

  11. Abruptness of cascade failures in power grids.

    PubMed

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-15

    Electric power-systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to the increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power-system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first order transition in the large size limit. Such an enhancement for the systemic risk failures (black-outs) with increasing network size is an effect that should be considered in the current projects aiming to integrate national power-grids into "super-grids".

  12. GridTool: A surface modeling and grid generation tool

    NASA Technical Reports Server (NTRS)

    Samareh-Abolhassani, Jamshid

    1995-01-01

GridTool is designed around the concept that the surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and their surface parameterizations. However, the resulting surface grids are close to, but not on, the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can be easily extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically, so that once the problem is set up, one can modify the surfaces and the entire set of points, curves, and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI C; the interface is based on the FORMS library, and the graphics on the GL library. The code has been tested successfully on IRIS workstations running IRIX 4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked-list structure, which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects, such as points, curves, patches, sources, and surfaces. At any given time there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file discussed later.
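A bi-linear patch of the kind GridTool builds surface grids on is evaluated by blending its four corner points. A minimal sketch (the function and argument names are illustrative, not GridTool's API):

```python
def bilinear_patch(p00, p10, p01, p11, u, v):
    """Evaluate a bi-linear patch at (u, v) in [0, 1]^2 by blending its four
    corner points (each given as an (x, y, z) tuple)."""
    return tuple(
        (1 - u) * (1 - v) * a + u * (1 - v) * b + (1 - u) * v * c + u * v * d
        for a, b, c, d in zip(p00, p10, p01, p11)
    )

# Patch center for a unit square lifted at one corner.
center = bilinear_patch((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1), 0.5, 0.5)
```

Because every surface point is a simple function of (u, v) and four corners, grid points generated this way stay near, but not exactly on, a curved CAD surface, hence the projection step described above.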

  13. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 2; Aggregation

    NASA Technical Reports Server (NTRS)

    Schamschula, Marius; Crosson, William L.; Inguva, Ramarao; Yates, Thomas; Laymen, Charles A.; Caulfield, John

    1998-01-01

This is a follow-up to the preceding presentation by Crosson. The grid size of remote microwave measurements is much coarser than that of the hydrological model's computational grids. To validate the hydrological models against measurements, we propose mechanisms for aggregating the model's soil moisture outputs to allow comparison with the measurements. Weighted neighborhood averaging methods are proposed to facilitate the comparison. We also discuss complications such as misalignment, rotation, and other distortions introduced by a generalized sensor image.
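The weighted neighborhood averaging proposed above amounts to aggregating fine model cells into a coarse sensor footprint. A minimal sketch, assuming an integer ratio between footprint and model grid and ignoring the misalignment and rotation complications (the uniform weighting is illustrative):

```python
import numpy as np

def aggregate_to_footprint(fine, factor, weights=None):
    """Aggregate a fine model grid to a coarse sensor footprint by weighted
    averaging of factor x factor blocks (uniform weights by default)."""
    n, m = fine.shape
    assert n % factor == 0 and m % factor == 0
    if weights is None:
        weights = np.ones((factor, factor))
    w = weights / weights.sum()
    blocks = fine.reshape(n // factor, factor, m // factor, factor)
    return np.einsum('ifjg,fg->ij', blocks, w)

# 4x4 model grid aggregated to a 2x2 footprint grid.
coarse = aggregate_to_footprint(np.arange(16.0).reshape(4, 4), 2)
```

Non-uniform `weights` (for example, a sensor antenna pattern) slot in without changing the structure of the computation.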

  14. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
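Path integration of the kind modeled above is, at its core, accumulation of velocity samples, with intrinsic noise producing diffusive error growth. A minimal sketch of that principle (parameters are illustrative, not taken from the attractor model):

```python
import numpy as np

def integrate_path(velocities, dt=0.02, noise_std=0.0, seed=0):
    """Dead-reckon 2-D position by accumulating velocity samples; optional
    Gaussian noise per sample mimics intrinsic network variability."""
    rng = np.random.default_rng(seed)
    noisy = velocities + rng.normal(0.0, noise_std, size=velocities.shape)
    return np.cumsum(noisy * dt, axis=0)

# Noise-free integration recovers the true path exactly; with noise the
# position error grows diffusively (~sqrt(elapsed time)), which is what
# bounds the useful integration window.
path = integrate_path(np.ones((100, 2)), dt=0.02)
```

The paper's 10-100 m / 1-10 min bounds are, in this picture, the point at which the accumulated diffusive error outgrows the grid spatial scale.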

  15. Evolution of aerosol downwind of a major highway

    NASA Astrophysics Data System (ADS)

    Liggio, J.; Staebler, R. M.; Brook, J.; Li, S.; Vlasenko, A. L.; Sjostedt, S. J.; Gordon, M.; Makar, P.; Mihele, C.; Evans, G. J.; Jeong, C.; Wentzell, J. J.; Lu, G.; Lee, P.

    2010-12-01

Primary aerosol from traffic emissions can have a considerable impact on local and regional scale air quality. In order to assess the effect of these emissions and of future emissions scenarios, air quality models are required which utilize emissions representative of real-world conditions. Often, the emissions processing systems which provide emissions input for the air quality models rely on laboratory testing of individual vehicles under non-ambient conditions. However, on the sub-grid scale, particle evolution may lead to changes in the primary emitted size distribution and gas-particle partitioning that are not properly considered when the emissions are 'instantly mixed' within the grid volume. The effect of this modeling convention on model results is not well understood. In particular, changes in organic gas/particle partitioning may result in particle evaporation or condensation onto pre-existing aerosol. The result is a change in the particle distribution and/or an increase in the organic mass available for subsequent gas-phase oxidation. These effects may be missing from air-quality models, and a careful analysis of field data is necessary to quantify their impact. A study of the sub-grid evolution of aerosols (FEVER: Fast Evolution of Vehicle Emissions from Roadways) was conducted in the Toronto area in the summer of 2010. The study included mobile measurements of particle size distributions with a Fast Mobility Particle Sizer (FMPS), aerosol composition with an Aerodyne aerosol mass spectrometer (AMS), black carbon (SP2, PA, LII), VOCs (PTR-MS), and other trace gases. The mobile laboratory was used to measure the concentration gradient of the emissions at perpendicular distances from the highway as well as the physical and chemical evolution of the aerosol. Stationary sites at perpendicular distances and upwind from the highway also monitored the particle size distribution.
In addition, sonic anemometers mounted on the mobile lab provided measurements of turbulent dispersion as a function of distance from the highway, and a traffic camera was used to determine traffic density, composition, and speed. These measurements differ from previous studies in that turbulence is measured under realistic conditions, and hence the relationship of the aerosol evolution to atmospheric stability and mixing will also be quantified. Preliminary results suggest that aerosol size and composition do change on the sub-grid scale, and that sub-grid scale parameterizations of turbulence and particle chemistry should be included in models to accurately represent these effects.

  16. Aspects on HTS applications in confined power grids

    NASA Astrophysics Data System (ADS)

    Arndt, T.; Grundmann, J.; Kuhnert, A.; Kummeth, P.; Nick, W.; Oomen, M.; Schacherer, C.; Schmidt, W.

    2014-12-01

In an increasing number of electric power grids, the share of distributed energy generation is increasing as well. The grids have to cope with a considerable change in power flow, which has an impact on the optimum topology of the grids and sub-grids (high-voltage, medium-voltage and low-voltage sub-grids) and on the size of quasi-autonomous grid sections. Furthermore, the stability of a grid is influenced by its size. Thus, the special benefits of HTS applications in the power grid might become most visible in confined power grids.

  17. Improvements in sub-grid, microphysics averages using quadrature based approaches

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Debusschere, B.; Larson, V. E.

    2013-12-01

    Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
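The quadrature idea above can be illustrated on a Gaussian sub-grid distribution: a handful of deterministic Gauss-Hermite nodes replaces thousands of random samples for smooth integrands. A sketch using a closed-form test case rather than the Kessler autoconversion formula:

```python
import numpy as np

def gaussian_expectation(f, mu, sigma, order=16):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature in the
    probabilists' convention; `order` deterministic nodes stand in for a
    large random sample when the integrand is smooth."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    return weights @ f(mu + sigma * nodes) / np.sqrt(2.0 * np.pi)

# Closed-form check: E[exp(X)] = exp(mu + sigma^2 / 2).
approx = gaussian_expectation(np.exp, 0.0, 0.5)
```

With 16 nodes the quadrature estimate matches the exact value to machine precision, whereas Monte Carlo or Latin Hypercube sampling would need orders of magnitude more evaluations for comparable accuracy, which is the sample-size reduction reported above.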

  18. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.

  19. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.

  20. Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.

    PubMed

    Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V

    2017-10-23

    Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of the "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing polar components of the solvation free energies (ΔG_pol) and binding free energies (ΔΔG_pol). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔG_pol remains in the range of k_BT ∼ 0.6 kcal/mol. The estimated ΔΔG_pol values are well correlated (r^2 = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and RMSE = 1.43 kcal/mol. The grid-based GBNSR6 model is available in the Amber (AmberTools) package of molecular simulation programs.
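    As background, the generic pairwise GB energy has a compact closed form. The sketch below uses the canonical Still et al. interaction function, not the R6 integration of GBNSR6 (which differs in how the effective Born radii are computed); the charges, radii, and dielectric constants in the example are illustrative.

```python
import math

COULOMB = 332.0636  # kcal*Angstrom/(mol*e^2), conventional conversion factor

def gb_polar_energy(charges, coords, born_radii, eps_in=1.0, eps_out=78.5):
    """Polar solvation energy with the canonical Still et al. GB formula:
    dG = -0.5*(1/eps_in - 1/eps_out) * sum_ij q_i q_j / f_GB(r_ij),
    f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j)))."""
    pref = -0.5 * COULOMB * (1.0 / eps_in - 1.0 / eps_out)
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):  # i == j gives the Born self-energy term
            dx = [coords[i][k] - coords[j][k] for k in range(3)]
            r2 = sum(d * d for d in dx)
            rirj = born_radii[i] * born_radii[j]
            f_gb = math.sqrt(r2 + rirj * math.exp(-r2 / (4.0 * rirj)))
            e += pref * charges[i] * charges[j] / f_gb
    return e
```

For a single ion the sum collapses to the Born formula, which makes a convenient sanity check on any GB implementation.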

  1. Evaluation of Grid Modification Methods for On- and Off-Track Sonic Boom Analysis

    NASA Technical Reports Server (NTRS)

    Nayani, Sudheer N.; Campbell, Richard L.

    2013-01-01

    Grid modification methods have been under development at NASA to enable better predictions of low boom pressure signatures from supersonic aircraft. As part of this effort, two new codes, Stretched and Sheared Grid - Modified (SSG) and Boom Grid (BG), have been developed in the past year. The CFD results from these codes have been compared with ones from the earlier grid modification codes Stretched and Sheared Grid (SSGRID) and Mach Cone Aligned Prism (MCAP) and also with the available experimental results. NASA's unstructured grid suite of software TetrUSS and the automatic sourcing code AUTOSRC were used for base grid generation and flow solutions. The BG method has been evaluated on three wind tunnel models. Pressure signatures have been obtained up to two body lengths below a Gulfstream aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 53 degrees) cases. On-track pressure signatures up to ten body lengths below a Straight Line Segmented Leading Edge (SLSLE) wind tunnel model have been extracted. Good agreement with the wind tunnel results has been obtained. Pressure signatures have been obtained at 1.5 body lengths below a Lockheed Martin aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 40 degrees) cases. Grid sensitivity studies have been carried out to investigate any grid size related issues. Methods have been evaluated for fully turbulent, mixed laminar/turbulent and fully laminar flow conditions.

  2. Conceptual Design of the Everglades Depth Estimation Network (EDEN) Grid

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    INTRODUCTION The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). Ground elevation data for the greater Everglades and the digital ground elevation models derived from them form the foundation for all EDEN water depth and associated ecologic/hydrologic modeling (Jones, 2004, Jones and Price, 2007). To use EDEN water depth and duration information most effectively, it is important to be able to view and manipulate information on elevation data quality and other land cover and habitat characteristics across the Everglades region. These requirements led to the development of the geographic data layer described in this techniques and methods report. Drawing on extensive experience in GIS data development, distribution, and analysis, the authors put a great deal of forethought into the design of the geographic data layer used to index elevation and other surface characteristics for the Greater Everglades region. To allow for simplicity of design and use, the EDEN area was broken into a large number of equal-sized rectangles ('Cells') that in total are referred to here as the 'grid'. Some characteristics of this grid, such as the size of its cells, its origin, the area of Florida it is designed to represent, and individual grid cell identifiers, could not be changed once the grid database was developed. Therefore, these characteristics were selected to design as robust a grid as possible and to ensure the grid's long-term utility. It is desirable to include all pertinent information known about elevation and elevation data collection as grid attributes. Also, it is very important to allow for efficient grid post-processing, sub-setting, analysis, and distribution.
This document details the conceptual design of the EDEN grid spatial parameters and cell attribute-table content.

  3. Development of a plume-in-grid model for industrial point and volume sources: application to power plant and refinery sources in the Paris region

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Seigneur, C.; Duclaux, O.

    2013-11-01

    Plume-in-grid (PinG) models incorporating a host Eulerian model and a subgrid-scale model (usually a Gaussian plume or puff model) have been used for the simulations of stack emissions (e.g., fossil fuel-fired power plants and cement plants) for gaseous and particulate species such as nitrogen oxides (NOx), sulfur dioxide (SO2), particulate matter (PM) and mercury (Hg). Here, we describe the extension of a PinG model to study the impact of an oil refinery where volatile organic compound (VOC) emissions can be important. The model is based on a reactive PinG model for ozone (O3), which incorporates a three-dimensional (3-D) Eulerian model and a Gaussian puff model. The model is extended to treat PM, with treatments of aerosol chemistry, particle size distribution, and the formation of secondary aerosols, which are consistent in both the 3-D Eulerian host model and the Gaussian puff model. Furthermore, the PinG model is extended to include the treatment of volume sources to simulate fugitive VOC emissions. The new PinG model is evaluated over Greater Paris during July 2009. Model performance is satisfactory for O3, PM2.5 and most PM2.5 components. Two industrial sources, a coal-fired power plant and an oil refinery, are simulated with the PinG model. The characteristics of the sources (stack height and diameter, exhaust temperature and velocity) govern the surface concentrations of primary pollutants (NOx, SO2 and VOC). O3 concentrations are impacted differently near the power plant than near the refinery, because of the presence of VOC emissions at the latter. The formation of sulfate is influenced by both the dispersion of SO2 and the oxidant concentration; however, the former tends to dominate in the simulations presented here. The impact of PinG modeling on the formation of secondary organic aerosols (SOA) is small and results mostly from the effect of different oxidant concentrations on biogenic SOA formation. 
The investigation of the criteria for injecting plumes into the host model (fixed travel time and/or puff size) shows that a size-based criterion is recommended to treat the formation of secondary aerosols (sulfate, nitrate, and ammonium), in particular, farther downwind of the sources (from about 15 km). The impacts of the PinG modeling are less significant in a simulation with a coarse grid size (10 km) than with a fine grid size (2 km), because the concentrations of the species emitted from the PinG sources are relatively less important compared to background concentrations when injected into the host model.
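    The injection criteria discussed above amount to a simple hand-off test applied to each puff. A hedged sketch, with hypothetical threshold values:

```python
def should_inject(puff_sigma, travel_time, grid_dx, t_max=3600.0, size_frac=0.5):
    """Puff-to-grid hand-off test: inject the puff's mass into the host
    Eulerian model when the puff has grown to a set fraction of the grid
    cell size (size criterion) or has exceeded a maximum travel time
    (time criterion). Thresholds here are illustrative, not the paper's."""
    return puff_sigma >= size_frac * grid_dx or travel_time >= t_max
```

The paper's finding that a size-based criterion works better for secondary aerosol formation corresponds to relying on the first condition rather than the second.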

  4. Double ion production in mercury thrusters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Peters, R. R.

    1976-01-01

    The development of a model which predicts doubly charged ion density is discussed. The accuracy of the model is shown to be good for two different thruster sizes and a total of 11 different cases. The model indicates that in most cases more than 80% of the doubly charged ions are produced from singly charged ions. This result can be used to develop a much simpler model which, along with correlations of the average plasma properties, can be used to determine the doubly charged ion density in ion thrusters with acceptable accuracy. Two different techniques that can be used to reduce the doubly charged ion density while maintaining good thruster operation are identified as a result of an examination of the simple model. First, the electron density can be reduced and the thruster size then increased to maintain the same propellant utilization. Second, at a fixed thruster size, the plasma density, temperature and energy can be reduced and then, to maintain a constant propellant utilization, the open area of the grids to neutral propellant loss can be reduced through the use of a small hole accelerator grid.

  5. Sensitivity of LES results from turbine rim seals to changes in grid resolution and sector size

    NASA Astrophysics Data System (ADS)

    O'Mahoney, T.; Hills, N.; Chew, J.

    2012-07-01

    Large-Eddy Simulations (LES) were carried out for a turbine rim seal, and the sensitivity of the results to changes in grid resolution and the size of the computational domain is investigated. Ingestion of hot annulus gas into the rotor-stator cavity is compared between the LES results, experiments and Unsteady Reynolds-Averaged Navier-Stokes (URANS) calculations. The LES calculations show greater ingestion than the URANS calculation and show better agreement with experiments. Increased grid resolution shows a small improvement in ingestion predictions, whereas increasing the sector model size has little effect on the results. The contrast between the different CFD models is most stark in the inner cavity, where the URANS shows almost no ingestion. Particular attention is also paid to the presence of low frequency oscillations in the disc cavity. URANS calculations show such low frequency oscillations at different frequencies than the LES. The oscillations also take a very long time to develop in the LES. The results show that the difficult problem of estimating ingestion through rim seals could be overcome by using LES, but that the computational requirements were still restrictive.

  6. Investigation of CO2 capture using solid sorbents in a fluidized bed reactor: Cold flow hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Dietiker, Jean-Francois; Rogers, William

    2016-07-29

    Both experimental tests and numerical simulations were conducted to investigate the fluidization behavior of a solid CO2 sorbent with a mean diameter of 100 μm and a density of about 480 kg/m3, which belongs to Geldart's Group A powder. A carefully designed fluidized bed facility was used to perform a series of experimental tests to study the flow hydrodynamics. Numerical simulations using the two-fluid model indicated that the grid resolution has a significant impact on the bed expansion and bubbling flow behavior. Due to limited computational resources, no good grid-independent results were achieved using the standard models as far as the bed expansion is concerned. In addition, all simulations tended to under-predict the bubble size substantially. Effects of various model settings, including both numerical and physical parameters, have been investigated with no significant improvement observed. The latest filtered sub-grid drag model was then tested in the numerical simulations. Compared to the standard drag model, the filtered drag model with two markers not only predicted reasonable bed expansion but also yielded realistic bubbling behavior. As a result, a grid sensitivity study was conducted for the filtered sub-grid model and its applicability and limitations were discussed.

  7. Population dynamics of Microtus pennsylvanicus in corridor-linked patches

    USGS Publications Warehouse

    Coffman, C.J.; Nichols, J.D.; Pollock, K.H.

    2001-01-01

    Corridors have become a key issue in the discussion of conservation planning; however, few empirical data exist on the use of corridors and their effects on population dynamics. The objective of this replicated, population-level capture-recapture experiment on meadow voles was to estimate and compare population characteristics of voles between (1) corridor-linked fragments, (2) isolated or non-linked fragments, and (3) unfragmented areas. We conducted two field experiments involving 22600 captures of 5700 individuals. In the first, the maintained corridor study, corridors were maintained at the time of fragmentation; in the second, the constructed corridor study, we constructed corridors between patches that had been fragmented for some period of time. We applied multistate capture-recapture models with the robust design to estimate adult movement and survival rates, population size, temporal variation in population size, recruitment, and juvenile survival rates. Movement rates increased to a greater extent on constructed corridor-linked grids than on the unfragmented or non-linked fragmented grids between the pre- and post-treatment periods. We found significant differences in local survival on the treated (corridor-linked) grids compared to survival on the fragmented and unfragmented grids between the pre- and post-treatment periods. We found no clear pattern of treatment effects on population size or recruitment in either study. However, in both studies, we found that unfragmented grids were more stable than the fragmented grids based on lower temporal variability in population size. To our knowledge, this is the first experimental study demonstrating that corridors constructed between existing fragmented populations can indeed cause increases in movement and associated changes in demography, supporting the use of constructed corridors for this purpose in conservation biology.

  8. Fast Computation of Ground Motion Shaking Map based on the Modified Stochastic Finite Fault Modeling

    NASA Astrophysics Data System (ADS)

    Shen, W.; Zhong, Q.; Shi, B.

    2012-12-01

    Rapid regional MMI mapping soon after a moderate-large earthquake is crucial to loss estimation, emergency services and planning of emergency action by the government. In fact, many countries pay different degrees of attention to the technology of rapid MMI estimation, and this technology has made significant progress in earthquake-prone countries. In recent years, numerical modeling of strong ground motion has been well developed with the advances of computation technology and earthquake science. The computational simulation of strong ground motion caused by earthquake faulting has become an efficient way to estimate the regional MMI distribution soon after an earthquake. In China, due to the lack of strong motion observations in areas where the network is sparse or even completely missing, the development of strong ground motion simulation methods has become an important means of quantitative estimation of strong motion intensity. Among the many simulation models, the stochastic finite fault model is preferred for rapid MMI estimation for its time-effectiveness and accuracy. In the finite fault model, a large fault is divided into N subfaults, and each subfault is considered as a small point source. The ground motions contributed by each subfault are calculated by the stochastic point source method developed by Boore, and then summed at the observation point, with a proper time delay, to obtain the ground motion from the entire fault. Further, Motazedian and Atkinson proposed the concept of Dynamic Corner Frequency; with this approach, the total radiated energy from the fault and the total seismic moment are conserved independent of subfault size over a wide range of subfault sizes. In the current study, the program EXSIM developed by Motazedian and Atkinson has been modified for local or regional computations of strong motion parameters such as PGA, PGV and PGD, which are essential for MMI estimation.
    To make the results more reasonable, we consider the impact of V30 on the ground shaking intensity, and the comparisons between the simulated and observed MMI for the 2004 Mw 6.0 Parkfield earthquake, the 2008 Mw 7.9 Wenchuan earthquake and the 1976 Mw 7.6 Tangshan earthquake agree fairly well. Taking the Parkfield earthquake as an example, the simulated results reflect the directivity effect and the influence of the shallow velocity structure well. On the other hand, the simulated data are in good agreement with the network data and NGA (Next Generation Attenuation). The time consumed depends on the number of subfaults and the number of grid points. For the 2004 Mw 6.0 Parkfield earthquake, the computed domain is 2.5° × 2.5° with a grid spacing of 0.025°, and the total time consumed is about 1.3 hours. For the 2008 Mw 7.9 Wenchuan earthquake, the domain is 10° × 10° with a grid spacing of 0.05°, the total number of grid points is more than 40,000, and the total time consumed is about 7.5 hours. For the 1976 Mw 7.6 Tangshan earthquake, the domain is 4° × 6° with a grid spacing of 0.05°, and the total time consumed is about 2.1 hours. The CPU we used runs at 3.40 GHz, and such computational times could be further reduced by using GPU computing and other parallel computing techniques. This is also our next focus.
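    The delay-and-sum structure of the finite fault method can be sketched as follows. The real EXSIM attaches a stochastic point-source time series with a dynamic corner frequency to each subfault, whereas this toy just shifts and adds unit wavelets, and all geometry and velocity values are made up:

```python
import math

def delay_and_sum(subfaults, site, rupture_vel=2.8, c=3.5, dt=0.01, nt=2000):
    """Toy finite-fault summation: each subfault's source time series is
    delayed by rupture propagation (from the first subfault, taken as the
    hypocenter) plus S-wave travel time to the site, then summed."""
    trace = [0.0] * nt
    hypo = subfaults[0]["pos"]
    for sf in subfaults:
        d_rup = math.dist(sf["pos"], hypo)   # km along the fault
        d_site = math.dist(sf["pos"], site)  # km to the observation point
        delay = d_rup / rupture_vel + d_site / c
        i0 = int(round(delay / dt))
        for i, a in enumerate(sf["wavelet"]):
            if i0 + i < nt:
                trace[i0 + i] += a
    return trace
```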

  9. SU-E-T-196: Comparative Analysis of Surface Dose Measurements Using MOSFET Detector and Dose Predicted by Eclipse - AAA with Varying Dose Calculation Grid Size

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Badkul, R; Nejaiman, S; Pokhrel, D

    2015-06-15

    Purpose: Skin dose can be the limiting factor and fairly common reason to interrupt the treatment, especially for treating head-and-neck with Intensity-modulated-radiation-therapy(IMRT) or Volumetrically-modulated - arc-therapy (VMAT) and breast with tangentially-directed-beams. Aim of this study was to investigate accuracy of near-surface dose predicted by Eclipse treatment-planning-system (TPS) using Anisotropic-Analytic Algorithm (AAA)with varying calculation grid-size and comparing with metal-oxide-semiconductor-field-effect-transistors(MOSFETs)measurements for a range of clinical-conditions (open-field,dynamic-wedge, physical-wedge, IMRT,VMAT). Methods: QUASAR™-Body-Phantom was used in this study with oval curved-surfaces to mimic breast, chest wall and head-and-neck sites.A CT-scan was obtained with five radio-opaque markers(ROM) placed on the surface of phantom to mimic themore » range of incident angles for measurements and dose prediction using 2mm slice thickness.At each ROM, small structure(1mmx2mm) were contoured to obtain mean-doses from TPS.Calculations were performed for open-field,dynamic-wedge,physical-wedge,IMRT and VMAT using Varian-21EX,6&15MV photons using twogrid-sizes:2.5mm and 1mm.Calibration checks were performed to ensure that MOSFETs response were within ±5%.Surface-doses were measured at five locations and compared with TPS calculations. 
Results: For 6MV: 2.5mm grid-size,mean calculated doses(MCD)were higher by 10%(±7.6),10%(±7.6),20%(±8.5),40%(±7.5),30%(±6.9) and for 1mm grid-size MCD were higher by 0%(±5.7),0%(±4.2),0%(±5.5),1.2%(±5.0),1.1% (±7.8) for open-field,dynamic-wedge,physical-wedge,IMRT,VMAT respectively.For 15MV: 2.5mm grid-size,MCD were higher by 30%(±14.6),30%(±14.6),30%(±14.0),40%(±11.0),30%(±3.5)and for 1mm grid-size MCD were higher by 10% (±10.6), 10%(±9.8),10%(±8.0),30%(±7.8),10%(±3.8) for open-field, dynamic-wedge, physical-wedge, IMRT, VMAT respectively.For 6MV, 86% and 56% of all measured values agreed better than ±20% for 1mm and 2.5mm grid-sizes respectively. For 18MV, 56% and 18% of all measured-values agreed better than ±20% for 1mm and 2.5mm grid-sizes respectively. Conclusion: Reliable Skin-dose calculations by TPS can be very difficult due to steep dose-gradient and inaccurate beam-modelling in buildup region.Our results showed that Eclipse over-estimates surface-dose.Impact of grid-size is also significant,surface-dose increased up to 40% from 1mm to 2.5mm,however, 1mm calculated-values closely agrees with measurements. Due to large uncertnities in skin-dose predictions from TPS, outmost caution must be exercised when skin dose is evaluated,a sufficiently smaller grid-size(1mm)can improve the accuracy and MOSFETs can be used for verification.« less

  10. Development of fine-resolution analyses and expanded large-scale forcing properties. Part II: Scale-awareness and application to single-column model experiments

    DOE PAGES

    Feng, Sha; Vogelmann, Andrew M.; Li, Zhijin; ...

    2015-01-20

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy’s Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multi-scale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 (CAM5) is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
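    The grid-scale/subgrid-scale split described above is, at its core, a coarse-graining decomposition. A minimal 1-D sketch, where block averaging stands in for the horizontal coarse-graining applied to the actual 3-D assimilated fields:

```python
def decompose(field, block):
    """Split a 1-D field into a grid-scale part (block means) and a
    subgrid-scale residual. By construction the residual averages to zero
    over every block, so the two parts sum back to the original field."""
    grid, sub = [], []
    for i in range(0, len(field), block):
        chunk = field[i:i + block]
        m = sum(chunk) / len(chunk)
        grid.append(m)                       # grid-scale (resolved) value
        sub.extend(x - m for x in chunk)     # subgrid-scale residual
    return grid, sub
```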

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.

    Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment/Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterization of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect the rain drop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while it is worsened when run at 4 km grid spacing, as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.

  12. Adaptive grid generation in a patient-specific cerebral aneurysm

    NASA Astrophysics Data System (ADS)

    Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time-consuming and is often neglected by researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error.
This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce computational time for patient-specific hemodynamics simulations, which are used to help assess the likelihood of aneurysm rupture using CFD calculated flow patterns.
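    The curvature-to-element-size mapping can be sketched as a simple clamped inverse relation: high local curvature demands a fine grid, low curvature tolerates a coarse one. The constant c and the bounds below are hypothetical placeholders for the mathematical relation derived in the study:

```python
def target_size(kappa, h_min, h_max, c=0.1):
    """Map local streamline curvature kappa to a target element size.
    h ~ c / kappa, clamped to [h_min, h_max]; zero curvature (straight
    flow) gets the coarsest allowed element."""
    if kappa <= 0.0:
        return h_max
    return min(h_max, max(h_min, c / kappa))
```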

  13. Increasing accuracy of dispersal kernels in grid-based population models

    USGS Publications Warehouse

    Slone, D.H.

    2011-01-01

    Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10^-11 compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell-integration method, or σ ≤ 0.22 using the cell-center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10^-11 and invasion time error to <5%.
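    The cell-integration versus cell-center distinction is easy to demonstrate in one dimension: integrating the Gaussian density over each cell (via the error function) conserves total mass even when σ is a fraction of a cell, while sampling the density at cell centers does not. This sketch is illustrative only, not the paper's 2-D circular kernel code:

```python
import math

def gaussian_kernel_1d(sigma, radius):
    """Discrete 1-D Gaussian kernels on a unit grid, built two ways:
    cell-center sampling of the density vs exact cell integration
    (mass in each cell from the Gaussian CDF)."""
    cells = range(-radius, radius + 1)
    # Cell-center method: evaluate the density at each cell midpoint.
    center = [math.exp(-0.5 * (i / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
              for i in cells]
    # Cell-integration method: exact probability mass in [i-0.5, i+0.5].
    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))
    integ = [cdf(i + 0.5) - cdf(i - 0.5) for i in cells]
    return center, integ

# At sigma well below one cell, the cell-center kernel grossly misstates
# total mass, while the cell-integrated kernel stays at 1.
center, integ = gaussian_kernel_1d(sigma=0.2, radius=3)
```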

  14. An electrical betweenness approach for vulnerability assessment of power grids considering the capacity of generators and load

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhang, Bu-han; Zhang, Zhe; Yin, Xiang-gen; Wang, Bo

    2011-11-01

    Most existing research on the vulnerability of power grids based on complex networks ignores the electrical characteristics and the capacity of generators and load. In this paper, the electrical betweenness is defined by considering the maximal demand of load and the capacity of generators in power grids. The loss of load, which reflects the ability of power grids to provide sufficient power to customers, is introduced to measure the vulnerability together with the size of the largest cluster. The simulation results of the IEEE-118 bus system and the Central China Power Grid show that the cumulative distributions of node electrical betweenness follow a power law and that the nodes with high electrical betweenness play critical roles in both topological structure and power transmission of power grids. The results prove that the model proposed in this paper is effective for analyzing the vulnerability of power grids.

  15. Modelling tidal current energy extraction in large area using a three-dimensional estuary model

    NASA Astrophysics Data System (ADS)

    Chen, Yaling; Lin, Binliang; Lin, Jie

    2014-11-01

    This paper presents a three-dimensional modelling study for simulating tidal current energy extraction in large areas, with a momentum sink term added to the momentum equations. Due to limits on computational capacity, the grid size of the numerical model is generally much larger than the turbine rotor diameter. Two models, i.e. a local grid refinement model and a coarse grid model, are employed, and an idealized estuary is set up. The local grid refinement model is constructed to simulate the power generation of an isolated turbine and its impacts on hydrodynamics. The model is then used to determine the deployment of the turbine farm and to quantify a combined thrust coefficient for multiple turbines located in a grid element of the coarse grid model. The model results indicate that the performance of power extraction is affected by array deployment, with more power generated from outer rows than inner rows due to the velocity deficit caused by upstream turbines. Model results also demonstrate that the large-scale turbine farm has significant effects on the hydrodynamics. The tidal currents are attenuated within the turbine swept area, and both upstream and downstream of the array, while the currents are accelerated above and below the turbines, which contributes to speeding up the wake mixing process behind the arrays. Water levels are raised at both low and high water as the turbine array spans the full width of the estuary. The magnitude of water level change is found to increase with the array expansion, especially at the low water level.
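    The momentum sink term can be sketched as an actuator-disc style retarding force applied to the velocity in the grid cell containing the turbines. The thrust-coefficient form below is a common way to represent sub-grid turbines when the cell is much larger than the rotor; it is a hedged stand-in for the paper's formulation, and all numbers in the example are made up:

```python
def momentum_sink_update(u, dt, c_t, area, cell_vol):
    """One explicit time step of a turbine momentum sink.
    Thrust F = 0.5*rho*C_T*A*u|u| removed from a cell of volume V gives
    du/dt = F/(rho*V) = -0.5*C_T*A*u|u|/V (water density cancels).
    c_t: combined thrust coefficient, area: total swept area (m^2),
    cell_vol: grid cell volume (m^3)."""
    accel = -0.5 * c_t * area * u * abs(u) / cell_vol
    return u + dt * accel
```

The `u*abs(u)` form keeps the sink opposing the flow on both flood and ebb, which matters for a tidal (reversing) current.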

  16. Evaluation of multisectional and two-section particulate matter photochemical grid models in the Western United States.

    PubMed

    Morris, Ralph; Koo, Bonyoung; Yarwood, Greg

    2005-11-01

    Version 4.10s of the comprehensive air-quality model with extensions (CAMx) photochemical grid model has been developed, which includes two options for representing particulate matter (PM) size distribution: (1) a two-section representation that consists of fine (PM2.5) and coarse (PM2.5-10) modes that has no interactions between the sections and assumes all of the secondary PM is fine; and (2) a multisectional representation that divides the PM size distribution into N sections (e.g., N = 10) and simulates the mass transfer between sections because of coagulation, accumulation, evaporation, and other processes. The model was applied to Southern California using the two-section and multisection representation of PM size distribution, and we found that allowing secondary PM to grow into the coarse mode had a substantial effect on PM concentration estimates. CAMx was then applied to the Western United States for the 1996 annual period with a 36-km grid resolution using both the two-section and multisection PM representation. The Community Multiscale Air Quality (CMAQ) and Regional Modeling for Aerosol and Deposition (REMSAD) models were also applied to the 1996 annual period. Similar model performance was exhibited by the four models across the Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network monitoring networks. All four of the models exhibited fairly low annual bias for secondary PM sulfate and nitrate but with a winter overestimation and summer underestimation bias. The CAMx multisectional model estimated that coarse mode secondary sulfate and nitrate typically contribute <10% of the total sulfate and nitrate when averaged across the more rural IMPROVE monitoring network.
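
    The multisectional representation divides the PM size distribution into N sections. A minimal sketch of how log-spaced section boundaries might be generated follows; the diameter limits and section count are illustrative, not CAMx's actual configuration.

```python
def size_sections(d_min, d_max, n):
    """Return n+1 log-spaced section boundaries (in micrometres) spanning
    the particle diameter range [d_min, d_max], as used by sectional
    aerosol models to discretize the size distribution."""
    ratio = (d_max / d_min) ** (1.0 / n)
    return [d_min * ratio ** i for i in range(n + 1)]
```

    With log spacing, each section covers the same multiplicative diameter range, which is the usual choice when particle sizes span several orders of magnitude.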

  17. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  18. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions, including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics, which leverage sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  19. Aerodynamics and Aeroacoustics of Rotorcraft (l’ Aerodynamique et l’ aeroacoustique des aeronefs a voilure tournante).

    DTIC Science & Technology

    1995-08-01

    R.T.N. Chen: A survey of nonuniform inflow models for rotorcraft flight ... R. Houwink, A.E.P. Veldman: steady and unsteady separated flow computations for ... grid with constant grid sizes (see [17]). Because of the cylindrical nature of the flow of a hovering rotor, an O-H ... distributed around the blade section (figure 4) within a fairing ... research at DRA Bedford on the DRA's Aeromechanics Lynx Control, which extends from 80 ...

  20. Marine Physics: Internal-Surface Wave Interaction and Microstructure Measurement Program

    DTIC Science & Technology

    1974-12-31

    "Stabilized Free-Fall Vehicles" ... "On the Decay of Grid Generated Turbulence in Stratified Salt Water" ... modelling shows this vehicle to be stable, exhibiting tilts of less than 10^-2 radians while falling ... at 20 cm/sec ... scaled according to an overall Froude number U/LN, scaling the vertical wake width, where U is the grid speed, L the mesh size of the grid

  1. CHARACTERISTIC LENGTH SCALE OF INPUT DATA IN DISTRIBUTED MODELS: IMPLICATIONS FOR MODELING GRID SIZE. (R824784)

    EPA Science Inventory

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...

  2. Vascularized networks with two optimized channel sizes

    NASA Astrophysics Data System (ADS)

    Wang, K.-M.; Lorente, S.; Bejan, A.

    2006-07-01

    This paper reports the development of optimal vascularization for supplying self-healing smart materials with liquid that fills and seals the cracks that may occur throughout their volume. The vascularization consists of two-dimensional grids of interconnected orthogonal channels with two hydraulic diameters (D1, D2). The smallest square loop is designed to match the size (d) of the smallest crack. The network is sealed with respect to the outside and is filled with pressurized liquid. In this work, the crack site is modelled as a small spherical volume of diameter d. When a crack is formed, fluid flows from neighbouring channels to the crack site. This volume-to-point flow is optimized using two formulations: (1) incompressible liquid from steady constant-strength sources located in every node of the grid and from sources located equidistantly on the perimeter of the vascularized body of length scale L and (2) slightly compressible liquid from an initially pressurized grid discharging in time-dependent fashion into one crack site. The flow in every channel is laminar and fully developed. The objectives are (a) to minimize the global resistance to the flow from the grid to the crack site and (b) to minimize the time of discharge from the pressurized grid to the crack site. It is shown that methods (a) and (b) yield similar results. There is an optimal ratio of channel diameters D2/D1 < 1, and it decreases as the grid fineness (L/d) increases. The global flow resistance of the grid with optimized ratio of diameters is approximately half of the resistance of the corresponding grid with one channel size (D1 = D2). The optimized ratio of diameters and the minimized global resistance depend on how the grid intersects the crack site: this effect is minor and stresses the robustness of the vascularized design.

  3. Challenges and Opportunities in Modeling of the Global Atmosphere

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko

    2016-04-01

    Modeling paradigms on global scales may need to be reconsidered in order to better utilize the power of massively parallel processing. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain and exchange only a few rows of halo data with the neighbouring cores. Note that the described scenario strongly favors horizontally local discretizations. This is relatively easy to achieve in regional models. However, the spherical geometry complicates the problem. The latitude-longitude grid with local-in-space and explicit-in-time differencing was an early choice and has remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a time step of a reasonable size. However, the polar filtering requires transpositions involving extra communications as well as more computations. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for application of spectral representation. With some variations, such techniques currently dominate in global models. Unfortunately, horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with polar filtering is a step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances, such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids, were proposed almost fifty years ago. However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop.
Because their scales are comparable to those of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Relaxing the hydrostatic approximation requires careful reformulation of the model dynamics and more computations and communications. The unified Non-hydrostatic Multi-scale Model (NMMB) will be briefly discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable, without modifying their amplitudes. The model has been successfully tested on various scales. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models, and its computational efficiency on parallel computers is good.
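
    The pole problem described above can be quantified with a small sketch: the physical east-west spacing of a lat-lon grid shrinks with the cosine of latitude, and with it the explicit time-step limit. The Earth radius and wave speed below are nominal values assumed for illustration.

```python
import math

R_EARTH = 6.371e6  # mean Earth radius, m

def dx_lon(lat_deg, dlon_deg):
    """Physical east-west grid spacing (m) of a lat-lon grid at a given
    latitude; it tends to zero as the poles are approached."""
    return R_EARTH * math.cos(math.radians(lat_deg)) * math.radians(dlon_deg)

def max_dt(lat_deg, dlon_deg, c=300.0):
    """CFL time-step limit (s) for an explicit scheme, taking c as the
    speed of the fastest wave to be resolved (a nominal 300 m/s here)."""
    return dx_lon(lat_deg, dlon_deg) / c
```

    For a 1-degree grid the spacing drops from roughly 111 km at the equator to under 2 km at 89 degrees latitude, which is why polar filtering (or an implicit/spectral treatment) is needed to keep a reasonable time step.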

  4. Optimal Padding for the Two-Dimensional Fast Fourier Transform

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Aronstein, David L.; Smith, Jeffrey S.

    2011-01-01

    One-dimensional Fast Fourier Transform (FFT) operations work fastest on grids whose size is a power of two. Because of this, padding grids (that are not already sized to a power of two) so that their size is the next highest power of two can speed up operations. While this works well for one-dimensional grids, it does not work well for two-dimensional grids. For a two-dimensional grid, there are certain pad sizes that work better than others. Therefore, the need exists to generalize a strategy for determining optimal pad sizes. There are three steps in the FFT algorithm. The first is to perform a one-dimensional transform on each row in the grid. The second step is to transpose the resulting matrix. The third step is to perform a one-dimensional transform on each row in the resulting grid. Steps one and three both benefit from padding the row to the next highest power of two, but the second step needs a novel approach. An algorithm was developed that struck a balance between optimizing the grid pad size with prime factors that are small (which are optimal for one-dimensional operations) and with prime factors that are large (which are optimal for two-dimensional operations). This algorithm optimizes based on average run times and is not fine-tuned for any specific application. It increases the number of times that processor-requested data is found in the set-associative processor cache. Cache retrievals are 4-10 times faster than conventional memory retrievals. The tested implementation of the algorithm resulted in faster execution times on all platforms tested, but with varying grid sizes. This is because various computer architectures process commands differently. The test grid was 512 × 512. Using a 540 × 540 grid on a Pentium V processor, the code ran 30 percent faster. On a PowerPC, a 256 × 256 grid worked best. A Core2Duo computer preferred either a 1040 × 1040 (15 percent faster) or a 1008 × 1008 (30 percent faster) grid.
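
    A common heuristic in the same spirit is to pad to the next size whose prime factors are all small, since FFT libraries are typically fast for 2-, 3-, 5-, and 7-smooth sizes. The sketch below illustrates that heuristic only; it is not the article's actual run-time-based optimization.

```python
def is_smooth(n, primes=(2, 3, 5, 7)):
    """True if n has no prime factor larger than the largest listed prime."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def next_fast_size(n):
    """Smallest size >= n whose prime factors are all <= 7, a common
    choice of 'fast' FFT length."""
    while not is_smooth(n):
        n += 1
    return n
```

    For example, 513 = 27 x 19 contains the large prime 19, so it would be padded up to 525 = 3 x 5 x 5 x 7; the 540 grid mentioned above is itself 7-smooth (2^2 x 3^3 x 5).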
There are many industries that can benefit from this algorithm, including optics, image-processing, signal-processing, and engineering applications.

  5. Grid-Independent Large-Eddy Simulation in Turbulent Channel Flow using Three-Dimensional Explicit Filtering

    NASA Technical Reports Server (NTRS)

    Gullbrand, Jessica

    2003-01-01

    In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations. The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length-scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. 
The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.
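
    The key idea above, an explicit filter whose width is fixed in physical space rather than tied to the grid, can be illustrated with a one-dimensional top-hat filter. This is a hedged sketch (the study uses fourth-order commutative filters in three dimensions, not this simple kernel):

```python
def tophat_filter(u, dx, width):
    """Explicit top-hat (box) filter of fixed physical width.

    The stencil size n ~ width/dx grows as the grid is refined, so the
    filter cutoff stays constant -- refining the grid then converges to
    the filtered equations rather than to a DNS."""
    n = max(1, round(width / dx))
    if n % 2 == 0:
        n += 1  # use an odd, centred stencil
    h = n // 2
    out = []
    for i in range(len(u)):
        lo, hi = max(0, i - h), min(len(u), i + h + 1)
        out.append(sum(u[lo:hi]) / (hi - lo))  # window mean
    return out
```

    In a grid-tied (implicit) filter, n would stay fixed at, say, 1-2 cells while dx shrinks; here n grows as dx shrinks, which is what keeps the resolved length scales independent of resolution.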

  6. COMPARING AND LINKING PLUMES ACROSS MODELING APPROACHES

    EPA Science Inventory

    River plumes carry many pollutants, including microorganisms, into lakes and the coastal ocean. The physical scales of many stream and river plumes often lie between the scales for mixing zone plume models, such as the EPA Visual Plumes model, and larger-sized grid scales for re...

  7. Fast and precise dense grid size measurement method based on coaxial dual optical imaging system

    NASA Astrophysics Data System (ADS)

    Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei

    2015-10-01

    Test sieves with a dense grid structure are widely used in many fields, and accurate grid size calibration is critical for grading analysis and test sieving. Traditional calibration methods, however, suffer from low measurement efficiency and a shortage of sampled grids, which can lead to quality-judgment risk. Here, a fast and precise test sieve inspection method is presented. First, a coaxial imaging system with low and high optical magnification probes is designed to capture grid images of the test sieve. Then, a scaling ratio between the low and high magnification probes is obtained from the corresponding grids in the captured images. With this ratio, all grid dimensions in the low magnification image can be obtained with high accuracy by measuring a few corresponding grids in the high magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be quickly inspected. Experimental results show that the proposed method measures test sieves with higher efficiency than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within only 60 seconds, and it can measure grid sizes from 20 μm to 5 mm precisely. In short, the presented method calibrates the grid size of a test sieve automatically with high efficiency and accuracy, so surface evaluation based on statistical methods can be effectively implemented and the quality judgment becomes more reasonable.
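
    The cross-calibration step above amounts to deriving a scale factor from grids measured by both probes and applying it to every low-magnification measurement. A minimal sketch, with hypothetical function names and units (micrometres from the high-magnification probe, pixels from the low-magnification image):

```python
def scale_factor(high_mag_sizes_um, low_mag_sizes_px):
    """Mean um-per-pixel ratio from grids measured by both probes."""
    ratios = [um / px for um, px in zip(high_mag_sizes_um, low_mag_sizes_px)]
    return sum(ratios) / len(ratios)

def grid_sizes_um(low_mag_sizes_px, scale):
    """Convert every low-magnification pixel measurement to micrometres."""
    return [px * scale for px in low_mag_sizes_px]
```

    Averaging the ratio over several corresponding grids reduces the sensitivity of the calibration to any single measurement.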

  8. Modeling Lidar Multiple Scattering

    NASA Astrophysics Data System (ADS)

    Sato, Kaori; Okamoto, Hajime; Ishimoto, Hiroshi

    2016-06-01

    A practical model to simulate multiply scattered lidar returns from inhomogeneous cloud layers is developed based on backward Monte Carlo (BMC) simulations. The time delay of the backscattered intensities returning from different vertical grids estimated by the developed model agreed well with that obtained directly from BMC calculations. The method was applied to Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite data to improve the synergetic retrieval of cloud microphysics with CloudSat radar data at optically thick cloud grids. Preliminary results for retrieving the mass fractions of co-existing cloud particles and drizzle-size particles within low-level clouds are demonstrated.

  9. Recent developments and assessment of a three-dimensional PBL parameterization for improved wind forecasting over complex terrain

    NASA Astrophysics Data System (ADS)

    Kosovic, B.; Jimenez, P. A.; Haupt, S. E.; Martilli, A.; Olson, J.; Bao, J. W.

    2017-12-01

    At present, the planetary boundary layer (PBL) parameterizations available in most numerical weather prediction (NWP) models are one-dimensional. One-dimensional parameterizations are based on the assumption of horizontal homogeneity. This homogeneity assumption is appropriate for grid cell sizes greater than 10 km. However, for mesoscale simulations of flows in complex terrain with grid cell sizes below 1 km, the assumption of horizontal homogeneity is violated. Applying a one-dimensional PBL parameterization to high-resolution mesoscale simulations in complex terrain could result in significant error. For high-resolution mesoscale simulations of flows in complex terrain, we have therefore developed and implemented a three-dimensional (3D) PBL parameterization in the Weather Research and Forecasting (WRF) model. The implementation of the 3D PBL scheme is based on the developments outlined by Mellor and Yamada (1974, 1982). Our implementation in WRF uses a pure algebraic model (level 2) to diagnose the turbulent fluxes. To evaluate the performance of the 3D PBL model, we use observations from the Wind Forecast Improvement Project 2 (WFIP2). The WFIP2 field study took place in the Columbia River Gorge area from 2015 to 2017. We focus on selected cases when physical phenomena of significance for wind energy applications, such as mountain waves, topographic wakes, and gap flows, were observed. Our assessment of the 3D PBL parameterization also considers a large-eddy simulation (LES). We carried out a nested LES with grid cell sizes of 30 m and 10 m covering a large fraction of the WFIP2 study area. Both LES domains were discretized using 6000 × 3000 × 200 grid cells in the zonal, meridional, and vertical directions, respectively. The LES results are used to assess the relative magnitude of horizontal gradients of turbulent stresses and fluxes in comparison to vertical gradients.
The presentation will highlight the advantages of the 3D PBL scheme in regions of complex terrain.

  10. Sensitivity studies of high-resolution RegCM3 simulations of precipitation over the European Alps: the effect of lateral boundary conditions and domain size

    NASA Astrophysics Data System (ADS)

    Nadeem, Imran; Formayer, Herbert

    2016-11-01

    A suite of high-resolution (10 km) simulations were performed with the International Centre for Theoretical Physics (ICTP) Regional Climate Model (RegCM3) to study the effect of various lateral boundary conditions (LBCs), domain size, and intermediate domains on simulated precipitation over the Great Alpine Region. The boundary conditions used were the ECMWF ERA-Interim Reanalysis with grid spacing 0.75∘, the ECMWF ERA-40 Reanalysis with grid spacings of 1.125∘ and 2.5∘, and finally the 2.5∘ NCEP/DOE AMIP-II Reanalysis. The model was run in one-way nesting mode with direct nesting of the high-resolution RCM (horizontal grid spacing Δx = 10 km) within the driving reanalysis, with one intermediate resolution nest (Δx = 30 km) between the high-resolution RCM and the reanalysis forcings, and also with two intermediate resolution nests (Δx = 90 km and Δx = 30 km) for simulations forced with LBCs of 2.5∘ resolution. Additionally, the impact of domain size was investigated. The results of the multiple simulations were evaluated using different analysis techniques, e.g., the Taylor diagram and a newly defined statistical parameter, called Skill-Score, for evaluating the daily precipitation simulated by the model. It has been found that domain size has the largest impact on the results, while different resolutions and versions of the LBCs, e.g., 1.125∘ ERA-40 and 0.75∘ ERA-Interim, do not produce significantly different results. It is also noticed that direct nesting with a reasonable domain size seems to be the most adequate method for reproducing precipitation over complex terrain, while introducing intermediate resolution nests tends to deteriorate the results.
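
    Taylor-diagram statistics of the kind used in this evaluation (Taylor, 2001) can be computed with a short sketch; the authors' own Skill-Score parameter is not reproduced here.

```python
import math

def taylor_stats(model, obs):
    """Correlation, std-dev ratio, and centred RMS difference between a
    model series and observations -- the three quantities summarized by
    a Taylor diagram."""
    n = len(model)
    mm = sum(model) / n
    mo = sum(obs) / n
    sm = math.sqrt(sum((x - mm) ** 2 for x in model) / n)
    so = math.sqrt(sum((x - mo) ** 2 for x in obs) / n)
    r = sum((a - mm) * (b - mo) for a, b in zip(model, obs)) / (n * sm * so)
    # Centred RMSD follows from the law of cosines on the Taylor diagram;
    # clamp at zero to guard against floating-point round-off.
    crmsd = math.sqrt(max(0.0, sm**2 + so**2 - 2.0 * sm * so * r))
    return r, sm / so, crmsd
```

    A perfect simulation sits at correlation 1, std-dev ratio 1, and centred RMSD 0; distances from that point give a compact comparison across many runs.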

  11. Mesoscale Convective Systems During SCSMEX: Simulations with a Regional Climate Model and a Cloud-Resolving Model

    NASA Technical Reports Server (NTRS)

    Tao, W.-K.; Wang, Y.; Qian, J.-H.; Shie, C.-L.; Lau, W. K.-M.; Kakar, R.; Starr, David (Technical Monitor)

    2002-01-01

    The South China Sea Monsoon Experiment (SCSMEX) was conducted in May-June 1998. One of its major objectives was to better understand the key physical processes for the onset and evolution of the summer monsoon over Southeast Asia and southern China. Multiple observation platforms (e.g., upper-air soundings, Doppler radar, ships, wind profilers, radiometers, etc.) during SCSMEX provided a first attempt at investigating the detailed characteristics of convection and circulation changes associated with monsoons over the South China Sea region. SCSMEX also provided precipitation derived from atmospheric budgets and comparisons with estimates obtained from the Tropical Rainfall Measuring Mission (TRMM). In this paper, a regional scale model (with a grid size of 20 km) and the Goddard Cumulus Ensemble (GCE) model (with a 1 km grid size) are used to perform multi-day integrations to understand the precipitation processes associated with the summer monsoon over Southeast Asia and southern China. The regional climate model is used to understand the soil-precipitation interaction and feedback associated with a flood event that occurred in and around China's Yangtze River during SCSMEX. Sensitivity tests on various land surface models, sea surface temperature (SST) variations, and cloud processes are performed to understand the precipitation processes associated with the onset of the monsoon over the South China Sea during SCSMEX. These tests have indicated that the land surface model has a major impact on the circulation over the South China Sea. Cloud processes can affect the precipitation pattern, while SST variation can affect the precipitation amounts over both land and ocean. The exact location (region) of the flooding can be affected by the soil-rainfall feedback. The GCE-model results captured many observed precipitation characteristics because of the model's fine grid size. For example, the model-simulated rainfall temporal variation compared quite well to the sounding-estimated rainfall.
The results show there are more latent heat fluxes prior to the onset of the monsoon; however, more rainfall was simulated after the onset of the monsoon. This modeling study indicates that the latent heat fluxes (or evaporation) have more of an impact on precipitation processes and rainfall in the regional climate model simulations than in the cloud-resolving model simulations. Research is underway to determine whether the difference in the grid sizes or in the moist processes used in these two models is responsible for the differing influence of surface fluxes on precipitation processes.

  12. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, large derivative stencils, and large model sizes. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model) by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and by nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead at each step, and it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
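
    The whole-array update idea can be illustrated with a minimal NumPy sketch of a staggered-grid forward difference. This is a second-order sketch for brevity; the paper's higher-order operator and the 'FDwave' interface are not reproduced.

```python
import numpy as np

def dx_staggered(f, dx):
    """Forward difference of f along axis 0, evaluated at the half-nodes
    of a staggered grid. One vectorized slice expression updates every
    node at once, avoiding explicit loops over grid points -- the source
    of the large speed-ups reported for interpreted languages."""
    return (f[1:, :] - f[:-1, :]) / dx
```

    Because the derivative of the whole field is a single array expression, the interpreter overhead is paid once per time step rather than once per node, which is why vectorization helps Python and MATLAB far more than compiled FORTRAN.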

  13. The Surface Pressure Response of a NACA 0015 Airfoil Immersed in Grid Turbulence. Volume 1; Characteristics of the Turbulence

    NASA Technical Reports Server (NTRS)

    Bereketab, Semere; Wang, Hong-Wei; Mish, Patrick; Devenport, William J.

    2000-01-01

    Two grids have been developed for the Virginia Tech 6 ft x 6 ft Stability wind tunnel for the purpose of generating homogeneous isotropic turbulent flows for the study of unsteady airfoil response. The first, a square bi-planar grid with a 12" mesh size and an open area ratio of 69.4%, was mounted in the wind tunnel contraction. The second grid, a metal weave with a 1.2 in. mesh size and an open area ratio of 68.2%, was mounted in the tunnel test section. Detailed statistical and spectral measurements of the turbulence generated by the two grids are presented for wind tunnel free stream speeds of 10, 20, 30 and 40 m/s. These measurements show the flows to be closely homogeneous and isotropic. Both grids produce flows with a turbulence intensity of about 4% at the location planned for the airfoil leading edge. Turbulence produced by the large grid has an integral scale of some 3.2 inches here. Turbulence produced by the small grid is an order of magnitude smaller. For wavenumbers below the upper limit of the inertial subrange, the spectra and correlations measured with both grids at all speeds can be represented using the von Karman interpolation formula with a single velocity and length scale. The spectra may be accurately represented over the entire wavenumber range by a modification of the von Karman interpolation formula that includes the effects of dissipation. These models are most accurate at the higher speeds (30 and 40 m/s).

  14. A Cell-Centered Multigrid Algorithm for All Grid Sizes

    NASA Technical Reports Server (NTRS)

    Gjesdal, Thor

    1996-01-01

    Multigrid methods are optimal; that is, their rate of convergence is independent of the number of grid points, because they use a nested sequence of coarse grids to represent different scales of the solution. This nesting does, however, usually lead to certain restrictions on the permissible size of the discretised problem. In cases where the modeler is free to specify the whole problem, such constraints are of little importance because they can be taken into consideration from the outset. We consider the situation in which there are other competing constraints on the resolution. These restrictions may stem from the physical problem (e.g., if the discretised operator contains experimental data measured on a fixed grid) or from the need to avoid limitations set by the hardware. In this paper we discuss a modification to the cell-centered multigrid algorithm so that it can be used for problems with any resolution. We discuss in particular a coarsening strategy and a choice of intergrid transfer operators that can handle grids with either an even or an odd number of cells. The method is described and applied to linear equations obtained by discretization of two- and three-dimensional second-order elliptic PDEs.
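
    One way a coarsening strategy can tolerate any resolution is ceil-halving: each coarse level has ceil(n/2) cells, so odd cell counts pose no obstacle. This is a hedged sketch of that idea only; the paper's actual coarsening rule and intergrid transfer operators are not reproduced.

```python
def coarsen_sequence(n, n_min=3):
    """Cell counts visited by a multigrid hierarchy that accepts any
    resolution: each coarser grid has ceil(n/2) cells, so both even and
    odd sizes are permissible at every level."""
    sizes = [n]
    while sizes[-1] > n_min:
        sizes.append((sizes[-1] + 1) // 2)  # ceil division by 2
    return sizes
```

    In contrast, a standard hierarchy requiring exact halving would only admit sizes of the form m * 2^k, which is exactly the restriction the article sets out to remove.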

  15. Multi-Resolution Unstructured Grid-Generation for Geophysical Applications on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2015-01-01

    An algorithm for the generation of non-uniform unstructured grids on ellipsoidal geometries is described. This technique is designed to generate high quality triangular and polygonal meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric and ocean simulation, and numerical weather prediction. Using a recently developed Frontal-Delaunay-refinement technique, a method for the construction of high-quality unstructured ellipsoidal Delaunay triangulations is introduced. A dual polygonal grid, derived from the associated Voronoi diagram, is also optionally generated as a by-product. Compared to existing techniques, it is shown that the Frontal-Delaunay approach typically produces grids with near-optimal element quality and smooth grading characteristics, while imposing relatively low computational expense. Initial results are presented for a selection of uniform and non-uniform ellipsoidal grids appropriate for large-scale geophysical applications. The use of user-defined mesh-sizing functions to generate smoothly graded, non-uniform grids is discussed.
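    A user-defined mesh-sizing function is typically post-processed so that element sizes grade smoothly. The sketch below is a simple one-dimensional stand-in (along latitude, with hypothetical sizes in km) for the kind of gradation limiting such generators apply; the iterative relaxation shown is a generic technique, not necessarily the one used by this algorithm.

```python
import numpy as np

def limit_gradation(h, spacing, g):
    """Enforce |h[i+1] - h[i]| <= g * spacing between neighbouring samples so
    the sizing function (and hence element sizes) grades smoothly."""
    h = np.asarray(h, dtype=float).copy()
    changed = True
    while changed:                       # relax until the bound holds everywhere
        changed = False
        for i in range(len(h) - 1):
            if h[i + 1] > h[i] + g * spacing:
                h[i + 1] = h[i] + g * spacing; changed = True
            if h[i] > h[i + 1] + g * spacing:
                h[i] = h[i + 1] + g * spacing; changed = True
    return h

lat = np.linspace(-90.0, 90.0, 181)                  # 1-degree samples
h_user = np.where(np.abs(lat) < 10, 25.0, 100.0)     # refine near the equator (km)
h_ok = limit_gradation(h_user, spacing=1.0, g=5.0)   # limited sizing function
```

    Limiting only ever reduces sizes, so the user's requested refinement is preserved while abrupt jumps are smoothed away.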

  16. Variational formulation of macroparticle models for electromagnetic plasma simulations

    DOE PAGES

    Stamm, Alexander B.; Shadwick, Bradley A.; Evstatiev, Evstati G.

    2014-06-01

    A variational method is used to derive a self-consistent macroparticle model for relativistic electromagnetic kinetic plasma simulations. Extending earlier work, discretization of the electromagnetic Low Lagrangian is performed via a reduction of the phase-space distribution function onto a collection of finite-sized macroparticles of arbitrary shape and discretization of field quantities onto a spatial grid. This approach may be used with lab frame coordinates or moving window coordinates; the latter can greatly improve computational efficiency for studying some types of laser-plasma interactions. The primary advantage of the variational approach is the preservation of Lagrangian symmetries, which in our case leads to energy conservation and thus avoids difficulties with grid heating. In addition, this approach decouples particle size from grid spacing and relaxes restrictions on particle shape, leading to low numerical noise. The variational approach also guarantees consistent approximations in the equations of motion and is amenable to higher order methods in both space and time. We restrict our attention to the 1.5-D case (one coordinate and two momenta). Lastly, simulations are performed with the new models and demonstrate energy conservation and low noise.
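    The decoupling of particle size from grid spacing can be illustrated with a toy deposition step. This is a hypothetical one-dimensional sketch (names and the tent shape are illustrative, not the paper's shape functions): a macroparticle of arbitrary half-width deposits its weight onto cell centers, normalized so charge is conserved whatever the width.

```python
import numpy as np

def deposit(xp, wp, grid, width):
    """Deposit particle weight wp at position xp onto cell centers `grid`
    using a tent shape of half-width `width`, independent of grid spacing.
    Normalizing by the shape sum conserves the deposited charge exactly."""
    s = np.maximum(0.0, 1.0 - np.abs(grid - xp) / width)   # tent shape samples
    s_sum = s.sum()
    return wp * s / s_sum if s_sum > 0 else np.zeros_like(grid)

grid = np.linspace(0.0, 1.0, 11)            # cell centers, dx = 0.1
rho = deposit(0.53, 1.0, grid, width=0.25)  # particle wider than one cell
```

    A wide, smooth shape spread over several cells is one reason such schemes exhibit lower numerical noise than nearest-grid-point deposition.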

  17. A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration

    2017-11-01

    An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
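    The within-window versus on-window statistics idea can be sketched with a box filter. The code below is a generic illustration on a synthetic stand-in field (not the PLIF/PIV data): window means form the resolved-scale signal, and the mean within-window variance measures the sub-grid part.

```python
import numpy as np

def window_stats(field, w):
    """Split a 2-D field into w-by-w interrogation windows: window means are
    the resolved-scale ('on window') signal; within-window variance is the
    sub-grid ('within window') contribution."""
    ny, nx = field.shape
    ny, nx = ny - ny % w, nx - nx % w               # trim to a multiple of w
    blocks = field[:ny, :nx].reshape(ny // w, w, nx // w, w)
    means = blocks.mean(axis=(1, 3))                # resolved scale
    subgrid_var = blocks.var(axis=(1, 3)).mean()    # mean within-window variance
    return means, subgrid_var

rng = np.random.default_rng(0)
plume = rng.normal(size=(128, 128))                 # synthetic stand-in field
coarse, sg_var = window_stats(plume, 8)
```

    By the law of total variance, the resolved-scale variance plus the mean sub-grid variance recovers the total variance exactly, which is what makes this decomposition useful for resolution-threshold studies.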

  18. Cascading failures in ac electricity grids.

    PubMed

    Rohden, Martin; Jung, Daniel; Tamrakar, Samyak; Kettemann, Stefan

    2016-09-01

    Sudden failure of a single transmission element in a power grid can induce a domino effect of cascading failures, which can lead to the isolation of a large number of consumers or even to the failure of the entire grid. Here we present results of the simulation of cascading failures in power grids, using an alternating current (AC) model. We first apply this model to a regular square grid topology. For a random placement of consumers and generators on the grid, the probability to find more than a certain number of unsupplied consumers decays as a power law and obeys a scaling law with respect to system size. Varying the transmitted power threshold above which a transmission line fails does not seem to change the power-law exponent q≈1.6. Furthermore, we study the influence of the placement of generators and consumers on the number of affected consumers and demonstrate that large clusters of generators and consumers are especially vulnerable to cascading failures. As a real-world topology, we consider the German high-voltage transmission grid. Applying the dynamic AC model and considering a random placement of consumers, we find that the probability to disconnect more than a certain number of consumers depends strongly on the threshold. For large thresholds the decay is clearly exponential, while for small ones the decay is slow, indicating a power-law decay.
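    The power-law decay of the outage-size distribution is usually diagnosed from the slope of the complementary CDF on log-log axes. A minimal sketch on synthetic data (the exponent 1.6 is taken from the abstract; the data are constructed, not simulated):

```python
import numpy as np

# Synthetic complementary CDF P(S > s) ~ s**(-q) with q = 1.6, the exponent
# reported for the square-grid topology; the slope of log P vs log s
# recovers -q by linear regression.
s = np.logspace(1, 4, 50)
ccdf = s ** -1.6
slope = np.polyfit(np.log(s), np.log(ccdf), 1)[0]
```

    On real cascade data the same fit distinguishes the slow power-law regime (small thresholds) from the exponential decay seen at large thresholds, where log P vs log s curves downward instead of staying linear.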

  19. SU-E-T-454: Impact of Calculation Grid Size On Dosimetry and Radiobiological Parameters for Head and Neck IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, S; Das, I; Indiana University Health Methodist Hospital, Indianapolis, IN

    2014-06-01

    Purpose: IMRT has become standard of care for complex treatments to optimize dose to target and spare normal tissues. However, the impact of calculation grid size is not widely known, especially on dose distribution, tumor control probability (TCP) and normal tissue complication probability (NTCP), which is investigated in this study. Methods: Ten head and neck IMRT patients treated with 6 MV photons were chosen for this study. Using Eclipse TPS, treatment plans were generated for different grid sizes in the range 1–5 mm for the same optimization criterion with specific dose-volume constraints. The dose volume histogram (DVH) was calculated for all IMRT plans and dosimetric data were compared. ICRU-83 dose points such as D2%, D50%, D98%, as well as the homogeneity and conformity indices (HI, CI) were calculated. In addition, TCP and NTCP were calculated from DVH data. Results: The PTV mean dose and TCP decrease with increasing grid size, with an average decrease of 2% in mean dose and 3% in TCP, respectively. Increasing the grid size from 1 to 5 mm increased the average mean dose and NTCP for the left parotid by 6.0% and 8.0%, respectively. Similar patterns were observed for other OARs such as cochlea, parotids and spinal cord. The HI increases by up to 60% and the CI decreases on average by 3.5% between 1 and 5 mm grids, which resulted in decreased TCP and increased NTCP values. The number of points meeting the gamma criteria of ±3% dose difference and ±3mm DTA was higher with a 1 mm grid (97.2% on average) than with a 5 mm grid (91.3%). Conclusion: A smaller calculation grid provides superior dosimetry with improved TCP and reduced NTCP values. The effect is more pronounced for smaller OARs. Thus, the smallest possible grid size should be used for accurate dose calculation, especially in head and neck planning.
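    NTCP is commonly computed from DVH data via the generalized equivalent uniform dose (gEUD) and the Lyman-Kutcher-Burman (LKB) probit model. The sketch below shows that pipeline in its standard textbook form; the abstract does not state which model was used, and the DVH bins and parameters (a, TD50, m) here are purely illustrative, not values from the study.

```python
import math

def geud(dose_bins, vol_fracs, a):
    """Generalized equivalent uniform dose from a differential DVH.
    a = 1 reduces to the mean dose (parallel-organ limit)."""
    return sum(v * d ** a for d, v in zip(dose_bins, vol_fracs)) ** (1.0 / a)

def ntcp_lkb(eud, td50, m):
    """Lyman-Kutcher-Burman NTCP: probit function of (EUD - TD50)/(m*TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical parotid-like DVH; TD50, m, a are illustrative only.
dvh_dose = [10.0, 20.0, 30.0, 40.0]   # Gy
dvh_vol = [0.4, 0.3, 0.2, 0.1]        # volume fractions, sum to 1
p = ntcp_lkb(geud(dvh_dose, dvh_vol, a=1.0), td50=39.9, m=0.40)
```

    Because NTCP is a steep function of EUD near TD50, the few-percent mean-dose shifts reported above between 1 mm and 5 mm grids translate directly into the several-percent NTCP changes observed.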

  20. Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling

    NASA Astrophysics Data System (ADS)

    Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.

    2013-09-01

    Water run-off modelling applied within urban areas requires an appropriate detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN-objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5m cell-spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for height grids with different cell spacings at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large scale Topographic Base map (BGT, map scale 1:1.000), and the National height model of the Netherlands AHN2 (10 points per square meter on average). 
Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows that water barriers are better preserved with the new method. This research confirms the idea that topographical information, mainly the boundary locations and object classes, can enrich the height grid for this hydrological application.
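    The core idea, coarsening the height grid while letting class labels decide how heights are aggregated, can be sketched as follows. This is a simplified stand-in for the paper's neighbour-aware procedure: blocks containing a barrier class keep their maximum height so thin barriers survive, while other blocks are averaged. The class codes are hypothetical.

```python
import numpy as np

BARRIER_CLASSES = {1}   # e.g. a building/kerb label; illustrative code only

def generalise(height, labels, f):
    """Coarsen f-by-f blocks: keep the maximum height where a block contains
    a barrier class (so thin water barriers survive), else the block mean."""
    ny, nx = height.shape
    h = height.reshape(ny // f, f, nx // f, f)
    l = labels.reshape(ny // f, f, nx // f, f)
    has_barrier = np.isin(l, list(BARRIER_CLASSES)).any(axis=(1, 3))
    return np.where(has_barrier, h.max(axis=(1, 3)), h.mean(axis=(1, 3)))

height = np.zeros((4, 4)); height[1, 2] = 2.0        # one thin, high kerb cell
labels = np.zeros((4, 4), int); labels[1, 2] = 1
coarse = generalise(height, labels, 2)               # 2x2 blocks -> one cell each
```

    Plain averaging would dilute the kerb to a quarter of its height in this example; the class-aware rule keeps it at full height, which is exactly the barrier-preservation behaviour the validation runs test for.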

  1. Effect of Finite Particle Size on Convergence of Point Particle Models in Euler-Lagrange Multiphase Dispersed Flow

    NASA Astrophysics Data System (ADS)

    Nili, Samaun; Park, Chanyoung; Haftka, Raphael T.; Kim, Nam H.; Balachandar, S.

    2017-11-01

    Point particle methods are extensively used in simulating Euler-Lagrange multiphase dispersed flow. When particles are much smaller than the Eulerian grid the point particle model is on firm theoretical ground. However, this standard approach of evaluating the gas-particle coupling at the particle center fails to converge as the Eulerian grid is reduced below particle size. We present an approach to model the interaction between particles and fluid for finite size particles that permits convergence. We use the generalized Faxen form to compute the force on a particle and compare the results against traditional point particle method. We apportion the different force components on the particle to fluid cells based on the fraction of particle volume or surface in the cell. The application is to a one-dimensional model of shock propagation through a particle-laden field at moderate volume fraction, where the convergence is achieved for a well-formulated force model and back coupling for finite size particles. Comparison with 3D direct fully resolved numerical simulations will be used to check if the approach also improves accuracy compared to the point particle model. Work supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
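    The apportioning step, splitting a finite particle's force contribution over the fluid cells it overlaps, can be sketched in one dimension. This is a hypothetical illustration of the volume-fraction weighting described above (names and geometry are mine, not the authors'): the particle is an interval, and each cell receives the fraction of that interval it contains.

```python
import numpy as np

def cell_overlap_fractions(xp, radius, edges):
    """Fraction of a 1-D 'particle' interval [xp - r, xp + r] overlapping each
    cell, used to apportion force components when a particle spans cells
    smaller than itself (the regime where centre-point coupling fails)."""
    left, right = xp - radius, xp + radius
    lo = np.clip(edges[:-1], left, right)    # overlap start per cell
    hi = np.clip(edges[1:], left, right)     # overlap end per cell
    return np.maximum(hi - lo, 0.0) / (right - left)

edges = np.linspace(0.0, 1.0, 11)               # ten cells, dx = 0.1
w = cell_overlap_fractions(0.50, 0.18, edges)   # particle spans four cells
```

    Because the weights always sum to one, the total coupling force is conserved as the Eulerian grid is refined below the particle size, which is what permits grid convergence.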

  2. A FEDERATED PARTNERSHIP FOR URBAN METEOROLOGICAL AND AIR QUALITY MODELING

    EPA Science Inventory

    Recently, applications of urban meteorological and air quality models have been performed at resolutions on the order of km grid sizes. This necessitated development and incorporation of high resolution landcover data and additional boundary layer parameters that serve to descri...

  3. Catching ghosts with a coarse net: use and abuse of spatial sampling data in detecting synchronization

    PubMed Central

    2017-01-01

    Synchronization of population dynamics in different habitats is a frequently observed phenomenon. A common mathematical tool to reveal synchronization is the (cross)correlation coefficient between time courses of values of the population size of a given species where the population size is evaluated from spatial sampling data. The corresponding sampling net or grid is often coarse, i.e. it does not resolve all details of the spatial configuration, and the evaluation error—i.e. the difference between the true value of the population size and its estimated value—can be considerable. We show that this estimation error can make the value of the correlation coefficient very inaccurate or even irrelevant. We consider several population models to show that the value of the correlation coefficient calculated on a coarse sampling grid rarely exceeds 0.5, even if the true value is close to 1, so that the synchronization is effectively lost. We also observe ‘ghost synchronization’ when the correlation coefficient calculated on a coarse sampling grid is close to 1 but in reality the dynamics are not correlated. Finally, we suggest a simple test to check the sampling grid coarseness and hence to distinguish between the true and artifactual values of the correlation coefficient. PMID:28202589

  4. Challenges in Modeling of the Global Atmosphere

    NASA Astrophysics Data System (ADS)

    Janjic, Zavisa; Djurdjevic, Vladimir; Vasic, Ratko; Black, Tom

    2015-04-01

    The massively parallel computer architectures require that some widely adopted modeling paradigms be reconsidered in order to utilize more productively the power of parallel processing. For high computational efficiency with distributed memory, each core should work on a small subdomain of the full integration domain, and exchange only a few rows of halo data with the neighbouring cores. However, the described scenario implies that the discretization used in the model is horizontally local. The spherical geometry further complicates the problem. Various grid topologies will be discussed and examples will be shown. The latitude-longitude grid with local in space and explicit in time differencing was an early choice and has remained in use ever since. The problem with this method is that the grid size in the longitudinal direction tends to zero as the poles are approached. So, in addition to having unnecessarily high resolution near the poles, polar filtering has to be applied in order to use a reasonably large time step. However, the polar filtering requires transpositions involving extra communications. The spectral transform method and the semi-implicit semi-Lagrangian schemes opened the way for a wide application of the spectral representation. With some variations, these techniques are used in most major centers. However, the horizontal non-locality is inherent to the spectral representation and implicit time differencing, which inhibits scaling on a large number of cores. In this respect the lat-lon grid with a fast Fourier transform represents a significant step in the right direction, particularly at high resolutions where the Legendre transforms become increasingly expensive. Other grids with reduced variability of grid distances such as various versions of the cubed sphere and the hexagonal/pentagonal ("soccer ball") grids were proposed almost fifty years ago. 
However, on these grids, large-scale (wavenumber 4 and 5) fictitious solutions ("grid imprinting") with significant amplitudes can develop. Due to their large scales, that are comparable to the scales of the dominant Rossby waves, such fictitious solutions are hard to identify and remove. Another new challenge on the global scale is that the limit of validity of the hydrostatic approximation is rapidly being approached. Having in mind the sensitivity of extended deterministic forecasts to small disturbances, we may need global non-hydrostatic models sooner than we think. The unified Non-hydrostatic Multi-scale Model (NMMB) that is being developed at the National Centers for Environmental Prediction (NCEP) as a part of the new NOAA Environmental Modeling System (NEMS) will be discussed as an example. The non-hydrostatic dynamics were designed in such a way as to avoid over-specification. The global version is run on the latitude-longitude grid, and the polar filter selectively slows down the waves that would otherwise be unstable. The model formulation has been successfully tested on various scales. A global forecasting system based on the NMMB has been run in order to test and tune the model. The skill of the medium range forecasts produced by the NMMB is comparable to that of other major medium range models. The computational efficiency of the global NMMB on parallel computers is good.
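    The pole problem described above follows directly from the geometry of the lat-lon grid: zonal spacing shrinks with cos(latitude), and with explicit time differencing the CFL condition ties the global time step to the smallest spacing. A short calculation (wave speed and grid values are illustrative):

```python
import numpy as np

R_EARTH = 6.371e6                       # Earth radius, m
DLON = np.deg2rad(1.0)                  # 1-degree longitude grid (illustrative)

def zonal_spacing(lat_deg):
    """Physical grid spacing in the zonal direction on a lat-lon grid."""
    return R_EARTH * np.cos(np.deg2rad(lat_deg)) * DLON

# With explicit differencing, the CFL-stable step scales with the smallest
# spacing, so without polar filtering the pole rows dictate the global step.
c = 300.0                               # representative fast wave speed, m/s
dt_equator = zonal_spacing(0.0) / c
dt_near_pole = zonal_spacing(89.0) / c
```

    At 89 degrees the spacing, and hence the stable step, is under 2% of its equatorial value, which is why polar filtering (selectively slowing the otherwise unstable waves, as in the NMMB) is needed to keep a usable time step.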

  5. Sub-grid drag models for horizontal cylinder arrays immersed in gas-particle multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2013-09-08

    Immersed cylindrical tube arrays often are used as heat exchangers in gas-particle fluidized beds. In multiphase computational fluid dynamics (CFD) simulations of large fluidized beds, explicit resolution of small cylinders is computationally infeasible. Instead, the cylinder array may be viewed as an effective porous medium in coarse-grid simulations. The cylinders' influence on the suspension as a whole, manifested as an effective drag force, and on the relative motion between gas and particles, manifested as a correction to the gas-particle drag, must be modeled via suitable sub-grid constitutive relationships. In this work, highly resolved unit-cell simulations of flow around an array of horizontal cylinders, arranged in a staggered configuration, are filtered to construct sub-grid, or `filtered', drag models, which can be implemented in coarse-grid simulations. The force on the suspension exerted by the cylinders is comprised of, as expected, a buoyancy contribution, and a kinetic component analogous to fluid drag on a single cylinder. Furthermore, the introduction of tubes also is found to enhance segregation at the scale of the cylinder size, which, in turn, leads to a reduction in the filtered gas-particle drag.

  6. Implication of observed cloud variability for parameterizations of microphysical and radiative transfer processes in climate models

    NASA Astrophysics Data System (ADS)

    Huang, D.; Liu, Y.

    2014-12-01

    The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined by use of long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in autoconversion rate whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution functions perform best for the grid-average autoconversion rate, with a mean relative bias of less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest and thus there is no single distribution function that works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates monotonically increases with increasing window sizes, suggesting the increasing importance of subgrid variability with increasing grid sizes.
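    Why the grid-average rate differs from the rate at the grid-average value is a Jensen-inequality effect, and for a lognormal PDF it has a closed form. The sketch below assumes an autoconversion rate proportional to q**p with p = 2.47 (the Khairoutdinov-Kogan exponent, used here only as a representative nonlinearity; the mean and standard deviation are illustrative):

```python
import numpy as np

p = 2.47                                        # nonlinearity exponent (assumed)
mean_q, std_q = 0.3, 0.3                        # liquid water content, illustrative

# Lognormal parameters matching the given mean and standard deviation
sigma2 = np.log(1.0 + (std_q / mean_q) ** 2)
mu = np.log(mean_q) - 0.5 * sigma2

rate_of_mean = mean_q ** p                              # rate at the grid-mean q
mean_of_rate = np.exp(p * mu + 0.5 * p ** 2 * sigma2)   # true grid-mean rate
enhancement = mean_of_rate / rate_of_mean               # = exp(p(p-1)sigma2/2)
```

    For a convex rate (p > 1) the enhancement always exceeds one and grows with the subgrid variance, which matches the reported increase of the bias with window (grid) size.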

  7. Ocean regional circulation model sensitivity to resolution of the lateral boundary conditions

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan

    2017-04-01

    Dynamical downscaling with nested regional oceanographic models is an effective approach for operational coastal weather forecasting and for long-term climate projection on the ocean. However, nesting procedures introduce unwanted errors into the downscaling because of differences in numerical grid size and updating step between the driving and nested models. Such unavoidable errors restrict the application of the Ocean Regional Circulation Model (ORCMs) in both short-term forecasts and long-term projections. The current work identifies the effects of errors induced by computational limitations during nesting procedures on the downscaled results of the ORCMs. The errors are quantitatively evaluated, source by source, with the Big-Brother Experiment (BBE), which separates the identified errors from each other and quantitatively assesses the uncertainties by employing the same model as both the driving and the nested model. Here, we focus on errors arising from the two main factors in the nesting procedure: the difference in spatial grids and the temporal updating step. After running the diverse cases of the BBE separately, a Taylor diagram was adopted to analyze the results and to suggest an optimum in terms of grid size, updating period, and domain size. Key words: lateral boundary condition, error, ocean regional circulation model, Big-Brother Experiment. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Development of integrated estuarine management system" and a National Research Foundation of Korea (NRF) Grant (No. 2015R1A5A 7037372) funded by MSIP of Korea. The authors thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.

  8. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.

  9. Development of a hybrid 3-D hydrological model to simulate hillslopes and the regional unconfined aquifer system in Earth system models

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Broxton, P. D.; Brunke, M.; Gochis, D.; Niu, G. Y.; Pelletier, J. D.; Troch, P. A. A.; Zeng, X.

    2015-12-01

    The terrestrial hydrological system, including surface and subsurface water, is an essential component of the Earth's climate system. Over the past few decades, land surface modelers have built one-dimensional (1D) models resolving the vertical flow of water through the soil column for use in Earth system models (ESMs). These models generally have a relatively coarse model grid size (~25-100 km) and only account for sub-grid lateral hydrological variations using simple parameterization schemes. At the same time, hydrologists have developed detailed high-resolution (~0.1-10 km grid size) three dimensional (3D) models and showed the importance of accounting for the vertical and lateral redistribution of surface and subsurface water on soil moisture, the surface energy balance and ecosystem dynamics on these smaller scales. However, computational constraints have limited the implementation of the high-resolution models for continental and global scale applications. The current work presents a hybrid-3D hydrological approach in which the 1D vertical soil column model (available in many ESMs) is coupled with a high-resolution lateral flow model (h2D) to simulate subsurface flow and overland flow. H2D accounts for both local-scale hillslope and regional-scale unconfined aquifer responses (i.e. riparian zone and wetlands). This approach was shown to give results comparable to those obtained by an explicit 3D Richards model for the subsurface, but improves runtime efficiency considerably. The h3D approach is implemented for the Delaware river basin, where the Noah-MP land surface model (LSM) is used to calculate vertical energy and water exchanges with the atmosphere using a 10 km grid resolution. Noah-MP was coupled within the WRF-Hydro infrastructure with the lateral 1 km grid resolution h2D model, for which the average depth-to-bedrock, hillslope width function and soil parameters were estimated from digital datasets. 
The ability of this h3D approach to simulate the hydrological dynamics of the Delaware River basin will be assessed by comparing the model results (both hydrological performance and numerical efficiency) with the standard setup of the Noah-MP model and a high-resolution (1 km) version of Noah-MP, which also explicitly accounts for lateral subsurface and overland flow.

  10. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
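    The three stimulus degradations varied in the study, grid resolution, gray-level quantization, and random dot dropout, can be sketched as an image-processing pipeline. This is a generic illustration on a synthetic stand-in image (square dots rather than the study's circular phosphenes, and hypothetical parameter values):

```python
import numpy as np

def pixelize(image, grid_n, gray_levels, dropout, rng):
    """Reduce an image in [0, 1] to a grid_n x grid_n dot array with quantized
    gray levels and a fraction of randomly dropped ('dead') phosphenes."""
    h, w = image.shape
    bh, bw = h // grid_n, w // grid_n
    img = image[:bh * grid_n, :bw * grid_n]
    dots = img.reshape(grid_n, bh, grid_n, bw).mean(axis=(1, 3))   # grid sampling
    dots = np.round(dots * (gray_levels - 1)) / (gray_levels - 1)  # quantize
    dots[rng.random(dots.shape) < dropout] = 0.0                   # dead dots
    return dots

rng = np.random.default_rng(0)
face = rng.random((128, 128))           # stand-in for a face image in [0, 1]
view = pixelize(face, grid_n=16, gray_levels=4, dropout=0.3, rng=rng)
```

    Sweeping grid_n, gray_levels, and dropout around a standard condition reproduces the structure of the 16-condition design described above.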

  11. High-resolution two-dimensional and three-dimensional modeling of wire grid polarizers and micropolarizer arrays

    NASA Astrophysics Data System (ADS)

    Vorobiev, Dmitry; Ninkov, Zoran

    2017-11-01

    Recent advances in photolithography allowed the fabrication of high-quality wire grid polarizers for the visible and near-infrared regimes. In turn, micropolarizer arrays (MPAs) based on wire grid polarizers have been developed and used to construct compact, versatile imaging polarimeters. However, the contrast and throughput of these polarimeters are significantly worse than one might expect based on the performance of large area wire grid polarizers or MPAs, alone. We investigate the parameters that affect the performance of wire grid polarizers and MPAs, using high-resolution two-dimensional and three-dimensional (3-D) finite-difference time-domain simulations. We pay special attention to numerical errors and other challenges that arise in models of these and other subwavelength optical devices. Our tests show that simulations of these structures in the visible and near-IR begin to converge numerically when the mesh size is smaller than ~4 nm. The performance of wire grid polarizers is very sensitive to the shape, spacing, and conductivity of the metal wires. Using 3-D simulations of micropolarizer "superpixels," we directly study the cross talk due to diffraction at the edges of each micropolarizer, which decreases the contrast of MPAs to ~200:1.

  12. Assessment of an Euler-Interacting Boundary Layer Method Using High Reynolds Number Transonic Flight Data

    NASA Technical Reports Server (NTRS)

    Bonhaus, Daryl L.; Maddalon, Dal V.

    1998-01-01

    Flight-measured high Reynolds number turbulent-flow pressure distributions on a transport wing in transonic flow are compared to unstructured-grid calculations to assess the predictive ability of a three-dimensional Euler code (USM3D) coupled to an interacting boundary layer module. The two experimental pressure distributions selected for comparative analysis with the calculations are complex and turbulent but typical of an advanced technology laminar flow wing. An advancing front method (VGRID) was used to generate several tetrahedral grids for each test case. Initial calculations left considerable room for improvement in accuracy. Studies were then made of experimental errors, transition location, viscous effects, nacelle flow modeling, number and placement of spanwise boundary layer stations, and grid resolution. The most significant improvements in the accuracy of the calculations were gained by improvement of the nacelle flow model and by refinement of the computational grid. Final calculations yield results in close agreement with the experiment. Indications are that further grid refinement would produce additional improvement but would require more computer memory than is available. The appendix data compare the experimental attachment line location with calculations for different grid sizes. Good agreement is obtained between the experimental and calculated attachment line locations.

  13. An Optimal Current Controller Design for a Grid Connected Inverter to Improve Power Quality and Test Commercial PV Inverters.

    PubMed

    Algaddafi, Ali; Altuwayjiri, Saud A; Ahmed, Oday A; Daho, Ibrahim

    2017-01-01

    Grid connected inverters play a crucial role in generating energy to be fed to the grid. A passive filter, either an L- or an LCL-filter, is commonly used to suppress the switching frequency harmonics produced by the inverter. The LCL-filter is smaller than the L-filter, but choosing optimal values for its components is challenging because of resonance, which can affect stability. This paper presents a simple inverter controller design with an L-filter. The control topology is simple and applied easily using traditional control theory. Fast Fourier Transform analysis is used to compare different grid connected inverter control topologies. The modelled grid connected inverter with the proposed controller complies with the IEEE-1547 standard, and total harmonic distortion of the output current of the modelled inverter has been just 0.25% with an improved output waveform. Experimental work on a commercial PV inverter is then presented, including the effect of strong and weak grid connection. Inverter effects on the resistive load connected at the point of common coupling are presented. Results show that the voltage and current of resistive load, when the grid is interrupted, are increased, which may cause failure or damage for connecting appliances.
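    The total harmonic distortion (THD) figure quoted above is conventionally computed from an FFT of the output current: the RMS of the harmonic components divided by the fundamental. A self-contained sketch on a synthetic 50 Hz current with a 0.25% third harmonic (the waveform and sampling values are illustrative, not the paper's measurements):

```python
import numpy as np

fs, f0, cycles = 5000.0, 50.0, 10
t = np.arange(int(fs * cycles / f0)) / fs          # an integer number of cycles
i_out = (np.sin(2 * np.pi * f0 * t)
         + 0.0025 * np.sin(2 * np.pi * 3 * f0 * t))   # 0.25% third harmonic

spec = np.abs(np.fft.rfft(i_out))
k0 = int(round(f0 * len(t) / fs))                  # fundamental bin index
harmonics = spec[2 * k0::k0]                       # 2nd, 3rd, ... harmonic bins
thd = np.sqrt(np.sum(harmonics ** 2)) / spec[k0]   # ratio of harmonic to fundamental RMS
```

    Sampling an exact integer number of cycles keeps the harmonics on FFT bins (no leakage), so the computed THD matches the injected 0.25%; IEEE-1547 compliance checks apply the same analysis per harmonic order.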

  14. An Optimal Current Controller Design for a Grid Connected Inverter to Improve Power Quality and Test Commercial PV Inverters

    PubMed Central

    Altuwayjiri, Saud A.; Ahmed, Oday A.; Daho, Ibrahim

    2017-01-01

Grid connected inverters play a crucial role in feeding generated energy to the grid. A passive filter, either an L- or an LCL-filter, is commonly used to suppress the switching-frequency harmonics produced by the inverter. The LCL-filter is smaller in size than the L-filter, but choosing optimal values for the LCL-filter is challenging because of resonance, which can affect stability. This paper presents a simple inverter controller design with an L-filter. The control topology is simple and easily applied using traditional control theory. Fast Fourier Transform analysis is used to compare different grid connected inverter control topologies. The modelled grid connected inverter with the proposed controller complies with the IEEE-1547 standard, and the total harmonic distortion of its output current was just 0.25% with an improved output waveform. Experimental work on a commercial PV inverter is then presented, including the effects of strong and weak grid connections. Inverter effects on a resistive load connected at the point of common coupling are presented. Results show that the voltage and current of the resistive load increase when the grid is interrupted, which may cause failure of or damage to connected appliances. PMID:28540362

  15. Impact of cell size on inventory and mapping errors in a cellular geographic information system

    NASA Technical Reports Server (NTRS)

    Wehde, M. E. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.

  16. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-08-31

The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. Our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.
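For orientation, the scheme family the RSG method belongs to can be sketched in one dimension: a velocity-stress staggered-grid update, second order in space and time (the paper's fourth-order temporal correction and rotation to 2D anisotropic media are not reproduced here). All material values below are hypothetical.

```python
import numpy as np

# Minimal 1D velocity-stress staggered-grid finite-difference sketch
# (2nd order in space and time). Hypothetical homogeneous medium.
nx, dx, dt, nt = 300, 5.0, 0.001, 300
c, rho = 2000.0, 1000.0      # wave speed (m/s) and density (kg/m^3)
mu = rho * c * c             # elastic modulus

v = np.zeros(nx)             # particle velocity on integer nodes
s = np.zeros(nx - 1)         # stress on half-integer (staggered) nodes
v[nx // 2] = 1.0             # point impulse as the initial condition

for _ in range(nt):
    s += dt * mu * np.diff(v) / dx          # ds/dt = mu * dv/dx
    v[1:-1] += dt / rho * np.diff(s) / dx   # dv/dt = (1/rho) * ds/dx

# The Courant number c*dt/dx governs stability, and together with the
# temporal order it controls the numerical dispersion discussed above.
print(c * dt / dx)  # 0.4
```

Raising the temporal step size pushes the Courant number toward the stability limit and amplifies temporal dispersion, which is exactly what a fourth-order-in-time scheme mitigates.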

  17. An improved rotated staggered-grid finite-difference method with fourth-order temporal accuracy for elastic-wave modeling in anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Huang, Lianjie

The rotated staggered-grid (RSG) finite-difference method is a powerful tool for elastic-wave modeling in 2D anisotropic media where the symmetry axes of anisotropy are not aligned with the coordinate axes. We develop an improved RSG scheme with fourth-order temporal accuracy to reduce the numerical dispersion associated with prolonged wave propagation or a large temporal step size. The high-order temporal accuracy is achieved by including high-order temporal derivatives, which can be converted to high-order spatial derivatives to reduce computational cost. Dispersion analysis and numerical tests show that our method exhibits very low temporal dispersion even with a large temporal step size for elastic-wave modeling in complex anisotropic media. Using the same temporal step size, our method is more accurate than the conventional RSG scheme. Our improved RSG scheme is therefore suitable for prolonged modeling of elastic-wave propagation in 2D anisotropic media.

  18. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    DOE PAGES

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.; ...

    2017-08-19

Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy’s Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to the 1-cm scale. Separate models were created for two areas, herein referred to as the test table grid region and the near-field grid region. The test table grid includes the region within ~40 m of surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid covered a broader area, 90–130 m from surface ground zero, with photographs collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm topographic resolution needed to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological changes. The near-field grid data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including uplift and subsidence, rock fall, and slope sloughing.
This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.
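The change-detection step described above amounts to differencing co-registered pre- and post-event DEMs and thresholding at the resolvable level (~3 cm here). The sketch below is a minimal, generic version; the 4×4 elevation arrays and change values are hypothetical.

```python
import numpy as np

def dem_change_map(dem_pre, dem_post, threshold=0.03):
    """Classify elevation change between co-registered DEMs (metres).

    Returns the raw difference and a class map: +1 for uplift, -1 for
    subsidence, 0 where change is below the detection threshold
    (here ~3 cm, matching the resolution quoted in the abstract).
    """
    dz = dem_post - dem_pre
    change = np.zeros_like(dz, dtype=int)
    change[dz > threshold] = 1
    change[dz < -threshold] = -1
    return dz, change

# Hypothetical toy DEMs: 10 cm uplift at one cell, 5 cm subsidence at another
pre = np.zeros((4, 4))
post = pre.copy()
post[1, 1] = 0.10
post[2, 3] = -0.05
dz, cls = dem_change_map(pre, post)
print(cls[1, 1], cls[2, 3], cls[0, 0])  # 1 -1 0
```

In practice the threshold would be set from the DEM co-registration error rather than fixed a priori.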

  19. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    NASA Astrophysics Data System (ADS)

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.; Swanson, Erika M.; Cooley, James A.

    2017-08-01

Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy's Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to the 1-cm scale. Separate models were created for two areas, herein referred to as the test table grid region and the near-field grid region. The test table grid includes the region within 40 m of surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid covered a broader area, 90-130 m from surface ground zero, with photographs collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm topographic resolution needed to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological changes. The near-field grid data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including uplift and subsidence, rock fall, and slope sloughing.
This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.

  20. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.

Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy’s Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to the 1-cm scale. Separate models were created for two areas, herein referred to as the test table grid region and the near-field grid region. The test table grid includes the region within ~40 m of surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid covered a broader area, 90–130 m from surface ground zero, with photographs collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm topographic resolution needed to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological changes. The near-field grid data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including uplift and subsidence, rock fall, and slope sloughing.
This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.

  1. Performance model for grid-connected photovoltaic inverters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyson, William Earl; Galbraith, Gary M.; King, David L.

    2007-09-01

This document provides an empirically based performance model for grid-connected photovoltaic inverters, used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential- and commercial-size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements in operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.

  2. Comparative analysis of zonal systems for macro-level crash modeling.

    PubMed

    Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen

    2017-06-01

Macro-level traffic safety analysis has been undertaken at different spatial configurations. However, clear guidelines for selecting an appropriate zonal system for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling, considering census tracts (CTs), state-wide traffic analysis zones (STAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) are developed for the three zonal systems, both with and without consideration of spatial autocorrelation. The study proposes a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework: the study region is partitioned into grids of various sizes, and the prediction accuracy of the various macro models is assessed within those grids. The comparison results for all crash types indicated that models based on TADs consistently offer better performance than the others, and that models considering spatial autocorrelation outperform those that do not. Based on the modeling results and the motivations for developing the different zonal systems, it is recommended to use CTs for socio-demographic data collection, TAZs for transportation demand forecasting, and TADs for transportation safety planning. The findings from this study can help practitioners select appropriate zonal systems for traffic crash modeling, leading to more effective policies for enhancing transportation safety. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  3. Air-core grid for scattered x-ray rejection

    DOEpatents

    Logan, C.M.; Lane, S.M.

    1995-10-03

The invention is directed to a grid used in x-ray imaging applications to block scattered radiation while allowing the desired imaging radiation to pass through, and to a process for making the grid. The grid is composed of glass containing lead oxide and eliminates the spacer material used in prior known grids; it is therefore an air-core grid. The glass is arranged in a pattern in which a large fraction of the area is open, allowing the imaging radiation to pass through. A small pore size is used, and the grid thickness is chosen to provide high scatter rejection. For example, the grid may be produced with a 200 µm pore size, 80% open area, and 4 mm thickness. 2 figs.
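The example dimensions above imply simple figures of merit for such a grid: the pore aspect ratio (thickness over pore size) and the largest off-axis angle at which a scattered photon can still traverse a pore. A quick sketch using the stated 200 µm / 4 mm / 80% values:

```python
import math

# Geometry sketch for an air-core anti-scatter grid. Scatter rejection
# grows with the pore aspect ratio; primary transmission is set by the
# open-area fraction. Values are the example dimensions from the patent.
pore_size_um = 200.0
thickness_mm = 4.0
open_area_fraction = 0.80

aspect_ratio = (thickness_mm * 1000.0) / pore_size_um
# Largest off-axis angle at which a photon can pass straight through a pore:
max_angle_deg = math.degrees(math.atan(pore_size_um / (thickness_mm * 1000.0)))
print(aspect_ratio, round(max_angle_deg, 2))  # 20.0 2.86
```

An aspect ratio of 20 restricts transmitted rays to within about 3° of the pore axis, which is what gives the grid its high scatter rejection while the 80% open area preserves most of the primary beam.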

  4. Air-core grid for scattered x-ray rejection

    DOEpatents

    Logan, Clinton M.; Lane, Stephen M.

    1995-01-01

The invention is directed to a grid used in x-ray imaging applications to block scattered radiation while allowing the desired imaging radiation to pass through, and to a process for making the grid. The grid is composed of glass containing lead oxide and eliminates the spacer material used in prior known grids; it is therefore an air-core grid. The glass is arranged in a pattern in which a large fraction of the area is open, allowing the imaging radiation to pass through. A small pore size is used, and the grid thickness is chosen to provide high scatter rejection. For example, the grid may be produced with a 200 µm pore size, 80% open area, and 4 mm thickness.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokharel, S; Rana, S

Purpose: The purpose of this study is to evaluate the effect of grid size in the Eclipse AcurosXB dose calculation algorithm for SBRT lung. Methods: Five previously treated SBRT lung cases were chosen for the present study. Four of the plans were 5-field conventional IMRT and one was a RapidArc plan. All five cases were calculated with the five grid sizes (1, 1.5, 2, 2.5 and 3 mm) available for the AXB algorithm, with the same plan normalization. Dosimetric indices relevant to SBRT, along with MUs and calculation times, were recorded for the different grid sizes. The maximum difference was calculated as a percentage of the mean of all five values. All plans underwent IMRT QA with portal dosimetry. Results: The maximum difference in MUs was within 2%. The calculation time increased by as much as a factor of seven from the largest (3 mm) to the smallest (1 mm) grid size. The largest differences in PTV minimum, maximum and mean dose were 7.7%, 1.5% and 1.6%, respectively. The highest D2-Max difference was 6.1%. The highest differences in ipsilateral lung mean dose, V5Gy, V10Gy and V20Gy were 2.6%, 2.4%, 1.9% and 3.8%, respectively. The maximum differences in heart, cord and esophagus dose were 6.5%, 7.8% and 4.02%, respectively. The IMRT gamma passing rate at 2%/2mm remained within 1.5% across all grid sizes, with at least 98% of points passing. Conclusion: This work indicates that the smallest grid size of 1 mm available in AXB is not necessarily required for accurate dose calculation. No significant change in the IMRT passing rate was observed when the grid size was reduced below 2 mm. Although the maximum percentage differences of some dosimetric indices appear large, most are clinically insignificant in absolute dose values. We therefore conclude that a 2 mm grid size is the best compromise between dose calculation accuracy and calculation time.
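The comparison metric used in this study, the maximum difference of a dosimetric index expressed as a percentage of the mean of all five grid-size values, is easy to reproduce. The sketch below uses hypothetical PTV mean doses, not values from the abstract:

```python
def max_diff_percent_of_mean(values):
    """Maximum spread of an index across grid sizes, expressed as a
    percentage of the mean of all values (the metric described above)."""
    mean = sum(values) / len(values)
    return 100.0 * (max(values) - min(values)) / mean

# Hypothetical PTV mean doses (Gy) for the 1, 1.5, 2, 2.5 and 3 mm grids:
doses = [50.0, 50.2, 50.3, 50.5, 50.8]
print(round(max_diff_percent_of_mean(doses), 2))  # 1.59
```

A spread of 0.8 Gy around a ~50 Gy mean thus reads as a ~1.6% difference, illustrating why moderately large percentage spreads can still be clinically insignificant in absolute dose.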

  6. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias.
In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches. Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended.
In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
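The grid-based estimator at the heart of this comparison, D̂ = N̂/Â with the effective area enlarged by an MMDM boundary strip, can be sketched in a few lines. All numbers below are hypothetical; the real analyses used programs CAPTURE and DISTANCE, and refinements such as quarter-circle corner corrections are omitted.

```python
def grid_density_estimate(n_hat, grid_side_m, mmdm_m):
    """D-hat = N-hat / A-hat for a square trapping grid, with the
    effective sampling area enlarged by a boundary strip of width
    equal to the full mean maximum distance moved (MMDM).
    Returns animals per hectare. Plain square buffer for simplicity;
    CAPTURE-style analyses also round the corners."""
    side = grid_side_m + 2.0 * mmdm_m       # strip added on every side
    area_ha = (side * side) / 10000.0
    return n_hat / area_ha

# Hypothetical: 25 animals estimated on a 100 m grid, 20 m MMDM strip
print(round(grid_density_estimate(25, 100.0, 20.0), 2))  # 12.76
```

Note how sensitive D̂ is to the strip width: the MMDM buffer here nearly doubles the nominal 1-ha grid area, which is one reason the paper treats the MMDM-adjusted grid estimates with caution.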

  7. Study of LCL filter performance for inverter fed grid connected system

    NASA Astrophysics Data System (ADS)

    Thamizh Thentral, T. M.; Geetha, A.; Subramani, C.

    2018-04-01

The abundant use of power electronic converters in grid-connected applications introduces significant injected harmonics, so filtering plays an important role in the present scenario. Higher-order passive filters are mostly preferred in this application because of their reduced cost and size. This paper focuses on the design of an LCL filter for the reduction of injected harmonics. The LCL filter is chosen for its smaller inductor size and good attenuation of ripple components compared with other conventional filters. The work is simulated on the MATLAB platform, and the results support the objectives mentioned above. The simulation results are also verified against an implemented hardware model.
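The inductor-sizing trade-off motivating the LCL choice can be illustrated with one commonly quoted worst-case ripple rule for a plain L filter. The constant in the formula varies with the PWM scheme, and the 400 V / 10 kHz / 1 A numbers are hypothetical, so treat this strictly as an order-of-magnitude sketch rather than the paper's design procedure:

```python
def l_filter_inductance(v_dc, f_sw, ripple_pp):
    """One commonly used worst-case sizing rule for the inverter-side
    inductor of an L filter: L = Vdc / (8 * fsw * dI_pp).
    The factor 8 depends on the PWM scheme; indicative only."""
    return v_dc / (8.0 * f_sw * ripple_pp)

# Hypothetical 400 V DC link, 10 kHz switching, 1 A peak-to-peak ripple
L = l_filter_inductance(400.0, 10e3, 1.0)
print(round(L * 1000, 3))  # inductance in mH: 5.0
```

A 5 mH line inductor is bulky; splitting the inductance and adding a shunt capacitor (the LCL topology) achieves the same switching-ripple attenuation with much smaller magnetics, at the cost of the resonance issues the abstract mentions.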

  8. Grid generation methodology and CFD simulations in sliding vane compressors and expanders

    NASA Astrophysics Data System (ADS)

    Bianchi, Giuseppe; Rane, Sham; Kovacevic, Ahmed; Cipollone, Roberto; Murgia, Stefano; Contaldi, Giulio

    2017-08-01

The limiting factor for the employment of advanced 3D CFD tools in the analysis and design of rotary vane machines is the unavailability of methods for generation of computational grids suitable for fast and reliable numerical analysis. The paper addresses this challenge by presenting the development of an analytical grid generation method for vane machines that is based on user-defined nodal displacement. In particular, mesh boundaries are defined as parametric curves generated using trigonometrical modelling of the axial cross section of the machine, while the distribution of computational nodes is performed using algebraic algorithms with transfinite interpolation, post-orthogonalisation and smoothing. Algebraic control functions are introduced for the distribution of nodes on the rotor and casing boundaries in order to achieve good grid quality in terms of cell size and expansion. In this way, the moving and deforming fluid domain of the sliding vane machine is discretized, and the conservation of intrinsic quantities is ensured by maintaining the cell connectivity and structure. For validation of the generated grids, a mid-size air compressor and a small-scale expander for Organic Rankine Cycle applications have been investigated in this paper. Remarks on the implementation of the mesh motion algorithm, the stability and robustness experienced with the ANSYS CFX solver, as well as the obtained flow results are presented.
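The transfinite-interpolation step mentioned above fills the interior of a patch from its four boundary curves. The minimal 2D sketch below omits the paper's control functions, orthogonalisation and smoothing, and uses a unit square (where TFI reduces to a uniform Cartesian grid) purely as a hypothetical check case:

```python
import numpy as np

def transfinite_interpolation(bottom, top, left, right):
    """2D transfinite interpolation: build interior nodes from four
    boundary curves. bottom/top: (ni, 2) arrays; left/right: (nj, 2).
    The corner points of adjoining curves must coincide."""
    ni, nj = len(bottom), len(left)
    u = np.linspace(0.0, 1.0, ni)[:, None, None]   # (ni, 1, 1)
    v = np.linspace(0.0, 1.0, nj)[None, :, None]   # (1, nj, 1)
    b, t = bottom[:, None, :], top[:, None, :]
    l, r = left[None, :, :], right[None, :, :]
    # Boolean sum of the two linear lofts minus the bilinear corner term
    return ((1 - v) * b + v * t + (1 - u) * l + u * r
            - (1 - u) * (1 - v) * bottom[0] - u * v * top[-1]
            - u * (1 - v) * bottom[-1] - (1 - u) * v * top[0])

# Unit-square boundaries reproduce a uniform Cartesian grid
n = 5
s = np.linspace(0, 1, n)
bottom = np.stack([s, np.zeros(n)], axis=1)
top = np.stack([s, np.ones(n)], axis=1)
left = np.stack([np.zeros(n), s], axis=1)
right = np.stack([np.ones(n), s], axis=1)
g = transfinite_interpolation(bottom, top, left, right)
print(g[2, 3])  # interior node at (u, v) = (0.5, 0.75) -> [0.5  0.75]
```

With curved rotor and casing boundaries in place of the straight edges, the same boolean-sum formula produces a body-fitted structured grid whose node distribution follows the boundary parameterisation.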

  9. A Conceptual Approach to Assimilating Remote Sensing Data to Improve Soil Moisture Profile Estimates in a Surface Flux/Hydrology Model. 3; Disaggregation

    NASA Technical Reports Server (NTRS)

    Caulfield, John; Crosson, William L.; Inguva, Ramarao; Laymon, Charles A.; Schamschula, Marius

    1998-01-01

This is a follow-up to the preceding presentation by Crosson and Schamschula. The grid size of remote microwave measurements is much coarser than that of the hydrological model's computational grids. To validate the hydrological models against measurements, we propose mechanisms to disaggregate the microwave measurements, allowing comparison with outputs from the hydrological models. Weighted interpolation and Bayesian methods are proposed to facilitate the comparison. While remote measurements occur at a large scale, they reflect underlying small-scale features. Continuing estimates of the small-scale features can be obtained by correcting a simple 0th-order estimate, updating each small-scale model with each large-scale measurement using a straightforward method based on Kalman filtering.
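A 0th-order disaggregation of the kind referred to above can be sketched as a mean-preserving rescaling: the fine-grid model field supplies the spatial pattern, and the coarse measurement constrains the footprint mean. This is a generic illustration, not the authors' weighted-interpolation or Kalman scheme, and the soil-moisture numbers are hypothetical:

```python
import numpy as np

def disaggregate(coarse_value, fine_prior):
    """0th-order disaggregation sketch: rescale a fine-grid prior field
    (e.g. modelled soil moisture) so that its mean matches a single
    coarse-scale measurement covering the same footprint."""
    prior = np.asarray(fine_prior, dtype=float)
    return prior * (coarse_value / prior.mean())

# Hypothetical coarse microwave retrieval of 0.20 over a 2x2 footprint
fine = disaggregate(0.20, [[0.10, 0.20], [0.30, 0.40]])
print(fine)              # pattern preserved, values rescaled
print(fine.mean())       # footprint mean matches the measurement (~0.2)
```

A Kalman-filter version would additionally weight the correction by the relative uncertainties of the model field and the measurement instead of forcing an exact match.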

  10. Effect of particle size distribution of maize and soybean meal on the precaecal amino acid digestibility in broiler chickens.

    PubMed

    Siegert, W; Ganzer, C; Kluth, H; Rodehutscord, M

    2018-02-01

1. Herein, it was investigated whether different particle size distributions of feed ingredients, achieved by grinding through a 2- or 3-mm grid, would have an effect on precaecal (pc) amino acid (AA) digestibility. Maize and soybean meal were used as the test ingredients. 2. Maize and soybean meal were ground with grid sizes of 2 or 3 mm. Nine diets were prepared. The basal diet contained 500 g/kg of maize starch. The other experimental diets contained the maize or soybean meal samples at concentrations of 250 and 500, and 150 and 300 g/kg, respectively, instead of maize starch. Each diet was tested using 6 replicate groups of 10 birds each. The regression approach was applied to calculate the pc AA digestibility of the test ingredients. 3. The reduction of the grid size from 3 to 2 mm reduced the average particle size of both maize and soybean meal, mainly by reducing the proportion of coarse particles. Reducing the grid size significantly (P < 0.050) increased the pc digestibility of all AA in the soybean meal. In maize, reducing the grid size decreased the pc digestibility of all AA numerically, but not significantly (P > 0.050). The mean numerical differences in pc AA digestibility between the grid sizes were 0.045 and 0.055 in maize and soybean meal, respectively. 4. Future studies investigating pc AA digestibility should specify the particle size distribution and should investigate test ingredients ground as in practical applications.
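In the regression approach mentioned above, digestibility is obtained as the slope of digested AA against AA intake across inclusion levels, so that the basal-diet contribution falls into the intercept. A minimal sketch with hypothetical lysine numbers (not data from the study):

```python
import numpy as np

def regression_digestibility(aa_intake, pc_digested):
    """Regression approach sketch: the precaecal digestibility of an
    amino acid in a test ingredient is the slope of digested AA versus
    AA intake across dietary inclusion levels (ordinary least squares)."""
    slope, intercept = np.polyfit(aa_intake, pc_digested, 1)
    return slope

# Hypothetical lysine intakes (g/d) and digested amounts at several
# inclusion levels; a slope of 0.85 reads as 85% pc digestibility.
intake = np.array([0.0, 1.0, 2.0, 4.0])
digested = 0.85 * intake + 0.02
print(round(regression_digestibility(intake, digested), 2))  # 0.85
```

On this scale, the grid-size differences reported above (0.045 and 0.055) correspond to shifts of roughly 4.5 and 5.5 percentage points in digestibility.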

  11. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    PubMed

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
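One of the evaluation metrics named above, the True Skill Statistic, is computed directly from a presence/absence confusion matrix as sensitivity plus specificity minus one. The confusion-matrix counts below are hypothetical:

```python
def true_skill_statistic(tp, fp, fn, tn):
    """TSS = sensitivity + specificity - 1, a threshold-dependent SDM
    evaluation metric; ranges from -1 to +1, with 0 meaning no skill."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity + specificity - 1.0

# Hypothetical confusion matrix from a presence/absence prediction
print(round(true_skill_statistic(tp=40, fp=10, fn=10, tn=40), 2))  # 0.6
```

Unlike raw accuracy, TSS is unaffected by the prevalence of presences in the evaluation data, which is why it is popular alongside AUC for comparing SDM algorithms.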

  12. Grid-connected photovoltaic (PV) systems with batteries storage as solution to electrical grid outages in Burkina Faso

    NASA Astrophysics Data System (ADS)

    Abdoulaye, D.; Koalaga, Z.; Zougmore, F.

    2012-02-01

This paper deals with a key solution to the power outage problem experienced by many African countries: grid-connected photovoltaic (PV) systems with battery storage. African grids are characterized by an insufficient power supply and frequent interruptions. As a result, users of classical grid-connected photovoltaic systems are unable to benefit from their installations even when the sun is shining. In this study, we propose using a grid-connected photovoltaic system with battery storage as a solution to these problems. This photovoltaic system injects surplus electricity production into the grid and can also deliver electricity as a stand-alone system, with all necessary safeguards. To achieve our study objectives, we first surveyed the actual situation of one African electrical grid, that of Burkina Faso (SONABEL: National Electricity Company of Burkina). Second, as a case study, we undertook the sizing, modeling, and simulation of a grid-connected PV system with battery storage for the LAME laboratory at the University of Ouagadougou. The simulation shows that the proposed grid-connected system allows users to benefit from their photovoltaic installation at any time, even if the public electrical grid fails during the day or at night.

  13. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. 
The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
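A back-of-envelope sketch of where the multiresolution speedup comes from: the number of unknowns shrinks sharply once only a region of interest keeps the fine voxels. The volume and ROI dimensions below are illustrative assumptions, not the paper's actual extremity CBCT geometry, and the voxel-count ratio is only a crude proxy for runtime, since projection and backprojection costs also depend on detector binning.

```python
# Voxel-count reduction from a fine-ROI + coarse-remainder parameterization.
# All dimensions here are hypothetical, for illustration only.

def voxel_count(nx, ny, nz):
    return nx * ny * nz

# Hypothetical volume: 1000^3 voxels if fine voxels are used everywhere.
full_fine = voxel_count(1000, 1000, 1000)

# Multiresolution: a 300^3 fine-grid ROI, with the remainder downsampled
# by a factor k in each dimension.
def multires_count(full, roi, k):
    return roi + (full - roi) // k**3

n4 = multires_count(full_fine, voxel_count(300, 300, 300), 4)
ratio = full_fine / n4   # crude upper bound on the achievable speedup
```

Even this rough count shows why a 4× downsampling factor already leaves room for the reported greater-than-fivefold acceleration.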

  14. Aerodynamic analysis of three advanced configurations using the TranAir full-potential code

    NASA Technical Reports Server (NTRS)

    Madson, M. D.; Carmichael, R. L.; Mendoza, J. P.

    1989-01-01

    Computational results are presented for three advanced configurations: the F-16A with wing tip missiles and under wing fuel tanks, the Oblique Wing Research Aircraft, and an Advanced Turboprop research model. These results were generated by the latest version of the TranAir full potential code, which solves for transonic flow over complex configurations. TranAir embeds a surface paneled geometry definition in a uniform rectangular flow field grid, thus avoiding the use of surface conforming grids, and decoupling the grid generation process from the definition of the configuration. The new version of the code locally refines the uniform grid near the surface of the geometry, based on local panel size and/or user input. This method distributes the flow field grid points much more efficiently than the previous version of the code, which solved for a grid that was uniform everywhere in the flow field. TranAir results are presented for the three configurations and are compared with wind tunnel data.

  15. Long Range Debye-Hückel Correction for Computation of Grid-based Electrostatic Forces Between Biomacromolecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mereghetti, Paolo; Martinez, M.; Wade, Rebecca C.

Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.
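The Debye-Hückel correction replaces the potential beyond the edge of the precomputed grid with the standard screened Coulomb form. The sketch below assumes that structure; units and prefactors are deliberately simplified, and `r_edge` and `grid_lookup` are hypothetical stand-ins, not SDA's actual interface.

```python
import math

# Screened Coulomb (Debye-Hueckel) tail for distances past the grid edge.
# Prefactors are simplified for illustration (no physical unit system).

def debye_huckel(q, r, eps_r=78.5, kappa=1.0):
    """Screened Coulomb potential ~ q * exp(-kappa * r) / (eps_r * r)."""
    return q * math.exp(-kappa * r) / (eps_r * r)

def potential(q, r, r_edge, grid_lookup, kappa=1.0):
    """Grid value inside the grid support, analytic DH tail outside."""
    if r <= r_edge:
        return grid_lookup(r)
    return debye_huckel(q, r, kappa=kappa)
```

The screening constant kappa grows with ionic strength, so the long-range tail decays faster in higher-salt solvent, which is what makes a finite grid plus analytic tail a good approximation.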

  16. Deep Part Load Flow Analysis in a Francis Model turbine by means of two-phase unsteady flow simulations

    NASA Astrophysics Data System (ADS)

    Conrad, Philipp; Weber, Wilhelm; Jung, Alexander

    2017-04-01

Hydropower plants are indispensable for stabilizing the grid by reacting quickly to changes in energy demand. However, an extension of the operating range towards high and deep part load conditions without fatigue of the hydraulic components is desirable to increase their flexibility. In this paper a model-scale Francis turbine at low discharge operating conditions (Q/QBEP = 0.27) is analyzed by means of computational fluid dynamics (CFD). Unsteady two-phase simulations for two Thoma-number conditions are conducted. Stochastic pressure oscillations, observed on the test rig at low discharge, require sophisticated numerical models together with small time steps, large grid sizes and long simulation times to capture these fluctuations. In this paper the BSL-EARSM model (Explicit Algebraic Reynolds Stress) was applied as a compromise between scale-resolving and two-equation turbulence models with respect to computational effort and accuracy. Simulation results are compared to pressure measurements, showing reasonable agreement in frequency spectra and amplitude. Inner blade vortices were predicted successfully in shape and size. Surface streamlines in the blade-to-blade view are presented, giving insight into the formation of the inner blade vortices. The acquired time-dependent pressure fields can be used in the future for quasi-static structural analysis (FEA) for fatigue calculations.

  17. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    NASA Astrophysics Data System (ADS)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
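The core of ADM is the reconstruction of an approximately unfiltered field by repeated filtering (the van Cittert series, u* = sum over k of (I - G)^k applied to the filtered field). The 1D sketch below illustrates that idea only; the box filter, test signal, and deconvolution order are illustrative choices, not the filters or settings used in the study.

```python
import numpy as np

# 1D van Cittert approximate deconvolution with a discrete box filter.

def box_filter(u, width=15):
    return np.convolve(u, np.ones(width) / width, mode="same")

def approx_deconvolve(u_bar, order=5, width=15):
    """u* = sum_{k=0..order} (I - G)^k u_bar, an approximate inverse filter."""
    u_star = np.zeros_like(u_bar)
    term = u_bar.copy()
    for _ in range(order + 1):
        u_star += term
        term = term - box_filter(term, width)   # apply (I - G) repeatedly
    return u_star

x = np.linspace(0.0, 2.0 * np.pi, 256)
u = np.sin(3.0 * x)                 # "unfiltered" reference field
u_bar = box_filter(u)               # filtered (resolved) field
u_star = approx_deconvolve(u_bar)   # deconvolved estimate

# Compare away from the boundaries, where the non-periodic filtering
# contaminates the result.
interior = slice(64, -64)
err_bar = np.abs(u - u_bar)[interior].mean()
err_star = np.abs(u - u_star)[interior].mean()
```

The deconvolved field recovers most of the content attenuated by the filter (err_star falls well below err_bar), which is why the unclosed terms of the filtered equations can then be evaluated directly from u*.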

  18. Multicore runup simulation by under water avalanche using two-layer 1D shallow water equations

    NASA Astrophysics Data System (ADS)

    Bagustara, B. A. R. H.; Simanjuntak, C. A.; Gunawan, P. H.

    2018-03-01

Adding layers to the shallow water equations (SWE) produces a more dynamic model than the one-layer SWE. The two-layer 1D SWE model assigns a different density to each layer, making the model more dynamic and natural; in the ocean, for instance, the density of water decreases from the bottom to the surface. Here, the source-centered hydrostatic reconstruction (SCHR) numerical scheme is used to approximate the solution of the two-layer 1D SWE model, since this scheme has been proven to satisfy the mathematical properties required for the shallow water equations. Additionally, in this paper the SCHR algorithm is adapted to a multicore architecture. A simulation of runup generated by an underwater avalanche is elaborated here. The results show that the runup depends on the ratio of the densities of the layers. Moreover, using Nx = 8000 grid points, the speedup and efficiency with 2 threads are 1.74779 and 87.3896%, respectively, while with 4 threads they are 2.93132 and 73.2830%, respectively, for the same number of grid points Nx = 8000.
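The quoted speedup and efficiency figures are related by the standard definitions speedup = T1/Tp and efficiency = speedup/p; this sketch simply checks that arithmetic against the numbers in the abstract.

```python
# Parallel efficiency from speedup, assuming the standard definitions
# speedup = T1 / Tp and efficiency = speedup / threads.

def parallel_efficiency(speedup, threads):
    """Parallel efficiency in percent."""
    return 100.0 * speedup / threads

eff2 = parallel_efficiency(1.74779, 2)   # ~87.39 %, matching the abstract
eff4 = parallel_efficiency(2.93132, 4)   # ~73.28 %, matching the abstract
```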

  19. The impact of mesoscale convective systems on global precipitation: A modeling study

    NASA Astrophysics Data System (ADS)

    Tao, Wei-Kuo

    2017-04-01

The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. Typical MCSs have horizontal scales of a few hundred kilometers (km); therefore, a large domain and high resolution are required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) with 32 CRM grid points and 4 km grid spacing also might not have sufficient resolution and domain size for realistically simulating MCSs. In this study, the impact of MCSs on precipitation processes is examined by conducting numerical model simulations using the Goddard Cumulus Ensemble model (GCE) and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolution (1 or 2 km) compared to simulations with fewer grid points (i.e., 32 and 64) and lower resolution (4 km). The modeling results also show that the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are either weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures (SSTs) was conducted and resulted in both reduced surface rainfall and evaporation.

  20. An Intelligent Approach to Strengthening of the Rural Electrical Power Supply Using Renewable Energy Resources

    NASA Astrophysics Data System (ADS)

    Robert, F. C.; Sisodia, G. S.; Gopalan, S.

    2017-08-01

The healthy growth of an economy lies in the balance between rural and urban development. Several developing countries have achieved successful growth of urban areas, yet rural infrastructure has been neglected until recently. Rural electrical grids are weak, with heavy losses and low capacity. Renewable energy represents an efficient way to generate electricity locally. However, renewable energy generation may be limited by low grid capacity. Current solutions focus on grid reinforcement only. This article presents a model for improving renewable energy integration in rural grids through the intelligent combination of three strategies: 1) grid reinforcement, 2) use of storage and 3) renewable energy curtailment. Such an approach provides a solution to integrate the maximum of renewable energy generation on low capacity grids while minimising project cost and increasing the percentage of utilisation of assets. The test cases show that a grid connection agreement and a main inverter sized at 60 kW (resp. 80 kW) can accommodate a 100 kWp solar park (resp. a 100 kW wind turbine) with minimal storage.

  1. FitEM2EM—Tools for Low Resolution Study of Macromolecular Assembly and Dynamics

    PubMed Central

    Frankenstein, Ziv; Sperling, Joseph; Sperling, Ruth; Eisenstein, Miriam

    2008-01-01

Studies of the structure and dynamics of macromolecular assemblies often involve comparison of low resolution models obtained using different techniques such as electron microscopy or atomic force microscopy. We present new computational tools for comparing (matching) and docking of low resolution structures, based on shape complementarity. The matched or docked objects are represented by three-dimensional grids where the value of each grid point depends on its position with regard to the interior, surface or exterior of the object. The grids are correlated using fast Fourier transforms, producing either matches of related objects or docking models depending on the details of the grid representations. The procedures incorporate thickening and smoothing of the surfaces of the objects, which effectively compensates for differences in the resolution of the matched/docked objects, circumventing the need for resolution modification. The presented matching tool FitEM2EMin successfully fitted electron microscopy structures obtained at different resolutions, different conformers of the same structure and partial structures, ranking correct matches at the top in every case. The differences between the grid representations of the matched objects can be used to study conformation differences or to characterize the size and shape of substructures. The presented low-to-low docking tool FitEM2EMout ranked the expected models at the top. PMID:18974836
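The FFT-based correlation step behind such grid matching can be sketched in a few lines: correlations over all relative translations come from one forward/inverse FFT pair. This is only the translational scan; FitEM2EM's interior/surface/exterior grid weighting and rotational search are not reproduced here.

```python
import numpy as np

# Find the translation of grid b relative to grid a that maximizes their
# circular cross-correlation, computed via FFTs.

def best_translation(a, b):
    corr = np.fft.ifftn(np.fft.fftn(b) * np.conj(np.fft.fftn(a))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

rng = np.random.default_rng(0)
a = rng.random((16, 16, 16))
b = np.roll(a, (3, 5, 2), axis=(0, 1, 2))   # the same "object", translated

shift = best_translation(a, b)   # recovers the applied shift (3, 5, 2)
```

In the actual tools the grids encode interior/surface/exterior weights rather than raw densities, so the correlation peak scores shape complementarity instead of plain overlap.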

  2. Climatological Impact of Atmospheric River Based on NARCCAP and DRI-RCM Datasets

    NASA Astrophysics Data System (ADS)

    Mejia, J. F.; Perryman, N. M.

    2012-12-01

    This study evaluates spatial responses of extreme precipitation environments, typically associated with Atmospheric River events, using Regional Climate Model (RCM) output from NARCCAP dataset (50km grid size) and the Desert Research Institute-RCM simulations (36 and 12 km grid size). For this study, a pattern-detection algorithm was developed to characterize Atmospheric Rivers (ARs)-like features from climate models. Topological analysis of the enhanced elongated moisture flux (500-300hPa; daily means) cores is used to objectively characterize such AR features in two distinct groups: (i) zonal, north Pacific ARs, and (ii) subtropical ARs, also known as "Pineapple Express" events. We computed the climatological responses of the different RCMs upon these two AR groups, from which intricate differences among RCMs stand out. This study presents these climatological responses from historical and scenario driven simulations, as well as implications for precipitation extreme-value analyses.

  3. Massive parallel 3D PIC simulation of negative ion extraction

    NASA Astrophysics Data System (ADS)

    Revel, Adrien; Mochalskyy, Serhiy; Montellano, Ivar Mauricio; Wünderlich, Dirk; Fantz, Ursel; Minea, Tiberiu

    2017-09-01

The 3D PIC-MCC code ONIX is dedicated to modeling Negative hydrogen/deuterium Ion (NI) extraction and the co-extraction of electrons from radio-frequency driven, low pressure plasma sources. It provides valuable insight into the complex phenomena involved in the extraction process. In previous calculations, a mesh size larger than the Debye length was used, implying numerical electron heating. Important steps have been achieved in terms of computational performance and parallelization efficiency, allowing successful massively parallel calculations (4096 cores), which are imperative to resolve the Debye length. In addition, the numerical algorithms have been improved in terms of grid treatment, i.e., the electric field near the complex geometry boundaries (plasma grid) is calculated more accurately. The revised model preserves the full 3D treatment, but can take advantage of a highly refined mesh. ONIX was used to investigate the role of the mesh size, the re-injection scheme for lost particles (extracted or wall absorbed), and the electron thermalization process on the calculated extracted current and plasma characteristics. It is demonstrated that all numerical schemes give the same NI current distribution for extracted ions. Concerning the electrons, the pair-injection technique is found to be well adapted to simulating the sheath in front of the plasma grid.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Som, Sibendu; Wang, Zihan; Pei, Yuanjiang

A state-of-the-art spray modeling methodology, recently presented by Senecal et al. [ , , ], is applied to Large Eddy Simulations (LES) of vaporizing gasoline sprays. Simulations of non-combusting Spray G (gasoline fuel) from the Engine Combustion Network are performed. Adaptive mesh refinement (AMR) with cell sizes from 0.09 mm to 0.5 mm is utilized to further demonstrate grid convergence of the dynamic structure LES model for the gasoline sprays. Grid settings are recommended to optimize the accuracy/runtime tradeoff for LES-based spray simulations at different injection pressure conditions typically encountered in gasoline direct injection (GDI) applications. The influence of LES sub-grid scale (SGS) models is explored by comparing the results from dynamic structure and Smagorinsky based models against simulations without any SGS model. Twenty different realizations are simulated by changing the random number seed used in the spray sub-models. It is shown that for global quantities such as spray penetration, comparing a single LES simulation to experimental data is reasonable. Through a detailed analysis using the relevance index (RI) criterion, recommendations are made regarding the minimum number of LES realizations required for accurate prediction of the gasoline sprays.
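The relevance index used to compare individual LES realizations against an ensemble is commonly defined as the normalized inner product (cosine similarity) of two fields; the sketch below assumes that common definition rather than reproducing the exact formula from the paper.

```python
import numpy as np

# Relevance index (assumed definition): cosine similarity of two fields.
# RI = 1 for identical fields, so values near 1 indicate that a single
# realization is representative of the ensemble average.

def relevance_index(a, b):
    a, b = np.ravel(a), np.ravel(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```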

  5. Optimal Sizing of a Solar-Plus-Storage System for Utility Bill Savings and Resiliency Benefits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkins, Travis; Anderson, Kate; Cutler, Dylan

Solar-plus-storage systems can achieve significant utility savings in behind-the-meter deployments in buildings, campuses, or industrial sites. Common applications include demand charge reduction, energy arbitrage, time-shifting of excess photovoltaic (PV) production, and selling ancillary services to the utility grid. These systems can also offer some energy resiliency during grid outages. It is often difficult to quantify the amount of resiliency that these systems can provide, however, and this benefit is often undervalued or omitted during the design process. We propose a method for estimating the resiliency that a solar-plus-storage system can provide at a given location. We then present an optimization model that can optimally size the system components to minimize the lifecycle cost of electricity to the site, including the costs incurred during grid outages. The results show that including the value of resiliency during the feasibility stage can result in larger systems and increased resiliency.

  6. Optimal Sizing of a Solar-Plus-Storage System For Utility Bill Savings and Resiliency Benefits: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpkins, Travis; Anderson, Kate; Cutler, Dylan

Solar-plus-storage systems can achieve significant utility savings in behind-the-meter deployments in buildings, campuses, or industrial sites. Common applications include demand charge reduction, energy arbitrage, time-shifting of excess photovoltaic (PV) production, and selling ancillary services to the utility grid. These systems can also offer some energy resiliency during grid outages. It is often difficult to quantify the amount of resiliency that these systems can provide, however, and this benefit is often undervalued or omitted during the design process. We propose a method for estimating the resiliency that a solar-plus-storage system can provide at a given location. We then present an optimization model that can optimally size the system components to minimize the lifecycle cost of electricity to the site, including the costs incurred during grid outages. The results show that including the value of resiliency during the feasibility stage can result in larger systems and increased resiliency.

  7. Site-specific strong ground motion prediction using 2.5-D modelling

    NASA Astrophysics Data System (ADS)

    Narayan, J. P.

    2001-08-01

    An algorithm was developed using the 2.5-D elastodynamic wave equation, based on the displacement-stress relation. One of the most significant advantages of the 2.5-D simulation is that the 3-D radiation pattern can be generated using double-couple point shear-dislocation sources in the 2-D numerical grid. A parsimonious staggered grid scheme was adopted instead of the standard staggered grid scheme, since this is the only scheme suitable for computing the dislocation. This new 2.5-D numerical modelling avoids the extensive computational cost of 3-D modelling. The significance of this exercise is that it makes it possible to simulate the strong ground motion (SGM), taking into account the energy released, 3-D radiation pattern, path effects and local site conditions at any location around the epicentre. The slowness vector (py) was used in the supersonic region for each layer, so that all the components of the inertia coefficient are positive. The double-couple point shear-dislocation source was implemented in the numerical grid using the moment tensor components as the body-force couples. The moment per unit volume was used in both the 3-D and 2.5-D modelling. A good agreement in the 3-D and 2.5-D responses for different grid sizes was obtained when the moment per unit volume was further reduced by a factor equal to the finite-difference grid size in the case of the 2.5-D modelling. The components of the radiation pattern were computed in the xz-plane using 3-D and 2.5-D algorithms for various focal mechanisms, and the results were in good agreement. A comparative study of the amplitude behaviour of the 3-D and 2.5-D wavefronts in a layered medium reveals the spatial and temporal damped nature of the 2.5-D elastodynamic wave equation. 
3-D and 2.5-D simulated responses at a site using a different strike direction reveal that strong ground motion (SGM) can be predicted just by rotating the strike of the fault counter-clockwise by the same amount as the azimuth of the site with respect to the epicentre. This adjustment is necessary since the response is computed keeping the epicentre, focus and the desired site in the same xz-plane, with the x-axis pointing in the north direction.

  8. Grid cells on steeply sloping terrain: evidence for planar rather than volumetric encoding

    PubMed Central

    Hayman, Robin M. A.; Casali, Giulio; Wilson, Jonathan J.; Jeffery, Kate J.

    2015-01-01

Neural encoding of navigable space involves a network of structures centered on the hippocampus, whose neurons – place cells – encode current location. Input to the place cells includes afferents from the entorhinal cortex, which contains grid cells. These are neurons expressing spatially localized activity patches, or firing fields, that are evenly spaced across the floor in a hexagonal close-packed array called a grid. It is thought that grids function to enable the calculation of distances. The question arises as to whether this odometry process operates in three dimensions, and so we queried whether grids permeate three-dimensional (3D) space – that is, form a lattice – or whether they simply follow the environment surface. If grids form a 3D lattice then this lattice would ordinarily be aligned horizontally (to explain the usual hexagonal pattern observed). A tilted floor would transect several layers of this putative lattice, resulting in interruption of the hexagonal pattern. We model this prediction with simulated grid lattices, and show that the firing of a grid cell on a 40°-tilted surface should cover proportionally less of the surface, with smaller field size, fewer fields, and reduced hexagonal symmetry. However, recording of real grid cells as animals foraged on a 40°-tilted surface found that firing of grid cells was almost indistinguishable, in pattern or rate, from that on the horizontal surface, with, if anything, increased coverage and field number, and preserved field size. It thus appears unlikely that the sloping surface transected a lattice. However, grid cells on the slope displayed slightly degraded firing patterns, with reduced coherence and slightly reduced symmetry. 
These findings collectively suggest that the grid cell component of the metric representation of space is not fixed in absolute 3D space but is influenced both by the surface the animal is on and by the relationship of this surface to the horizontal, supporting the hypothesis that the neural map of space is “multi-planar” rather than fully volumetric. PMID:26236245

  9. Atomisation and droplet formation mechanisms in a model two-phase mixing layer

    NASA Astrophysics Data System (ADS)

    Zaleski, Stephane; Ling, Yue; Fuster, Daniel; Tryggvason, Gretar

    2017-11-01

We study atomization in a turbulent two-phase mixing layer inspired by the Grenoble air-water experiments. A planar gas jet of large velocity is emitted on top of a planar liquid jet of smaller velocity. The density and momentum ratios are both set to 20 in the numerical simulation to ease the computations. We use a Volume-Of-Fluid method with good parallelisation properties, implemented in our code http://parissimulator.sf.net. Our simulations show two distinct droplet formation mechanisms: one in which thin liquid sheets are punctured to form rapidly expanding holes, and another in which ligaments of irregular shape form and break up in a manner similar, but not identical, to jets undergoing Rayleigh-Plateau-Savart instability. Distributions of particle sizes are extracted for a sequence of ever more refined grids, the largest containing approximately eight billion points. Although their accuracy is limited at small sizes by the grid resolution and at large sizes by statistical effects, the distributions overlap in the central region. The observed distributions are much closer to log-normal distributions than to gamma distributions, as is also the case in experiments.

  10. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  11. Modeling CCN effects on tropical convection: A statistical perspective

    NASA Astrophysics Data System (ADS)

    Carrio, G. G.; Cotton, W. R.; Massie, S. T.

    2012-12-01

This modeling study examines the response of tropical convection to the enhancement of CCN concentrations from a statistical perspective. The sensitivity runs were performed using RAMS version 6.0, covering almost the entire Amazonian Aerosol Characterization Experiment period (AMAZE, wet season of 2008). The main focus of the analysis was the indirect aerosol effects on the probability density functions (PDFs) of various cloud properties. RAMS was configured with four two-way interactive nested grids with 42 vertical levels and horizontal grid spacings of 150, 37.5, 7.5, and 1.5 km. Grids 2 and 3 were used to simulate the synoptic and mesoscale environments, while grid 4 was used to resolve deep convection. Comparisons were made using the finest grid, with a domain size of 300 × 300 km, approximately centered on the city of Manaus (3.1S, 60.01W). The vertical grid was stretched, with 75 m spacing at the finest levels to provide better resolution within the first 1.5 km, and the model top extended to approximately 22 km above ground level. RAMS was initialized on February 10, 2008 (00:00 UTC), the length of the simulations was 32 days, and GFS data were used for initialization and nudging of the coarser-grid boundaries. The control run considered a CCN concentration of 300 cm-3, while several other simulations considered an influx of higher CCN concentrations (up to 1300 cm-3). The latter concentration was observed near the end of the AMAZE project period. Both direct and indirect effects of these CCN particles were considered. Model output data (finest grid) every 15 min were used to compute the PDFs for each model level. When increasing aerosol concentrations, significant impacts were simulated for the PDFs of the water contents of various hydrometeors, vertical motions, area with precipitation, and latent heat release, among other quantities. 
In most cases, they exhibited a peculiar non-monotonic response similar to that seen in two of our previous studies (for isolated cloud systems). It is well known that a reduction in cloud droplet sizes reduces coalescence, increases the droplets' probability of reaching super-cooled levels, and intensifies convective cells through the additional release of latent heat of freezing. However, indirect aerosol effects tend to revert when aerosol concentrations are greatly enhanced, owing to the reduced riming efficiency of ice particles. Some quantities show a different response; for instance, for the water content associated with small ice crystals, large contents are always more likely at high levels when considering air masses more polluted in terms of CCN. Conversely, the PDFs of the water contents of larger ice crystals at high altitudes exhibit the aforementioned non-monotonic behavior.

  12. Optimal Sizing Tool for Battery Storage in Grid Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-09-24

The battery storage sizing tool developed at Pacific Northwest National Laboratory can be used to evaluate economic performance and determine the optimal size of battery storage in different use cases, considering multiple power system applications. The considered use cases include i) utility-owned battery storage, and ii) battery storage behind the customer meter. The power system applications of energy storage include energy arbitrage, balancing services, T&D deferral, outage mitigation, demand charge reduction, etc. Most existing solutions consider only one or two grid services simultaneously, such as balancing service and energy arbitrage. ES-Select, developed by Sandia and KEMA, is able to consider multiple grid services, but it stacks the grid services based on priorities instead of co-optimizing them. This tool is the first to provide a co-optimization for systematic and local grid services.

  13. Abruptness of Cascade Failures in Power Grids

    PubMed Central

    Pahwa, Sakshi; Scoglio, Caterina; Scala, Antonio

    2014-01-01

Electric power systems are one of the most important critical infrastructures. In recent years, they have been exposed to extreme stress due to increasing demand, the introduction of distributed renewable energy sources, and the development of extensive interconnections. We investigate the phenomenon of abrupt breakdown of an electric power system under two scenarios: load growth (mimicking the ever-increasing customer demand) and power fluctuations (mimicking the effects of renewable sources). Our results on real, realistic and synthetic networks indicate that increasing the system size causes breakdowns to become more abrupt; in fact, mapping the system to a solvable statistical-physics model indicates the occurrence of a first-order transition in the large-size limit. Such an increase in the systemic risk of failures (blackouts) with network size is an effect that should be considered in current projects aiming to integrate national power grids into "super-grids". PMID:24424239

  14. Spectral nudging to eliminate the effects of domain position and geometry in regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Miguez-Macho, Gonzalo; Stenchikov, Georgiy L.; Robock, Alan

    2004-07-01

It is well known that regional climate simulations are sensitive to the size and position of the domain chosen for calculations. Here we study the physical mechanisms of this sensitivity. We conducted simulations with the Regional Atmospheric Modeling System (RAMS) for June 2000 over North America at 50 km horizontal resolution using a 7500 km × 5400 km grid and NCEP/NCAR reanalysis as boundary conditions. The position of the domain was displaced in several directions, always maintaining the U.S. in the interior, out of the buffer zone along the lateral boundaries. Circulation biases developed a large scale structure, organized by the Rocky Mountains, resulting from a systematic shifting of the synoptic wave trains that crossed the domain. The distortion of the large-scale circulation was produced by interaction of the modeled flow with the lateral boundaries of the nested domain and varied when the position of the grid was altered. This changed the large-scale environment among the different simulations and translated into diverse conditions for the development of the mesoscale processes that produce most of the precipitation for the Great Plains in the summer season. As a consequence, precipitation results varied, sometimes greatly, among the experiments with the different grid positions. To eliminate the dependence of results on the position of the domain, we used spectral nudging of waves longer than 2500 km above the boundary layer. Moisture was not nudged at any level. This constrained the synoptic scales to follow the reanalysis while allowing the model to develop the small-scale dynamics responsible for the rainfall. Nudging of the large scales successfully eliminated the variation of precipitation results when the grid was moved. We suggest that this technique is necessary for all downscaling studies with regional models with domain sizes of a few thousand kilometers and larger embedded in global models.
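The spectral nudging described above can be sketched in one dimension: only waves longer than the cutoff (2500 km in the study) are relaxed toward the driving analysis, while shorter scales evolve freely. Grid spacing, domain length, and the nudging coefficient below are illustrative assumptions, not RAMS settings.

```python
import numpy as np

# 1D spectral nudging: relax only the long-wave Fourier components of a
# model field toward a target (driving-analysis) field.

def spectral_nudge(field, target, dx_km, cutoff_km=2500.0, coeff=0.1):
    k = np.fft.fftfreq(field.size, d=dx_km)      # spatial frequency, cycles/km
    fhat = np.fft.fft(field)
    that = np.fft.fft(target)
    long_waves = np.abs(k) < 1.0 / cutoff_km     # wavelengths > cutoff
    fhat[long_waves] += coeff * (that[long_waves] - fhat[long_waves])
    return np.fft.ifft(fhat).real

# Example: a 6400 km domain at 50 km spacing. The 6400 km wave is nudged
# toward the (zero) target; the 640 km wave is left untouched.
n, dx = 128, 50.0
x = np.arange(n) * dx
field = np.sin(2 * np.pi * x / 6400.0) + 0.5 * np.sin(2 * np.pi * x / 640.0)
target = np.zeros(n)
out = spectral_nudge(field, target, dx)
```

Applied every time step, this drags the synoptic scales toward the analysis while leaving the model's mesoscale dynamics, and hence the simulated rainfall, free to develop.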

  15. Revision of the documentation for a model for calculating effects of liquid waste disposal in deep saline aquifers

    USGS Publications Warehouse

    INTERA Environmental Consultants, Inc.

    1979-01-01

    The major limitation of the model arises from the use of the second-order-correct (central-difference) finite-difference approximation in space. To avoid numerical oscillations in the solution, the user must restrict the grid-block and time-step sizes according to the magnitude of the dispersivity.
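
    The restrictions mentioned above are conventionally expressed through the grid Péclet and Courant numbers. A hedged sketch of such a check follows; the thresholds Pe ≤ 2 and Cr ≤ 1 are the textbook criteria for central differencing of advection-dispersion, not necessarily the exact limits enforced by this model.

```python
def oscillation_free(velocity, dispersivity, dx, dt):
    """Textbook criteria for oscillation-free central differencing of
    advection-dispersion: grid Peclet number Pe = dx/dispersivity <= 2
    and Courant number Cr = velocity*dt/dx <= 1."""
    peclet = dx / dispersivity
    courant = velocity * dt / dx
    return peclet <= 2.0, courant <= 1.0
```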

  16. On Improving 4-km Mesoscale Model Simulations

    NASA Astrophysics Data System (ADS)

    Deng, Aijun; Stauffer, David R.

    2006-03-01

    A previous study showed that use of analysis-nudging four-dimensional data assimilation (FDDA) and improved physics in the fifth-generation Pennsylvania State University-National Center for Atmospheric Research Mesoscale Model (MM5) produced the best overall performance on a 12-km-domain simulation, based on the 18-19 September 1983 Cross-Appalachian Tracer Experiment (CAPTEX) case. However, reducing the simulated grid length to 4 km had detrimental effects. The primary cause was likely the explicit representation of convection accompanying a cold-frontal system. Because no convective parameterization scheme (CPS) was used, the convective updrafts were forced on coarser-than-realistic scales, and the rainfall and the atmospheric response to the convection were too strong. The evaporative cooling and downdrafts were too vigorous, causing widespread disruption of the low-level winds and spurious advection of the simulated tracer. In this study, a series of experiments was designed to address this general problem involving 4-km model precipitation and gridpoint storms and associated model sensitivities to the use of FDDA, planetary boundary layer (PBL) turbulence physics, grid-explicit microphysics, a CPS, and enhanced horizontal diffusion. Some of the conclusions include the following: 1) Enhanced parameterized vertical mixing in the turbulent kinetic energy (TKE) turbulence scheme has shown marked improvements in the simulated fields. 2) Use of a CPS on the 4-km grid improved the precipitation and low-level wind results. 3) Use of the Hong and Pan Medium-Range Forecast PBL scheme showed larger model errors within the PBL and a clear tendency to predict much deeper PBL heights than the TKE scheme. 4) Combining observation-nudging FDDA with a CPS produced the best overall simulations. 5) Finer horizontal resolution does not always produce better simulations, especially in convectively unstable environments, and a new CPS suitable for 4-km resolution is needed. 6) Although use of current CPSs may violate their underlying assumptions related to the size of the convective element relative to the grid size, the gridpoint storm problem was greatly reduced by applying a CPS to the 4-km grid.

  17. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.

    Purpose: A medical linear accelerator-mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASGs) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house-developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck region and the other the pelvic region. 
    Phantoms were acquired with and without the grid and reconstructed with and without software correction, which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t_cup for nonuniformity and contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. Results: The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved, compared to no corrections, on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid was used. Conclusions: The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans showed significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.

  18. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid.

    PubMed

    Stankovic, Uros; van Herk, Marcel; Ploeger, Lennert S; Sonke, Jan-Jakob

    2014-06-01

    A medical linear accelerator-mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASGs) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house-developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck region and the other the pelvic region. 
    Phantoms were acquired with and without the grid and reconstructed with and without software correction, which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t(cup) for nonuniformity and contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved, compared to no corrections, on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid was used. The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans showed significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.

  19. Complex Dynamics of the Power Transmission Grid (and other Critical Infrastructures)

    NASA Astrophysics Data System (ADS)

    Newman, David

    2015-03-01

    Our modern societies depend crucially on a web of complex critical infrastructures such as power transmission networks, communication systems, transportation networks and many others. These infrastructure systems display a great number of the characteristic properties of complex systems. Important among these characteristics, they exhibit infrequent large cascading failures that often obey a power law distribution in their probability versus size. This power law behavior suggests that conventional risk analysis does not apply to these systems. It is thought that much of this behavior comes from the dynamical evolution of the system as it ages, is repaired, upgraded, and as the operational rules evolve, with human decision making playing an important role in the dynamics. In this talk, infrastructure systems as complex dynamical systems will be introduced and some of their properties explored. The majority of the talk will then be focused on the electric power transmission grid, though many of the results can be easily applied to other infrastructures. General properties of the grid will be discussed and results from a dynamical complex systems power transmission model will be compared with real world data. Then we will look at a variety of uses of this type of model. As examples, we will discuss the impact of size and network homogeneity on grid robustness, the change in risk of failure as the generation mix changes (more distributed vs. centralized, for example), as well as the effect of operational changes such as changing the operational risk aversion or grid upgrade strategies. One of the important outcomes from this work is the realization that "improvements" in the system components and operational efficiency do not always improve the system robustness, and can in fact greatly increase the risk, when measured as the risk of large failure.

  20. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    PubMed

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve upon scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. 
This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

  1. Three-dimensional forward modeling of DC resistivity using the aggregation-based algebraic multigrid method

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Deng, Ju-Zhi; Yin, Min; Yin, Chang-Chun; Tang, Wen-Wu

    2017-03-01

    To speed up three-dimensional (3D) DC resistivity modeling, we present a new multigrid method, the aggregation-based algebraic multigrid method (AGMG). We first discretize the differential equation of the secondary potential field with mixed boundary conditions by using a seven-point finite-difference method to obtain a large sparse system of linear equations. Then, we introduce the theory behind the pairwise aggregation algorithms for AGMG and use the conjugate-gradient method with the V-cycle AGMG preconditioner (AGMG-CG) to solve the linear equations. We use typical geoelectrical models to test the proposed AGMG-CG method and compare the results with analytical solutions and with the 3DDCXH algorithm for 3D DC modeling. In addition, we apply the AGMG-CG method to different grid sizes and geoelectrical models and compare it to iterative methods such as ILU-BICGSTAB, ILU-GCR, and SSOR-CG. The AGMG-CG method yields nearly linearly decreasing errors, whereas the number of iterations increases only slowly with increasing grid size. The AGMG-CG method is precise and converges fast, and thus can improve the computational efficiency of forward modeling of three-dimensional DC resistivity.
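
    Since the abstract centers on a preconditioned conjugate-gradient solve of a large sparse system, a minimal illustration of that solver structure follows. It substitutes a simple Jacobi (diagonal) preconditioner and a 1-D Poisson matrix for the AGMG V-cycle and the seven-point 3-D operator, purely to show how a preconditioner plugs into CG; it is not the paper's method.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg

n = 200
# 1-D Poisson matrix standing in for the 7-point 3-D finite-difference operator
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Diagonal (Jacobi) preconditioner standing in for the AGMG V-cycle
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: inv_diag * r)

x, info = cg(A, b, M=M, atol=1e-10)  # info == 0 on convergence
```

    A real AGMG preconditioner would replace `M` with one V-cycle of the aggregation hierarchy; the surrounding CG loop is unchanged, which is what makes the AGMG-CG combination attractive.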

  2. Insights into the physico-chemical evolution of pyrogenic organic carbon emissions from biomass burning using coupled Lagrangian-Eulerian simulations

    NASA Astrophysics Data System (ADS)

    Suciu, L. G.; Griffin, R. J.; Masiello, C. A.

    2017-12-01

    Wildfires and prescribed burning are important sources of particulate and gaseous pyrogenic organic carbon (PyOC) emissions to the atmosphere. These emissions impact atmospheric chemistry, air quality and climate, but the spatial and temporal variabilities of these impacts are poorly understood, primarily because small and fresh fire plumes are not well predicted by three-dimensional Eulerian chemical transport models due to their coarse grid size. Generally, this results in underestimation of downwind deposition of PyOC, hydroxyl radical reactivity, secondary organic aerosol formation and ozone (O3) production. However, such models are very good for simulation of multiple atmospheric processes that could affect the lifetimes of PyOC emissions over large spatiotemporal scales. Finer-resolution models, such as Lagrangian reactive plume models (plume-in-grid), can be used to trace fresh emissions at the sub-grid level of the Eulerian model. Moreover, Lagrangian plume models need the background chemistry predicted by the Eulerian models to accurately simulate the interactions of the plume material with the background air during plume aging. Therefore, by coupling the two models, the physico-chemical evolution of biomass burning plumes can be tracked from local to regional scales. In this study, we focus on the physico-chemical changes of PyOC emissions from sub-grid to grid levels using an existing chemical mechanism. We hypothesize that finer-scale Lagrangian-Eulerian simulations of several prescribed burns in the U.S. will allow more accurate downwind predictions (validated by airborne observations from smoke plumes) of PyOC emissions (i.e., submicron particulate matter, organic aerosols, refractory black carbon) as well as O3 and other trace gases. Simulation results could be used to optimize the implementation of additional PyOC speciation in the existing chemical mechanism.

  3. Sensitivity Analysis of Repeat Track Estimation Techniques for Detection of Elevation Change in Polar Ice Sheets

    NASA Astrophysics Data System (ADS)

    Harpold, R. E.; Urban, T. J.; Schutz, B. E.

    2008-12-01

    Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.
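
    The repeat-track parameter estimation described above amounts to a linear least-squares fit of elevation against a trend plus seasonal terms. A minimal sketch follows; the specific design matrix (constant, linear rate, annual sine/cosine, with t in years since the first campaign) is an illustrative assumption, and the actual analysis also estimates cross-track slope parameters.

```python
import numpy as np

def fit_repeat_track(t, h):
    """Least-squares fit of h(t) = h0 + rate*t + a*sin(2*pi*t) + b*cos(2*pi*t),
    with t in years since the first campaign and h in meters.
    Returns (h0, rate, a, b); rate is the elevation change rate (m/yr)."""
    G = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    params, *_ = np.linalg.lstsq(G, h, rcond=None)
    return params
```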

  4. Wire-grid electromagnetic modelling of metallic cylindrical objects with arbitrary section, for Ground Penetrating Radar applications

    NASA Astrophysics Data System (ADS)

    Adabi, Saba; Pajewski, Lara

    2014-05-01

    This work deals with the electromagnetic wire-grid modelling of metallic cylindrical objects, buried in the ground or embedded in a structure, for example in a wall or in a concrete slab. Wire-grid modelling of conducting objects was introduced by Richmond in 1966 [1] and, since then, this method has been extensively used over the years to simulate arbitrarily-shaped objects and compute radiation patterns of antennas, as well as the electromagnetic field scattered by targets. For any wire-grid model, a fundamental question is the choice of the optimum wire radius and grid spacing. The most widely used criterion to fix the wire size is the so-called same-area rule [2], coming from empirical observation: the total surface area of the wires has to be equal to the surface area of the object being modelled. However, just a few authors have investigated the validity of this criterion. Ludwig [3] studied the reliability of the rule by examining the canonical radiation problem of a transverse magnetic field by a circular cylinder fed with a uniform surface current, compared with a wire-grid model; he concluded that the same-area rule is optimum and that too thin wires are just as bad as too thick ones. Paknys [4] investigated the accuracy of the same-area rule for the modelling of a circular cylinder with a uniform current on it, continuing the study initiated in [3], or illuminated by a transverse magnetic monochromatic plane wave; he deduced that the same-area rule is optimal and that the field inside the cylinder is more sensitive to the wire radius than the field outside the object, making it a good error indicator. In [5], a circular cylinder was considered, embedded in a dielectric half-space and illuminated by a transverse magnetic monochromatic plane wave; the scattered near field was calculated by using the Cylindrical-Wave Approach and numerical results, obtained for different wire-grid models in the spectral domain, were compared with the exact solution. 
The Authors demonstrated that the well-known same-area criterion yields affordable results but is quite far from being the optimum: better results can be obtained with a wire radius shorter than what is suggested by the rule. In utility detection, quality control of reinforced concrete, and other civil-engineering applications, many sought targets are long and thin: in these cases, two-dimensional scattering methods can be employed for the electromagnetic modelling of scenarios. In the present work, the freeware tool GPRMAX2D [6], implementing the Finite-Difference Time-Domain method, is used to implement the wire-grid modelling of buried two-dimensional objects. The source is a line of current with a Ricker waveform. Results obtained in [5] are confirmed in the time domain and for different geometries. The highest accuracy is obtained by shortening the radius by about 10%. It seems that fewer (and larger) wires need less shortening; however, more detailed investigations are required. We suggest using at least 8-10 wires per wavelength if the field scattered by the structure has to be evaluated. The internal field is much more sensitive to the modelling configuration than the external one, and more wires should be employed when shielding effects are concerned. We plan to conduct a more comprehensive analysis in order to extract guidelines for wire sizing, to be validated on different shapes. We also look forward to verifying the possibility of using the wire-grid modelling method for the simulation of slotted objects. This work is a contribution to COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The Authors thank COST for funding COST Action TU1208. References [1] J.H. Richmond, A wire grid model for scattering by conducting bodies, IEEE Trans. Antennas Propagation AP-14 (1966), pp. 782-786. [2] S.M. Rao, D.R. Wilton, A.W. Glisson, Electromagnetic scattering by surfaces of arbitrary shape, IEEE Trans. 
Antennas Propagation AP-30 (1982), pp. 409-418. [3] A.C. Ludwig, Wire grid modeling of surfaces, IEEE Trans. Antennas Propagation AP-35 (1987), pp. 1045-1048. [4] R.J. Paknys, The near field of a wire grid model, IEEE Trans. Antennas Propagation 39 (1991), pp. 994-999. [5] F. Frezza, L. Pajewski, C. Ponti, G. Schettini, Accurate wire-grid modelling of buried conducting cylindrical scatterers, Nondestructive Testing and Evaluation (2012), 27, pp. 199-207. [6] A. Giannopoulos, Modelling ground penetrating radar by GPRMAX. Construction and Building Materials (2005), 19, pp. 755-762.

  5. Control of nanoparticle size and amount by using the mesh grid and applying DC-bias to the substrate in silane ICP-CVD process

    NASA Astrophysics Data System (ADS)

    Yoo, Seung-Wan; Hwang, Nong-Moon; You, Shin-Jae; Kim, Jung-Hyung; Seong, Dae-Jin

    2017-11-01

    The effect of applying a bias to the substrate on the size and amount of charged crystalline silicon nanoparticles deposited on the substrate was investigated in the inductively coupled plasma chemical vapor deposition process. By inserting a grounded grid with meshes above the substrate, the region just above the substrate was separated from the plasma. Thereby, crystalline Si nanoparticles formed by the gas-phase reaction in the plasma could be deposited directly on the substrate, successfully avoiding the formation of a film. Moreover, the size and the amount of deposited nanoparticles could be changed by applying a direct current bias to the substrate. When the grid of 1 × 1-mm-sized mesh was used, the nanoparticle flux increased as the negative substrate bias increased from 0 to -50 V. On the other hand, when a positive bias was applied to the substrate, Si nanoparticles were not deposited at all. Regardless of substrate bias voltage, the most frequently observed nanoparticles synthesized with the grid of 1 × 1-mm-sized mesh had sizes in the range of 10-12 nm. When the square mesh grid of 2-mm size was used, as the substrate bias was increased from -50 to 50 V, the size of the most frequently observed nanoparticles increased from the range of 8-10 nm to 40-45 nm, but the amount deposited on the substrate decreased.

  6. Operational forecasting with the subgrid technique on the Elbe Estuary

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa

    2017-04-01

    Modern remote sensing technologies can deliver very detailed land surface height data that should be considered for more accurate simulations. In that case, and even if some compromise is made with regard to the grid resolution of an unstructured grid, simulations will still require large grids, which can be computationally very demanding. The subgrid technique, first published by Casulli (2009), is based on the idea of making use of the available detailed subgrid bathymetric information while performing computations on relatively coarse grids permitting large time steps. Consequently, accuracy and efficiency are drastically enhanced compared to the classical linear method, where the underlying bathymetry is solely discretized by the computational grid. The algorithm guarantees rigorous mass conservation and nonnegative water depths for any time step size. Computational grid cells are permitted to be wet, partially wet or dry, and no drying threshold is needed. The subgrid technique is used in an operational forecast model for water level, current velocity, salinity and temperature of the Elbe estuary in Germany. Comparison is performed with the comparatively highly resolved classical unstructured-grid model UnTRIM. The daily meteorological forcing data are delivered by the German Weather Service (DWD) using the ICON-EU model. Open boundary data are delivered by the coastal model BSHcmod of the German Federal Maritime and Hydrographic Agency (BSH). Comparison of predicted water levels between the classical and the subgrid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out within less than 10 minutes on standard PC-like hardware. The model is capable of permanently delivering highly resolved temporal and spatial information on water level, current velocity, salinity and temperature for the whole estuary. 
The model also offers the possibility to recalculate any previous situation. This can be helpful, for instance, to reconstruct the context in which a certain event, such as an accident, occurred. In addition to measurement, the model can be used to improve navigability by adjusting the tidal transit schedule for container vessels that depend on the tide to approach or leave the port of Hamburg.
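
    The core of the subgrid technique is that each coarse cell's wet volume is evaluated from the full-resolution bathymetry as a function of water level, which makes nonnegative depths and wetting/drying automatic. A minimal sketch of that volume computation follows; the pixel-based representation is an illustrative simplification of Casulli's formulation, not the operational model's code.

```python
import numpy as np

def cell_volume(eta, sub_z, sub_area):
    """Wet volume of one coarse cell at water level eta, accumulated over
    its subgrid pixels (bed elevations sub_z, horizontal areas sub_area).
    Clipping the depth at zero lets a cell be wet, partially wet or dry
    without any drying threshold, and the volume is never negative."""
    return float(np.sum(np.maximum(eta - sub_z, 0.0) * sub_area))
```

    In the full scheme this nonlinear volume-level relation enters the continuity equation of each coarse cell, which is what yields rigorous mass conservation on the coarse grid.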

  7. X-ray photon correlation spectroscopy using a fast pixel array detector with a grid mask resolution enhancer.

    PubMed

    Hoshino, Taiki; Kikuchi, Moriya; Murakami, Daiki; Harada, Yoshiko; Mitamura, Koji; Ito, Kiminori; Tanaka, Yoshihito; Sasaki, Sono; Takata, Masaki; Jinnai, Hiroshi; Takahara, Atsushi

    2012-11-01

    The performance of a fast pixel array detector with a grid mask resolution enhancer has been demonstrated for X-ray photon correlation spectroscopy (XPCS) measurements to investigate fast dynamics on a microscopic scale. A detecting system, in which each pixel of a single-photon-counting pixel array detector, PILATUS, is covered by grid mask apertures, was constructed for XPCS measurements of silica nanoparticles in polymer melts. The experimental results are confirmed to be consistent by comparison with other independent experiments. By applying this method, XPCS measurements can be carried out by customizing the hole size of the grid mask to suit the experimental conditions, such as beam size, detector size and sample-to-detector distance.
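
    XPCS extracts dynamics from the normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)>/<I>^2 of each (grid-masked) pixel's time series. A minimal single-pixel sketch follows, using a plain average rather than the multi-tau binning typically used in practice.

```python
import numpy as np

def g2(intensity, max_lag):
    """Normalized intensity autocorrelation g2(tau) = <I(t)I(t+tau)>/<I>^2
    for one pixel's time series, for lags 0 .. max_lag-1."""
    norm = intensity.mean() ** 2
    return np.array([np.mean(intensity[: -lag or None] * intensity[lag:]) / norm
                     for lag in range(max_lag)])
```

    For a fluctuating signal g2 starts above 1 at zero lag and decays toward 1 as correlations are lost; the decay time is the quantity of physical interest.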

  8. Evaluation of model-predicted hazardous air pollutants (HAPs) near a mid-sized U.S. airport

    NASA Astrophysics Data System (ADS)

    Vennam, Lakshmi Pradeepa; Vizuete, William; Arunachalam, Saravanan

    2015-10-01

    Accurate modeling of aircraft-emitted pollutants in the vicinity of airports is essential to study the impact on local air quality and to answer policy- and health-impact-related questions. To quantify the air quality impacts of airport-related hazardous air pollutants (HAPs), we carried out a fine-scale (4 × 4 km horizontal resolution) Community Multiscale Air Quality (CMAQ) model simulation at the T.F. Green airport in Providence (PVD), Rhode Island. We considered temporally and spatially resolved aircraft emissions from the new Aviation Environmental Design Tool (AEDT). These model predictions were then evaluated with observations from a field campaign focused on assessing HAPs near the PVD airport. The annual normalized mean error (NME) was in the range of 36-70% for all HAPs except acrolein (>70%). The addition of highly resolved aircraft emissions showed only marginally incremental improvements in performance (1-2% decrease in NME) for some HAPs (formaldehyde, xylene). Compared to a coarser 36 × 36 km grid resolution, the 4 × 4 km grid resolution did improve performance, by up to 5-20% in NME for formaldehyde and acetaldehyde. The change in power setting (from the traditional International Civil Aviation Organization (ICAO) 7% to the observation-based 4%) doubled the aircraft idling emissions of HAPs but led to only a 2% decrease in NME. Overall, modeled aircraft-attributable contributions are in the range of 0.5-28% in the grid cell of this mid-sized airport, with maximum impacts seen only within 4-16 km of the airport grid cell. Comparison of CMAQ predictions with HAP estimates from EPA's National Air Toxics Assessment (NATA) showed similar annual mean concentrations and equally poor performance. Current estimates of HAPs for PVD are a challenge for modeling systems, and refinements in our ability to simulate aircraft emissions have made only incremental improvements. 
Even with unrealistic increases in aviation HAP emissions, the model could not match observed concentrations at the near-runway site. Our results suggest that other uncertainties in the modeling system, such as meteorology, HAP chemistry, or other emission sources, require increased scrutiny.
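
    The normalized mean error used throughout the evaluation is conventionally defined as the sum of absolute model-observation differences over the sum of observations; a sketch, assuming that standard definition (the abstract does not state the exact formula):

```python
import numpy as np

def normalized_mean_error(model, obs):
    """NME (%) = 100 * sum(|model - obs|) / sum(obs),
    paired model predictions and observations."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 100.0 * np.sum(np.abs(model - obs)) / np.sum(obs)
```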

  9. Drag Prediction for the NASA CRM Wing-Body-Tail Using CFL3D and OVERFLOW on an Overset Mesh

    NASA Technical Reports Server (NTRS)

    Sclafani, Anthony J.; DeHaan, Mark A.; Vassberg, John C.; Rumsey, Christopher L.; Pulliam, Thomas H.

    2010-01-01

    In response to the fourth AIAA CFD Drag Prediction Workshop (DPW-IV), the NASA Common Research Model (CRM) wing-body and wing-body-tail configurations are analyzed using the Reynolds-averaged Navier-Stokes (RANS) flow solvers CFL3D and OVERFLOW. Two families of structured, overset grids are built for DPW-IV. Grid Family 1 (GF1) consists of a coarse (7.2 million), medium (16.9 million), fine (56.5 million), and extra-fine (189.4 million) mesh. Grid Family 2 (GF2) is an extension of the first and includes a superfine (714.2 million) and an ultra-fine (2.4 billion) mesh. The medium grid anchors both families with an established build process for accurate cruise drag prediction studies. This base mesh is coarsened and enhanced to form a set of parametrically equivalent grids that increase in size by a factor of roughly 3.4 from one level to the next denser level. Both CFL3D and OVERFLOW are run on GF1 using a consistent numerical approach. Additional OVERFLOW runs are made to study effects of differencing scheme and turbulence model on GF1 and to obtain results for GF2. All CFD results are post-processed using Richardson extrapolation, and approximate grid-converged values of drag are compared. The medium grid is also used to compute a trimmed drag polar for both codes.
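
    The Richardson extrapolation used in the post-processing can be sketched for three grids with a constant refinement ratio. With the meshes growing by roughly 3.4× in cell count, the linear refinement ratio in 3-D would be about 3.4^(1/3) ≈ 1.5; the function below is a generic textbook sketch, not the workshop's exact procedure.

```python
import math

def richardson(f_fine, f_med, f_coarse, r):
    """Observed order of accuracy p and grid-converged estimate from
    solutions on three grids with constant linear refinement ratio r
    (listed fine to coarse)."""
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)
    return p, f_exact
```

    Applied to drag coefficients from the coarse/medium/fine levels, f_exact approximates the value on an infinitely fine grid, which is how the approximate grid-converged drag values are compared.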

  10. Propagation of Disturbances in AC Electricity Grids.

    PubMed

    Tamrakar, Samyak; Conrath, Michael; Kettemann, Stefan

    2018-04-24

    The energy transition towards high shares of renewable energy will affect the stability of electricity grids in many ways. Here, we aim to study its impact on the propagation of disturbances by solving nonlinear swing equations describing coupled rotating masses of synchronous generators and motors on different grid topologies. We consider a tree, a square grid, and, as a real grid topology, the German transmission grid. We identify ranges of parameters with different transient dynamics: the disturbance decays exponentially in time, either superimposed by oscillations with the fast decay rate of a single node, or with a smaller decay rate without oscillations. Most remarkably, as the grid inertia is lowered, nodes may become correlated, slowing the propagation from ballistic to diffusive motion, with the disturbance decaying as a power law in time. Applying linear response theory, we show that tree grids have a spectral gap leading to exponential relaxation, protected by topology and independent of grid size. Meshed grids are found to have a spectral gap that decreases with increasing grid size, leading to slow power-law relaxation and collective diffusive propagation of disturbances. We conclude by discussing the consequences if no measures are undertaken to preserve grid inertia in the energy transition.
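A minimal numerical sketch of the swing-equation setup: a disturbance injected at one node of a 10-node chain (a stand-in for a tree topology) spreads along the network and decays through damping. All parameter values are illustrative, not fitted to any real grid:

```python
import numpy as np

def swing_step(theta, omega, adjacency, power, inertia, damping, coupling, dt):
    """One explicit-Euler step of the coupled swing equations
    M*theta_i'' + gamma*theta_i' = P_i + K * sum_j A_ij sin(theta_j - theta_i)."""
    flow = coupling * np.sum(adjacency * np.sin(theta[None, :] - theta[:, None]),
                             axis=1)
    accel = (power - damping * omega + flow) / inertia
    return theta + dt * omega, omega + dt * accel

# Toy 10-node chain; perturb the frequency of node 0 and integrate.
n = 10
adjacency = np.zeros((n, n))
for i in range(n - 1):
    adjacency[i, i + 1] = adjacency[i + 1, i] = 1.0
theta, omega = np.zeros(n), np.zeros(n)
omega[0] = 1.0                       # initial frequency disturbance
for _ in range(2000):                # integrate to t = 20
    theta, omega = swing_step(theta, omega, adjacency, np.zeros(n),
                              inertia=1.0, damping=0.1, coupling=1.0, dt=0.01)
```

Lowering `inertia` in this sketch mimics the loss of rotating mass the abstract discusses; the paper's analysis of ballistic versus diffusive spreading requires much larger networks than this toy chain.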

  11. Laser-induced superhydrophobic grid patterns on PDMS for droplet arrays formation

    NASA Astrophysics Data System (ADS)

    Farshchian, Bahador; Gatabi, Javad R.; Bernick, Steven M.; Park, Sooyeon; Lee, Gwan-Hyoung; Droopad, Ravindranath; Kim, Namwon

    2017-02-01

    We demonstrate a facile single-step laser treatment process to render a polydimethylsiloxane (PDMS) surface superhydrophobic. By synchronizing a pulsed nanosecond laser source with a motorized stage, superhydrophobic grid patterns were written on the surface of PDMS. Hierarchical micro- and nanostructures were formed in the irradiated areas, while non-irradiated areas were covered by nanostructures due to the deposition of ablated particles. Arrays of droplets form spontaneously on the laser-patterned PDMS with a superhydrophobic grid pattern when the sample is simply immersed in and withdrawn from water, owing to the different wetting properties of the irradiated and non-irradiated areas. The effects of withdrawal speed and pitch size of the superhydrophobic grid on the size of the formed droplets were investigated experimentally. The droplet size initially increases with withdrawal speed and then does not change significantly beyond a certain point. Moreover, larger droplets are formed by increasing the pitch size of the superhydrophobic grid. The droplet arrays formed on the laser-patterned PDMS with wettability contrast can potentially be used for patterning of particles, chemicals, and biomolecules, and also for cell-screening applications.

  12. Numerical generation of two-dimensional grids by the use of Poisson equations with grid control at boundaries

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.; Steger, J. L.

    1980-01-01

    A method for generating boundary-fitted, curvilinear, two-dimensional grids by the use of the Poisson equations is presented. Grids of C-type and O-type were made about airfoils and other shapes, with circular, rectangular, cascade-type, and other outer boundary shapes. Both viscous and inviscid spacings were used. In all cases, two important types of grid control can be exercised at both inner and outer boundaries. First is arbitrary control of the distances between the boundaries and the adjacent lines of the same coordinate family, i.e., stand-off distances. Second is arbitrary control of the angles with which lines of the opposite coordinate family intersect the boundaries. Thus, both grid cell size (or aspect ratio) and grid cell skewness are controlled at boundaries. Reasonable cell size and shape are ensured even in cases wherein extreme boundary shapes would tend to cause skewness or poorly controlled grid spacing. An inherent feature of the Poisson equations is that lines in the interior of the grid smoothly connect the boundary points (the grid mapping functions are second-order differentiable).
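For reference, the elliptic system used for this kind of grid generation is conventionally written as follows. This is the standard textbook (Thompson-type) form; the specific boundary-control expressions for the forcing functions P and Q derived in the paper are not reproduced here:

```latex
% Poisson grid-generation system; P and Q are the forcing functions
% that provide the boundary control of spacing and intersection angle.
\xi_{xx} + \xi_{yy} = P(\xi,\eta), \qquad
\eta_{xx} + \eta_{yy} = Q(\xi,\eta)
% Interchanging dependent and independent variables gives the form
% actually solved on the uniform computational grid:
\alpha\,x_{\xi\xi} - 2\beta\,x_{\xi\eta} + \gamma\,x_{\eta\eta}
  = -J^{2}\,(P\,x_{\xi} + Q\,x_{\eta}), \qquad
\alpha\,y_{\xi\xi} - 2\beta\,y_{\xi\eta} + \gamma\,y_{\eta\eta}
  = -J^{2}\,(P\,y_{\xi} + Q\,y_{\eta}),
% where
\alpha = x_{\eta}^{2}+y_{\eta}^{2}, \quad
\beta  = x_{\xi}x_{\eta}+y_{\xi}y_{\eta}, \quad
\gamma = x_{\xi}^{2}+y_{\xi}^{2}, \quad
J = x_{\xi}y_{\eta}-x_{\eta}y_{\xi}.
```

Choosing P and Q so that specified stand-off distances and intersection angles hold on the boundaries is what gives the cell-size and skewness control described in the abstract.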

  13. Spatial Variability of CCN Sized Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Asmi, A.; Väänänen, R.

    2014-12-01

    Computational limitations restrict the grid size used in GCM models, and for many cloud types the grid cells are too large compared to the scale of the cloud formation processes. Several parameterizations for, e.g., convective cloud formation exist, but information on the spatial subgrid variation of the cloud condensation nuclei (CCN)-sized aerosol concentration is lacking. We quantify this variation as a function of spatial scale using datasets from airborne aerosol measurement campaigns around the world, including EUCAARI LONGREX, ATAR, INCA, INDOEX, CLAIRE, PEGASOS, and several regional airborne campaigns in Finland. The typical shapes of the distributions are analyzed. When possible, we use information obtained by CCN counters. In other cases, we use the particle size distribution measured by, for example, an SMPS to approximate the CCN concentration. Other instruments used include optical particle counters and condensation particle counters. In GCM models, the CCN concentration in each grid-box is often taken to be either flat or an arithmetic mean of the concentration inside the grid-box. However, the aircraft data show that the concentration values are often lognormally distributed. This, combined with subgrid variations in land use and atmospheric properties, can cause the aerosol-cloud interactions calculated using mean values to vary significantly from the true effects, both temporally and spatially, which in turn can introduce non-linear bias into the GCMs. We calculate the CCN aerosol concentration distribution as a function of different spatial scales. The measurements allow us to study the variation of these distributions from hundreds of meters up to hundreds of kilometers. This is used to quantify the potential error when mean values are used in GCMs.
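The non-linear bias from using grid-box means can be illustrated with Jensen's inequality. The lognormal parameters and the saturating activation curve below are purely hypothetical stand-ins for any non-linear aerosol-cloud response:

```python
import numpy as np

rng = np.random.default_rng(0)

# Subgrid CCN concentrations (cm^-3): lognormally distributed, as the
# aircraft data suggest. The median (200) and sigma (0.8) are illustrative.
ccn = rng.lognormal(mean=np.log(200.0), sigma=0.8, size=100_000)

def activated(n_ccn):
    """Hypothetical concave (saturating) cloud-droplet activation curve;
    stands in for any non-linear aerosol-cloud response."""
    return n_ccn / (1.0 + n_ccn / 300.0)

mean_of_response = activated(ccn).mean()  # resolving subgrid variability
response_of_mean = activated(ccn.mean())  # GCM-style grid-box-mean shortcut

# Jensen's inequality: for a concave response, the grid-box-mean shortcut
# overestimates the true mean response.
bias = response_of_mean - mean_of_response
```

The sign and size of the bias depend on the curvature of the response and the width of the subgrid distribution, which is exactly why the scale dependence of that width matters.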

  14. Semi-Infinite Geology Modeling Algorithm (SIGMA): a Modular Approach to 3D Gravity

    NASA Astrophysics Data System (ADS)

    Chang, J. C.; Crain, K.

    2015-12-01

    Conventional 3D gravity computations can take days, weeks, or even months, depending on the size and resolution of the data being modeled. Additional modeling runs, due to technical malfunctions or data modifications, compound computation times even further. We propose a new modeling algorithm that utilizes vertical line elements to approximate mass and accepts non-gridded (point) gravity observations. This algorithm is (1) orders of magnitude faster than conventional methods, (2) accurate to less than 0.1% error, and (3) modular. The modularity of this methodology means that researchers can modify their geology/terrain or gravity data, and only the modified component needs to be re-run. Additionally, land-, sea-, and air-based platforms can be modeled at their observation point, without having to filter data into a synthesized grid.
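A sketch of the vertical-line-element idea, using the closed-form attraction of a line mass. The column list and densities are hypothetical, and this is a sketch of the general technique, not the SIGMA implementation:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def vertical_line_gz(lam, r, z1, z2):
    """Vertical gravity at the origin due to a vertical line element of
    linear density lam (kg/m) at horizontal distance r (m), spanning
    depths z1..z2 (m, positive downward). Closed form obtained by
    integrating the point-mass kernel G*lam*z / (r^2 + z^2)^(3/2)."""
    return G * lam * (1.0 / math.hypot(r, z1) - 1.0 / math.hypot(r, z2))

# A density model reduces to a sum over line elements, evaluated at
# arbitrary (non-gridded) stations; these columns are hypothetical.
columns = [(1.0e6, 100.0, 0.0, 1000.0),   # (lam, r, z1, z2)
           (2.0e6, 250.0, 0.0, 1500.0)]
gz = sum(vertical_line_gz(*c) for c in columns)
```

Because each station's response is an independent sum over elements, editing one part of the geology only requires recomputing the terms for the modified columns, which is the modularity the abstract emphasizes.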

  15. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  16. Numerically exploring habitat fragmentation effects on populations using cell-based coupled map lattices

    Treesearch

    Michael Bevers; Curtis H. Flather

    1999-01-01

    We examine habitat size, shape, and arrangement effects on populations using a discrete reaction-diffusion model. Diffusion is modeled passively and applied to a cellular grid of territories forming a coupled map lattice. Dispersal mortality is proportional to the amount of nonhabitat and fully occupied habitat surrounding a given cell, with distance decay. After...
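The kind of cell-based coupled map lattice described here can be sketched minimally: local population growth per cell followed by passive dispersal to neighboring cells, with emigrants off the grid edge lost. The growth rule, dispersal fraction, and boundary treatment below are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

def cml_step(n, r, k, d):
    """One generation of a toy cell-based coupled map lattice:
    Ricker-style local growth, then passive diffusion of a fraction d
    of each cell's population to its four rook neighbors."""
    n = n * np.exp(r * (1.0 - n / k))   # local growth in each cell
    out = d * n / 4.0                   # emigrants per neighbor
    stay = n - d * n
    # dispersal to the four neighbors; the grid boundary acts as
    # non-habitat, so emigrants off the edge suffer dispersal mortality
    stay[1:, :] += out[:-1, :]
    stay[:-1, :] += out[1:, :]
    stay[:, 1:] += out[:, :-1]
    stay[:, :-1] += out[:, 1:]
    return stay

grid = np.full((20, 20), 0.5)
for _ in range(100):
    grid = cml_step(grid, r=1.0, k=1.0, d=0.2)
```

Even this toy version reproduces the qualitative edge effect the paper studies: cells near the habitat boundary equilibrate below the carrying capacity because a share of their dispersers is lost.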

  17. Multiscale image processing and antiscatter grids in digital radiography.

    PubMed

    Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D

    2009-01-01

    Scatter radiation is a source of noise and results in a decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.

  18. Reliability of numerical wind tunnels for VAWT simulation

    NASA Astrophysics Data System (ADS)

    Raciti Castelli, M.; Masi, M.; Battisti, L.; Benini, E.; Brighenti, A.; Dossena, V.; Persico, G.

    2016-09-01

    Computational Fluid Dynamics (CFD) simulations based on the Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations have long been widely used to study vertical axis wind turbines (VAWTs). Following a comprehensive experimental survey of the wakes downwind of a troposkien-shaped rotor, a campaign of two-dimensional simulations is presented here, with the aim of assessing their reliability in reproducing the main features of the flow and identifying areas needing additional research. Starting from a well-consolidated turbulence model (k-ω SST) and an unstructured grid typology, the main simulation settings are varied here to handle rotating grids reproducing a VAWT operating in an open-jet wind tunnel. The dependence of the numerical predictions on the selected grid spacing is investigated, thus establishing the least refined grid that is still capable of capturing relevant flow features, both integral quantities (rotor torque) and local ones (wake velocities).

  19. Counterrotating prop-fan simulations which feature a relative-motion multiblock grid decomposition enabling arbitrary time-steps

    NASA Technical Reports Server (NTRS)

    Janus, J. Mark; Whitfield, David L.

    1990-01-01

    Improvements are presented of a computer algorithm developed for the time-accurate flow analysis of rotating machines. The flow model is a finite volume method utilizing a high-resolution approximate Riemann solver for interface flux definitions. The numerical scheme is a block LU implicit iterative-refinement method which possesses apparent unconditional stability. Multiblock composite gridding is used to orderly partition the field into a specified arrangement of blocks exhibiting varying degrees of similarity. Block-block relative motion is achieved using local grid distortion to reduce grid skewness and accommodate arbitrary time step selection. A general high-order numerical scheme is applied to satisfy the geometric conservation law. An even-blade-count counterrotating unducted fan configuration is chosen for a computational study comparing solutions resulting from altering parameters such as time step size and iteration count. The solutions are compared with measured data.

  20. A Study of the Response of Deep Tropical Clouds to Mesoscale Processes. Part 1; Modeling Strategies and Simulations of TOGA-COARE Convective Systems

    NASA Technical Reports Server (NTRS)

    Johnson, Daniel E.; Tao, W.-K.; Simpson, J.; Sui, C.-H.; Einaudi, Franco (Technical Monitor)

    2001-01-01

    Interactions between deep tropical clouds over the western Pacific warm pool and the larger-scale environment are key to understanding climate change. Cloud models are an extremely useful tool for simulating and providing statistical information on heat and moisture transfer processes between cloud systems and the environment, and can therefore be utilized to substantially improve cloud parameterizations in climate models. In this paper, the Goddard Cumulus Ensemble (GCE) cloud-resolving model is used in multi-day simulations of deep tropical convective activity over the Tropical Ocean-Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE). Large-scale temperature and moisture advective tendencies, and horizontal momentum from the TOGA-COARE Intensive Flux Array (IFA) region, are applied to the GCE version that incorporates cyclical boundary conditions. Sensitivity experiments show that grid domain size produces the largest response in domain-mean temperature and moisture deviations, as well as cloudiness, compared with grid horizontal or vertical resolution and advection scheme. A minimum grid-domain size of 500 km is found to be needed to adequately resolve the convective cloud features. The control experiment shows that atmospheric heating and moistening are primarily a response to the cloud latent processes of condensation/evaporation and deposition/sublimation and, to a lesser extent, melting of ice particles. Air-sea exchange of heat and moisture is found to be significant but of secondary importance, while the radiational response is small. The simulated rainfall and atmospheric heating and moistening agree well with observations and compare favorably with other models simulating this case.

  1. Computation at a coordinate singularity

    NASA Astrophysics Data System (ADS)

    Prusa, Joseph M.

    2018-05-01

    Coordinate singularities are sometimes encountered in computational problems. An important example involves global atmospheric models used for climate and weather prediction. Classical spherical coordinates can be used to parameterize the manifold, that is, generate a grid for the computational spherical shell domain. This particular parameterization offers significant benefits such as orthogonality and exact representation of curvature and connection (Christoffel) coefficients. But it also exhibits two polar singularities, and at or near these points typical continuity/integral constraints on dependent fields and their derivatives are generally inadequate and lead to poor model performance and erroneous results. Other parameterizations have been developed that eliminate polar singularities, but problems of weaker singularities and enhanced grid noise compared to spherical coordinates (away from the poles) persist. In this study, reparameterization invariance of geometric objects (scalars, vectors, and the forms generated by their covariant derivatives) is utilized to generate asymptotic forms for dependent fields of interest valid in the neighborhood of a pole. The central concept is that such objects cannot be altered by the metric structure of a parameterization. The new boundary conditions enforce symmetries that are required for transformations of geometric objects. They are implemented in an implicit polar filter of a structured-grid, nonhydrostatic global atmospheric model that is simulating idealized Held-Suarez flows. A series of test simulations using different configurations of the asymptotic boundary conditions are made, along with control simulations that use the default model numerics with no absorber, at three different grid sizes. Typically the test simulations are ∼20% faster in wall-clock time than the controls, resulting from a decrease in noise at the poles in all cases.
In the control simulations, adverse numerical effects from the polar singularity are observed to increase with grid resolution. In contrast, test simulations demonstrate robust polar behavior independent of grid resolution.

  2. Performance prediction using geostatistics and window reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanilla, J.P.; Al-Khalawi, A.A.; Johnson, S.G.

    1995-11-01

    This paper is the first window model study in the northern area of a large carbonate reservoir in Saudi Arabia. It describes window reservoir simulation with geostatistics to model uneven water encroachment in the southwest producing area of the northern portion of the reservoir. In addition, this paper describes performance predictions that investigate the sweep efficiency of the current peripheral waterflood. A 50 x 50 x 549 (240 m x 260 m x 0.15 m average grid block size) geological model was constructed with geostatistics software. Conditional simulation was used to obtain spatial distributions of porosity and volume of dolomite. Core data transforms were used to obtain horizontal and vertical permeability distributions. Simple averaging techniques were used to convert the 549-layer geological model to a 50 x 50 x 10 (240 m x 260 m x 8 m average grid block size) window reservoir simulation model. Flux injectors and flux producers were assigned to the outermost grid blocks. Historical boundary flux rates were obtained from a coarsely-gridded full-field model. Pressure distributions, water cuts, GORs, and recent flowmeter data were history matched. Permeability correction factors and numerous parameter adjustments were required to obtain the final history match. The permeability correction factors were based on pressure-transient permeability-thickness analyses. The prediction phase of the study evaluated the effects of infill drilling, the use of artificial lift, workovers, horizontal wells, producing rate constraints, and tight zone development to formulate depletion strategies for the development of this area. The window model will also be used to investigate day-to-day reservoir management problems in this area.
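The 549-to-10 layer conversion by simple averaging can be sketched as thickness-weighted averaging of the fine layers within each coarse layer. The equal-count grouping and property values below are illustrative, not the study's actual upscaling:

```python
import numpy as np

def upscale_porosity(phi_fine, h_fine, n_coarse):
    """Thickness-weighted arithmetic averaging of fine-layer porosity
    into coarse simulation layers. The equal-count layer grouping is an
    illustrative assumption; real workflows group by geology/flow units."""
    groups = np.array_split(np.arange(phi_fine.size), n_coarse)
    return np.array([np.average(phi_fine[g], weights=h_fine[g])
                     for g in groups])

# 549 geological layers (~0.15 m each) -> 10 simulation layers (~8 m).
rng = np.random.default_rng(2)
phi_fine = rng.uniform(0.05, 0.30, 549)
h_fine = np.full(549, 0.15)
phi_coarse = upscale_porosity(phi_fine, h_fine, 10)
```

Thickness weighting preserves total pore volume exactly, which is the usual justification for this simple averaging of porosity (permeability upscaling generally needs different averages).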

  3. Grid generation in three dimensions by Poisson equations with control of cell size and skewness at boundary surfaces

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.; Steger, J. L.

    1983-01-01

    An algorithm for generating computational grids about arbitrary three-dimensional bodies is developed. The elliptic partial differential equation (PDE) approach developed by Steger and Sorenson and used in the NASA computer program GRAPE is extended from two to three dimensions. Forcing functions which are found automatically by the algorithm give the user the ability to control mesh cell size and skewness at boundary surfaces. This algorithm, as is typical of PDE grid generators, gives smooth grid lines and spacing in the interior of the grid. The method is applied to a rectilinear wind-tunnel case and to two body shapes in spherical coordinates.

  4. Decay of grid turbulence in superfluid helium-4: Mesh dependence

    NASA Astrophysics Data System (ADS)

    Yang, J.; Ihas, G. G.

    2018-03-01

    Temporal decay of grid turbulence is experimentally studied in superfluid 4He in a large square channel. The second sound attenuation method is used to measure the turbulent vortex line density (L) with a phase-locked tracking technique to minimize frequency shift effects induced by temperature fluctuations. Two different grids (0.8 mm and 3.0 mm mesh) are pulled to generate turbulence. Different power laws for the decaying behavior are predicted by theory: L should decay as t^(-11/10) while the length scale of the energy-containing eddies grows from the grid mesh size to the size of the channel. At later times, after the energy-containing eddy size becomes comparable to the channel, L should follow t^(-3/2). Our recent experimental data exhibit evidence for t^(-11/10) at early times but t^(-2) instead of t^(-3/2) at later times. Moreover, a consistent bump/plateau feature is prominent between the two decay regimes for the smaller (0.8 mm) grid mesh holes but absent with a grid mesh hole of 3.0 mm. This implies that in the large channel different types of turbulence are generated, depending on mesh hole size (mesh Reynolds number) compared to channel Reynolds number.
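Decay exponents of this kind are typically identified as slopes in log-log space. A sketch with synthetic data (the decay law and noise level are illustrative, not the experiment's measurements):

```python
import numpy as np

# Synthetic vortex-line-density decay L(t) ~ t^(-11/10) with 1% noise,
# standing in for second-sound attenuation data (purely illustrative).
rng = np.random.default_rng(1)
t = np.logspace(-1, 1, 50)          # decay time, arbitrary units
L = t ** -1.1 * (1.0 + 0.01 * rng.standard_normal(t.size))

# The decay exponent is the slope of log L versus log t.
slope, intercept = np.polyfit(np.log(t), np.log(L), 1)
```

In practice one fits each regime separately and looks for the crossover (and, here, the bump/plateau) between them rather than fitting a single global slope.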

  5. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, Josef; Magnera, Thomas F.; David, Donald E.; Harrison, Robin M.

    1999-01-01

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures.

  6. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, J.; Magnera, T.F.; David, D.E.; Harrison, R.M.

    1999-03-02

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures. 9 figs.

  7. Centrifugal Modelling of Soil Structures. Part I. Centrifugal Modelling of Slope Failures.

    DTIC Science & Technology

    1979-03-01

    comparing successive photographs in which soil movement was noted by the change in position of the original grid of silvered indicator balls. Inherent in ... of uplift forces was also observed. In nineteen coal mine waste embankment dam models, throughout which the soil particle size distribution was altered for modelling of different

  8. DNS/LES Simulations of Separated Flows at High Reynolds Numbers

    NASA Technical Reports Server (NTRS)

    Balakumar, P.

    2015-01-01

    Direct numerical simulations (DNS) and large-eddy simulations (LES) of flow through a periodic channel with a constriction are performed using the dynamic Smagorinsky model at two Reynolds numbers, 2800 and 10595. The LES equations are solved using higher-order compact schemes. DNS are performed for the lower Reynolds number case using a fine grid, and the data are used to validate the LES results obtained with a coarse and a medium-size grid. LES are also performed for the higher Reynolds number case using a coarse and a medium-size grid. The results are compared with an existing reference data set. The DNS and LES results agree well with the reference data. Reynolds stresses, sub-grid eddy viscosity, and the budgets for the turbulent kinetic energy are also presented. It is found that the turbulent fluctuations in the normal and spanwise directions have the same magnitude. The turbulent kinetic energy budget shows that the production peaks near the separation-point region and that the production-to-dissipation ratio is very high, on the order of five, in this region. It is also observed that the production is balanced by the advection, diffusion, and dissipation in the shear layer region. The dominant term is the turbulent diffusion, which is about twice the molecular dissipation.

  9. Sizing and modelling of photovoltaic water pumping system

    NASA Astrophysics Data System (ADS)

    Al-Badi, A.; Yousef, H.; Al Mahmoudi, T.; Al-Shammaki, M.; Al-Abri, A.; Al-Hinai, A.

    2018-05-01

    With the decline in the price of photovoltaics (PVs), their use as a power source for water pumping is an attractive alternative to diesel generators or electric motors driven by a grid system. In this paper, a method to design a PV pumping system is presented and discussed, and is then used to calculate the required PV size for an existing farm. Furthermore, the amount of carbon dioxide emissions saved by using a PV water pumping system instead of diesel-fuelled generators or a grid-connected electric motor is calculated. In addition, an experimental set-up is developed for the PV water pumping system using both DC and AC motors with batteries. The experimental tests are used to validate the developed MATLAB model. This work demonstrates that a PV water pumping system not only improves living conditions in rural areas but also protects the environment, and can be a cost-effective application in remote locations.
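A first-cut sizing of the kind such a design method formalizes can be sketched from the daily hydraulic energy. The efficiency, margin, and solar-resource figures below are illustrative assumptions, not the paper's values:

```python
RHO_G = 1000.0 * 9.81   # water density (kg/m^3) times gravity (m/s^2)

def pv_array_watts(daily_volume_m3, head_m, peak_sun_hours,
                   subsystem_eff=0.45, sizing_margin=1.2):
    """First-cut PV array rating (Wp) for a water pump. Daily hydraulic
    energy rho*g*V*H (J) is converted to Wh and divided by the combined
    converter/motor/pump efficiency and the solar resource in peak sun
    hours (h/day at 1 kW/m^2). Efficiency and margin are illustrative."""
    hydraulic_wh = RHO_G * daily_volume_m3 * head_m / 3600.0
    return sizing_margin * hydraulic_wh / (subsystem_eff * peak_sun_hours)

# Example: lift 20 m^3/day through a 30 m total head with 6 peak sun hours.
p_wp = pv_array_watts(20.0, 30.0, 6.0)
```

A full design method refines this estimate with pump curves, converter behavior, and monthly solar data, which is where a validated MATLAB model of the kind the paper develops comes in.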

  10. Advanced ion thruster and electrochemical launcher research

    NASA Technical Reports Server (NTRS)

    Wilbur, P. J.

    1983-01-01

    The theoretical model of orificed hollow cathode operation predicted experimentally observed cathode performance with reasonable accuracy. The deflection and divergence characteristics of ion beamlets emanating from a two-grid optics system as a function of the relative offset of the screen and accel grid hole axes were described. Ion currents associated with discharge chamber operation were controlled to improve ion thruster performance markedly. Limitations imposed by basic physical laws on reductions in screen grid hole size and grid spacing for ion optics systems were described. The influence of stray magnetic fields in the vicinity of a neutralizer on the performance of that neutralizer was demonstrated. The ion current density extracted from a thruster was enhanced by injecting electrons into the region between its ion accelerating grids. A theoretical analysis of the electrothermal ramjet concept for launching space-bound payloads at high acceleration levels is described. The operation of this system is broken down into two phases. In the light-gas-gun phase the payload is accelerated to the velocity at which the ramjet phase can commence. Preliminary models of operation are examined and shown to yield overall energy efficiencies for a typical Earth-escape launch of 60 to 70%. When shock losses are incorporated, these efficiencies are still observed to remain at the relatively high values of 40 to 50%.

  11. Fatigue and Fracture Characterization of GlasGrid®-Reinforced Asphalt Concrete Pavement

    NASA Astrophysics Data System (ADS)

    Safavizadeh, Seyed Amirshayan

    The purpose of this research is to develop an experimental and analytical framework for describing, modeling, and predicting the reflective cracking patterns and crack growth rates in GlasGrid®-reinforced asphalt pavements. In order to fulfill this objective, the effects of different interfacial conditions (mixture and tack coat type, and grid opening size) on reflective cracking-related failure mechanisms and the fatigue and fracture characteristics of fiberglass grid-reinforced asphalt concrete beams were studied by means of four- and three-point bending notched beam fatigue tests (NBFTs) and cyclic and monotonic interface shear tests. The digital image correlation (DIC) technique was utilized to obtain the displacement and strain contours of specimen surfaces during each test. The DIC analysis results were used to develop crack tip detection methods that were in turn used to determine interfacial crack lengths in the shear tests, and vertical and horizontal (interfacial) crack lengths in the notched beam fatigue tests. Linear elastic fracture mechanics (LEFM) principles were applied to the crack length data to describe the crack growth. In the case of the NBFTs, a finite element (FE) code was developed and used for modeling each beam at different stages of testing and back-calculating the stress intensity factors (SIFs) for the vertical and horizontal cracks. The local effect of reinforcement on the stiffness of the system at a vertical crack-interface intersection, or the resistance of the grid system to the deflection differential at the joint/crack (hereinafter called joint stiffness), for GlasGrid-reinforced asphalt concrete beams was determined by implementing a joint stiffness parameter into the finite element code. The strain-level dependency of the fatigue and fracture characteristics of the GlasGrid-reinforced beams was studied by performing four-point bending notched beam fatigue tests at strain levels of 600, 750, and 900 microstrain. 
These beam tests were conducted at 15°C, 20°C, and 23°C, with the main focus being to find the characteristics at 20°C. The results obtained from the tests at the different temperatures were used to investigate the effects of temperature on the reflective cracking performance of the grid-reinforced beam specimens. The temperature tests were also used to investigate the validity of the time-temperature superposition (t-TS) principle in shear and the beam fatigue performance of the grid-reinforced specimens. The NBFT results suggest that different interlayer conditions do not reflect a unique failure mechanism, and thus, in order to predict and model the performance of grid-reinforced pavement, all the mechanisms involved in weakening its structural integrity, including damage within the asphalt layers and along the interface, must be considered. The shear and beam fatigue test results suggest that the grid opening size, interfacial bond quality, and mixture type play important roles in the reflective cracking performance of GlasGrid-reinforced asphalt pavements. According to the NBFT results, GlasGrid reinforcement retards reflective crack growth by stiffening the composite system and introducing a joint stiffness parameter. The results also show that the higher the bond strength and interlayer stiffness values, the higher the joint stiffness and retardation effects. The t-TS studies proved the validity of this principle in terms of the reflective crack growth of the grid-reinforced beam specimens and the shear modulus and shear strength of the grid-reinforced interfaces.

  12. A Composite Source Model With Fractal Subevent Size Distribution

    NASA Astrophysics Data System (ADS)

    Burjanek, J.; Zahradnik, J.

    A composite source model, incorporating different-sized subevents, provides a possible description of complex rupture processes during earthquakes. The number of subevents with characteristic dimension greater than R is proportional to R⁻². The subevents do not overlap with each other, and the sum of their areas equals the area of the target event (e.g., the mainshock). The subevents are distributed randomly over the fault. Each subevent is modeled as a finite source, using a kinematic approach (radial rupture propagation, constant rupture velocity, boxcar slip-velocity function, with constant rise time on the subevent). The final slip at each subevent is related to its characteristic dimension, using constant stress-drop scaling. Variation of rise time with subevent size is a free parameter of the modeling. The nucleation point of each subevent is taken as the point closest to the mainshock hypocentre. The synthetic Green's functions are calculated by the discrete-wavenumber method in a 1D horizontally layered crustal model in a relatively coarse grid of points covering the fault plane. The Green's functions needed for the kinematic model in a fine grid are obtained by cubic spline interpolation. As different frequencies may be efficiently calculated with different sampling, the interpolation simplifies and speeds up the procedure significantly. The composite source model described above allows interpretation in terms of a kinematic model with non-uniform final slip and rupture velocity spatial distributions. The 1994 Northridge earthquake (Mw = 6.7) is used as a validation event. The strong-ground motion modeling of the 1999 Athens earthquake (Mw = 5.9) is also performed.
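    The subevent population described above (count above size R scaling as R⁻², areas accumulating to the mainshock area) can be sketched with a truncated power-law sampler. This is a minimal sketch: the function name, the size bounds, and the draw-until-area-filled loop are illustrative assumptions, not the authors' exact scheme.

```python
import math
import random

def draw_subevents(target_area, r_min, r_max, seed=0):
    """Sample subevent radii whose cumulative count above R scales as R^-2,
    accumulating subevents until their summed area fills the target fault area.

    Inverse-CDF sampling of the truncated power law:
        F(r) = (r_min^-2 - r^-2) / (r_min^-2 - r_max^-2)
    """
    rng = random.Random(seed)
    radii, area = [], 0.0
    while area < target_area:
        u = rng.random()
        # invert F(r) = u for r in [r_min, r_max]
        r = r_min / math.sqrt(1.0 - u * (1.0 - (r_min / r_max) ** 2))
        radii.append(r)
        area += math.pi * r * r
    return radii

radii = draw_subevents(target_area=300.0, r_min=0.5, r_max=5.0)
```

A real implementation would also place the subevents on the fault without overlap, which this sketch omits.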

  13. Use of North American and European air quality networks to evaluate global chemistry-climate modeling of surface ozone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schnell, J. L.; Prather, M. J.; Josse, B.

    Here we test the current generation of global chemistry–climate models in their ability to simulate observed, present-day surface ozone. Models are evaluated against hourly surface ozone from 4217 stations in North America and Europe that are averaged over 1° × 1° grid cells, allowing commensurate model–measurement comparison. Models are generally biased high during all hours of the day and in all regions. Most models simulate the shape of regional summertime diurnal and annual cycles well, correctly matching the timing of hourly (~ 15:00 local time (LT)) and monthly (mid-June) peak surface ozone abundance. The amplitude of these cycles is less successfully matched. The observed summertime diurnal range (~ 25 ppb) is underestimated in all regions by about 7 ppb, and the observed seasonal range (~ 21 ppb) is underestimated by about 5 ppb except in the most polluted regions, where it is overestimated by about 5 ppb. The models generally match the pattern of the observed summertime ozone enhancement, but they overestimate its magnitude in most regions. Most models capture the observed distribution of extreme episode sizes, correctly showing that about 80 % of individual extreme events occur in large-scale, multi-day episodes of more than 100 grid cells. The models also match the observed linear relationship between episode size and a measure of episode intensity, which shows increases in ozone abundance by up to 6 ppb for larger-sized episodes. Lastly, we conclude that the skill of the models evaluated here provides confidence in their projections of future surface ozone.
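    The gridding step described above, averaging station observations onto 1° × 1° cells for a commensurate model-measurement comparison, can be sketched as follows. Equal weighting of stations within a cell is an assumption here; the study's exact gridding rules may differ.

```python
from collections import defaultdict

def grid_average(stations):
    """Average station values onto 1-degree x 1-degree cells.
    `stations` is a list of (lat, lon, value) tuples; cells are indexed
    by the floored latitude/longitude of their southwest corner."""
    sums = defaultdict(lambda: [0.0, 0])
    for lat, lon, value in stations:
        cell = (int(lat // 1), int(lon // 1))  # 1-degree cell index
        sums[cell][0] += value
        sums[cell][1] += 1
    return {cell: s / n for cell, (s, n) in sums.items()}

cells = grid_average([(40.2, -105.3, 50.0), (40.8, -105.9, 60.0), (41.1, -105.2, 30.0)])
# first two stations share cell (40, -106); the third falls in (41, -106)
```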

  14. Use of North American and European air quality networks to evaluate global chemistry-climate modeling of surface ozone

    DOE PAGES

    Schnell, J. L.; Prather, M. J.; Josse, B.; ...

    2015-09-25

    Here we test the current generation of global chemistry–climate models in their ability to simulate observed, present-day surface ozone. Models are evaluated against hourly surface ozone from 4217 stations in North America and Europe that are averaged over 1° × 1° grid cells, allowing commensurate model–measurement comparison. Models are generally biased high during all hours of the day and in all regions. Most models simulate the shape of regional summertime diurnal and annual cycles well, correctly matching the timing of hourly (~ 15:00 local time (LT)) and monthly (mid-June) peak surface ozone abundance. The amplitude of these cycles is less successfully matched. The observed summertime diurnal range (~ 25 ppb) is underestimated in all regions by about 7 ppb, and the observed seasonal range (~ 21 ppb) is underestimated by about 5 ppb except in the most polluted regions, where it is overestimated by about 5 ppb. The models generally match the pattern of the observed summertime ozone enhancement, but they overestimate its magnitude in most regions. Most models capture the observed distribution of extreme episode sizes, correctly showing that about 80 % of individual extreme events occur in large-scale, multi-day episodes of more than 100 grid cells. The models also match the observed linear relationship between episode size and a measure of episode intensity, which shows increases in ozone abundance by up to 6 ppb for larger-sized episodes. Lastly, we conclude that the skill of the models evaluated here provides confidence in their projections of future surface ozone.

  15. Direct comparisons of ice cloud macro- and microphysical properties simulated by the Community Atmosphere Model version 5 with HIPPO aircraft observations

    NASA Astrophysics Data System (ADS)

    Wu, Chenglai; Liu, Xiaohong; Diao, Minghui; Zhang, Kai; Gettelman, Andrew; Lu, Zheng; Penner, Joyce E.; Lin, Zhaohui

    2017-04-01

    In this study we evaluate cloud properties simulated by the Community Atmosphere Model version 5 (CAM5) using in situ measurements from the HIAPER Pole-to-Pole Observations (HIPPO) campaign for the period of 2009 to 2011. The modeled wind and temperature are nudged towards reanalysis. Model results collocated with HIPPO flight tracks are directly compared with the observations, and model sensitivities to the representations of ice nucleation and growth are also examined. Generally, CAM5 is able to capture specific cloud systems in terms of vertical configuration and horizontal extent. In total, the model reproduces 79.8 % of observed cloud occurrences inside model grid boxes, and an even higher fraction (94.3 %) for ice clouds (T ≤ -40 °C). The missing cloud occurrences in the model are primarily ascribed to the fact that the model cannot account for the high spatial variability of observed relative humidity (RH). Furthermore, model RH biases are mostly attributed to the discrepancies in water vapor, rather than temperature. At the micro-scale of ice clouds, the model captures the observed increase of ice crystal mean sizes with temperature, albeit with smaller sizes than the observations. The model underestimates the observed ice number concentration (Ni) and ice water content (IWC) for ice crystals larger than 75 µm in diameter. Modeled IWC and Ni are more sensitive to the threshold diameter for autoconversion of cloud ice to snow (Dcs), while simulated ice crystal mean size is more sensitive to ice nucleation parameterizations than to Dcs. Our results highlight the need for further improvements to the sub-grid RH variability and ice nucleation and growth in the model.
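    The collocation of model output with flight tracks mentioned above can be sketched as a nearest-grid-box lookup. This is illustrative only; the study samples the nudged model fields at the actual flight times and locations, and the function and variable names here are assumptions.

```python
def collocate(track, grid_lats, grid_lons):
    """Map each flight-track point (lat, lon) to the index pair (i, j) of
    the nearest model grid box, using independent 1-D nearest-neighbor
    searches in latitude and longitude."""
    out = []
    for lat, lon in track:
        i = min(range(len(grid_lats)), key=lambda k: abs(grid_lats[k] - lat))
        j = min(range(len(grid_lons)), key=lambda k: abs(grid_lons[k] - lon))
        out.append((i, j))
    return out

pairs = collocate([(9.0, 4.0)], [0.0, 10.0, 20.0], [0.0, 10.0])
```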

  16. Sensitivity of Simulated Warm Rain Formation to Collision and Coalescence Efficiencies, Breakup, and Turbulence: Comparison of Two Bin-Resolved Numerical Models

    NASA Technical Reports Server (NTRS)

    Fridlind, Ann; Seifert, Axel; Ackerman, Andrew; Jensen, Eric

    2004-01-01

    Numerical models that resolve cloud particles into discrete mass size distributions on an Eulerian grid provide a uniquely powerful means of studying the closely coupled interaction of aerosols, cloud microphysics, and transport that determine cloud properties and evolution. However, such models require many experimentally derived parameterizations in order to properly represent the complex interactions of droplets within turbulent flow. Many of these parameterizations remain poorly quantified, and the numerical methods of solving the equations for temporal evolution of the mass size distribution can also vary considerably in terms of efficiency and accuracy. In this work, we compare results from two size-resolved microphysics models that employ various widely-used parameterizations and numerical solution methods for several aspects of stochastic collection.

  17. Dynamic modeling and evaluation of solid oxide fuel cell - combined heat and power system operating strategies

    NASA Astrophysics Data System (ADS)

    Nanaeda, Kimihiro; Mueller, Fabian; Brouwer, Jacob; Samuelsen, Scott

    Operating strategies of solid oxide fuel cell (SOFC) combined heat and power (CHP) systems are developed and evaluated from both utility and end-user perspectives using a fully integrated SOFC-CHP system dynamic model that resolves the physical states, thermal integration and overall efficiency of the system. The model can be modified for any SOFC-CHP system, but the present analysis is applied to a hotel in southern California based on measured electric and heating loads. Analysis indicates that combined heat and power systems can be operated to benefit both the end-users and the utility, providing more efficient electric generation as well as grid ancillary services, namely dispatchable urban power. Design and operating strategies considered in the paper include optimal sizing of the fuel cell, thermal energy storage to dispatch heat, and operating the fuel cell to provide flexible grid power. Analysis results indicate that with a 13.1% average increase in price-of-electricity (POE), the system can provide the grid with a 50% operating range of dispatchable urban power at an overall thermal efficiency of 80%. This grid-support operating mode increases the operational flexibility of the SOFC-CHP system, which may make the technology an important utility asset for accommodating the increased penetration of intermittent renewable power.

  18. High-resolution spatial modeling of daily weather elements for a catchment in the Oregon Cascade Mountains, United States

    Treesearch

    Christopher Daly; Jonathan W. Smith; Joseph I. Smith; Robert B. McKane

    2007-01-01

    High-quality daily meteorological data at high spatial resolution are essential for a variety of hydrologic and ecological modeling applications that support environmental risk assessments and decision-making. This paper describes the development, application, and assessment of methods to construct daily high-resolution (~50-m cell size) meteorological grids for the...

  19. Verification of the grid size and angular increment effects in lung stereotactic body radiation therapy using the dynamic conformal arc technique

    NASA Astrophysics Data System (ADS)

    Park, Hae-Jin; Suh, Tae-Suk; Park, Ji-Yeon; Lee, Jeong-Woo; Kim, Mi-Hwa; Oh, Young-Taek; Chun, Mison; Noh, O. Kyu; Suh, Susie

    2013-06-01

    The dosimetric effects of variable grid size and angular increment were systematically evaluated in the measured dose distributions of dynamic conformal arc therapy (DCAT) for lung stereotactic body radiation therapy (SBRT). Dose variations with different grid sizes (2, 3, and 4 mm) and angular increments (2, 4, 6, and 10°) for spherical planning target volumes (PTVs) were verified in a thorax phantom by using EBT2 films. Although the doses for identical PTVs were predicted for the different grid sizes, the dose discrepancy was evaluated using one measured dose distribution with the gamma tool because the beam was delivered in the same set-up for DCAT. The dosimetric effect of the angular increment was verified by comparing the measured dose area histograms of organs at risk (OARs) at each angular increment. When the difference in the OAR doses is higher than the uncertainty of the film dosimetry, the error is regarded as the angular increment effect in discretely calculated doses. The results show that although the 2-mm grid size provided the most elaborate dose calculation, the 4-mm grid size led to a higher gamma pass ratio because of underdosage, a steeper dose-descent gradient, and lower estimated PTV doses caused by the smoothing effect in the calculated dose distribution. An undulating dose distribution and a difference in the maximum contralateral lung dose of up to 14% were observed in dose calculation using a 10° angular increment. The DCAT can be effectively applied for an approximately spherical PTV in a relatively uniform geometry, which is less affected by inhomogeneous materials and differences in the beam path length.
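    The gamma tool referred to above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A minimal 1-D sketch is shown below with the common 3%/3 mm criteria as defaults; the function name and criteria are illustrative, not the paper's exact protocol, and clinical gamma analysis is done in 2-D or 3-D.

```python
import math

def gamma_pass_ratio(ref, evl, spacing, dd=0.03, dta=3.0):
    """1-D gamma analysis: for each reference point, find the minimum
    combined dose-difference / distance metric over the evaluated profile,
    and count points with gamma <= 1 as passing.
    `spacing` is the sample spacing in mm; `dd` is the dose criterion as a
    fraction of the reference maximum; `dta` is in mm."""
    d_max = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evl):
            dist = (j - i) * spacing                 # spatial offset, mm
            ddiff = (de - dr) / (dd * d_max)         # normalized dose difference
            best = min(best, math.sqrt((dist / dta) ** 2 + ddiff ** 2))
        if best <= 1.0:
            passed += 1
    return passed / len(ref)
```

Identical profiles should pass everywhere, giving a ratio of 1.0.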

  20. Application of a three-dimensional hydrodynamic model to the Himmerfjärden, Baltic Sea

    NASA Astrophysics Data System (ADS)

    Sokolov, Alexander

    2014-05-01

    Himmerfjärden is a coastal fjord-like bay situated in the north-western part of the Baltic Sea. The fjord has a mean depth of 17 m and a maximum depth of 52 m. The water is brackish (6 psu) with small salinity fluctuation (±2 psu). A sewage treatment plant, which serves about 300 000 people, discharges into the inner part of Himmerfjärden. This area is the subject of a long-term monitoring program. We are planning to develop a publicly available modelling system for this area, which will perform short-term forecast predictions of pertinent parameters (e.g., water-levels, currents, salinity, temperature) and disseminate them to users. A key component of the system is a three-dimensional hydrodynamic model. The open source Delft3D Flow system (http://www.deltaressystems.com/hydro) has been applied to model the Himmerfjärden area. Two different curvilinear grids were used to approximate the modelling domain (25 km × 50 km × 60 m). One grid has low horizontal resolution (cell size varies from 250 to 450 m) to perform long-term numerical experiments (modelling period of several months), while another grid has higher resolution (cell size varies from 120 to 250 m) to model short-term situations. In the vertical direction, both z-level (50 layers) and sigma-coordinate (20 layers) discretisations were used. Modelling results obtained with different horizontal resolution and vertical discretisation will be presented. This model will be a part of the operational system which provides automated integration of data streams from several information sources: meteorological forecast based on the HIRLAM model from the Finnish Meteorological Institute (https://en.ilmatieteenlaitos.fi/open-data), oceanographic forecast based on the HIROMB-BOOS Model developed within the Baltic community and provided by the MyOcean Project (http://www.myocean.eu), riverine discharge from the HYPE model provided by the Swedish Meteorological Hydrological Institute (http://vattenwebb.smhi.se/modelarea/).

  1. Computational Modeling of the Ames 11-Ft Transonic Wind Tunnel in Conjunction with IofNEWT

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Buning, Pieter G.; Erickson, Larry L.; George, Michael W. (Technical Monitor)

    1995-01-01

    Technical advances in Computational Fluid Dynamics have now made it possible to simulate complex three-dimensional internal flows about models of various sizes placed in a Transonic Wind Tunnel. TWT wall interference effects have been a source of error in predicting flight data from actual wind tunnel measured data. An advantage of such internal CFD calculations is to directly compare numerical results with the actual tunnel data for code assessment and tunnel flow analysis. A CFD capability has recently been devised for flow analysis of the NASA/Ames 11-Ft TWT facility. The primary objectives of this work are to provide a CFD tool to study the NASA/Ames 11-Ft TWT flow characteristics, to understand the slotted wall interference effects, and to validate CFD codes. A secondary objective is to integrate the internal flowfield calculations with the Pressure Sensitive Paint data, a surface pressure distribution capability in Ames' production wind tunnels. The effort has been part of the Ames IofNEWT, Integration of Numerical and Experimental Wind Tunnels project, which is aimed at providing further analytical tools for industrial application. We used the NASA/Ames OVERFLOW code to solve the thin-layer Navier-Stokes equations. Viscosity effects near the model are captured by Baldwin-Lomax or Baldwin-Barth turbulence models. The solver was modified to model the flow behavior in the vicinity of the tunnel longitudinal slotted walls. A suitable porous type wall boundary condition was coded to account for the cross-flow through the test section. Viscous flow equations were solved in generalized coordinates with a three-factor implicit central difference scheme in conjunction with the Chimera grid procedure. The internal flow field about the model and the tunnel walls was discretized by the Chimera overset grid system. 
This approach allows the application of efficient grid generation codes about individual components of the configuration; separate minor grids were developed to resolve the model and overset onto a main grid which discretizes the interior of the tunnel test section. Individual grid components are not required to have mesh boundaries joined in any special way to each other or to the main tunnel grid. Programs have been developed to rotate the model about the tunnel pivot point and rotation axis, similar to that of the tunnel turntable mechanism for adjusting the pitch of the physical model in the test section.

  2. Numerical Simulations of Two-Phase Reacting Flow in a Single-Element Lean Direct Injection (LDI) Combustor Using NCC

    NASA Technical Reports Server (NTRS)

    Liu, Nan-Suey; Shih, Tsan-Hsing; Wey, C. Thomas

    2011-01-01

    A series of numerical simulations of Jet-A spray reacting flow in a single-element lean direct injection (LDI) combustor have been conducted by using the National Combustion Code (NCC). The simulations have been carried out using the time filtered Navier-Stokes (TFNS) approach ranging from the steady Reynolds-averaged Navier-Stokes (RANS), unsteady RANS (URANS), to the dynamic flow structure simulation (DFS). The sub-grid model employed for turbulent mixing and combustion includes the well-mixed model, the linear eddy mixing (LEM) model, and the filtered mass density function (FDF/PDF) model. The starting condition of the injected liquid spray is specified via empirical droplet size correlation, and a five-species single-step global reduced mechanism is employed for fuel chemistry. All the calculations use the same grid whose resolution is of the RANS type. Comparisons of results from various models are presented.

  3. The Impact of Simulated Mesoscale Convective Systems on Global Precipitation: A Multiscale Modeling Study

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, Jiun-Dar

    2017-01-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain of several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multi-scale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. In this study, the impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE) model and Goddard MMF (GMMF). The results indicate that both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the GMMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.

  4. The impact of simulated mesoscale convective systems on global precipitation: A multiscale modeling study

    NASA Astrophysics Data System (ADS)

    Tao, Wei-Kuo; Chern, Jiun-Dar

    2017-06-01

    The importance of precipitating mesoscale convective systems (MCSs) has been quantified from TRMM precipitation radar and microwave imager retrievals. MCSs generate more than 50% of the rainfall in most tropical regions. MCSs usually have horizontal scales of a few hundred kilometers (km); therefore, a large domain of several hundred km is required for realistic simulations of MCSs in cloud-resolving models (CRMs). Almost all traditional global and climate models do not have adequate parameterizations to represent MCSs. Typical multiscale modeling frameworks (MMFs) may also lack the resolution (4 km grid spacing) and domain size (128 km) to realistically simulate MCSs. The impact of MCSs on precipitation is examined by conducting model simulations using the Goddard Cumulus Ensemble (GCE, a CRM) model and Goddard MMF that uses the GCEs as its embedded CRMs. Both models can realistically simulate MCSs with more grid points (i.e., 128 and 256) and higher resolutions (1 or 2 km) compared to those simulations with fewer grid points (i.e., 32 and 64) and low resolution (4 km). The modeling results also show the strengths of the Hadley circulations, mean zonal and regional vertical velocities, surface evaporation, and amount of surface rainfall are weaker or reduced in the Goddard MMF when using more CRM grid points and higher CRM resolution. In addition, the results indicate that large-scale surface evaporation and wind feedback are key processes for determining the surface rainfall amount in the GMMF. A sensitivity test with reduced sea surface temperatures shows both reduced surface rainfall and evaporation.

  5. The evolution of biomass-burning aerosol size distributions due to coagulation: dependence on fire and meteorological details and parameterization

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kimiko M.; Laing, James R.; Stevens, Robin G.; Jaffe, Daniel A.; Pierce, Jeffrey R.

    2016-06-01

    Biomass-burning aerosols have a significant effect on global and regional aerosol climate forcings. To model the magnitude of these effects accurately requires knowledge of the size distribution of the emitted and evolving aerosol particles. Current biomass-burning inventories do not include size distributions, and global and regional models generally assume a fixed size distribution from all biomass-burning emissions. However, biomass-burning size distributions evolve in the plume due to coagulation and net organic aerosol (OA) evaporation or formation, and the plume processes occur on spatial scales smaller than global/regional-model grid boxes. The extent of this size-distribution evolution is dependent on a variety of factors relating to the emission source and atmospheric conditions. Therefore, accurately accounting for biomass-burning aerosol size in global models requires an effective aerosol size distribution that accounts for this sub-grid evolution and can be derived from available emission-inventory and meteorological parameters. In this paper, we perform a detailed investigation of the effects of coagulation on the aerosol size distribution in biomass-burning plumes. We compare the effect of coagulation to that of OA evaporation and formation. We develop coagulation-only parameterizations for effective biomass-burning size distributions using the SAM-TOMAS large-eddy simulation plume model. For the most-sophisticated parameterization, we use the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) to build a parameterization of the aged size distribution based on the SAM-TOMAS output and seven inputs: emission median dry diameter, emission distribution modal width, mass emissions flux, fire area, mean boundary-layer wind speed, plume mixing depth, and time/distance since emission. This parameterization was tested against an independent set of SAM-TOMAS simulations and yields R2 values of 0.83 and 0.89 for Dpm and modal width, respectively. 
The size distribution is particularly sensitive to the mass emissions flux, fire area, wind speed, and time, and we provide simplified fits of the aged size distribution to just these input variables. The simplified fits were tested against 11 aged biomass-burning size distributions observed at the Mt. Bachelor Observatory in August 2015. The simple fits captured over half of the variability in observed Dpm and modal width even though the freshly emitted Dpm and modal widths were unknown. These fits may be used in global and regional aerosol models. Finally, we show that coagulation generally leads to greater changes in the particle size distribution than OA evaporation/formation does, using estimates of OA production/loss from the literature.
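    The dominance of coagulation noted above can be illustrated with the simplest possible model: monodisperse Smoluchowski coagulation, dN/dt = −K N², whose analytic solution is N(t) = N₀ / (1 + N₀ K t). The kernel value used below is an illustrative order of magnitude, not a number from the paper, and the sectional SAM-TOMAS model is far more detailed.

```python
def coagulate(n0, kcoag, dt, steps):
    """Integrate dN/dt = -K N^2 with explicit Euler steps.  Number
    concentration decays while, at fixed aerosol mass, the median
    particle diameter grows."""
    n = n0
    for _ in range(steps):
        n += -kcoag * n * n * dt
    return n

# after N0*K*t = 1 the analytic solution halves the concentration
n_final = coagulate(1e10, 1e-15, 100.0, 1000)
```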

  6. Ground-water hydrology, historical water use, and simulated ground-water flow in Cretaceous-age Coastal Plain aquifers near Charleston and Florence, South Carolina

    USGS Publications Warehouse

    Campbell, B.G.; van Heeswijk, Marijke

    1996-01-01

    A quasi-three-dimensional, transient, digital, ground-water flow model representing the Coastal Plain aquifers of South Carolina has been constructed to assist in defining the ground-water-flow system of Cretaceous aquifers near Charleston and Florence, S.C. Both cities are near the centers of large (greater than 150 feet) potentiometric declines in the Middendorf aquifer. In 1989, the diameter of the depressions was approximately 40 miles at Charleston and 15 miles at Florence. The potentiometric decline occurred between predevelopment (1926) and 1982 near Florence, and between predevelopment (1879) and 1989 near Charleston. The city of Charleston does not withdraw water from these aquifers; however, some of the small communities in the area use these aquifers for a potable water supply. The model simulates flow in and between four aquifer systems. The model has a variable-cell-size grid, and spans the Coastal Plain from the Savannah River in the southwest to the Cape Fear Arch in the northeast, and from the Fall Line in the northwest to approximately 30 miles offshore to the southeast. Model-grid cell size is 1 by 1 mile in a 48 by 48 mile area centered in Charleston, and in a 36 by 48 mile area centered in Florence. The model cell size gradually increases to a maximum of 4 by 4 miles outside the two study areas. The entire grid consists of 115 by 127 cells and covers an area of 39,936 square miles. The model was calibrated to historical water-level data. The calibration relied on three techniques: (1) matching simulated and observed potentiometric map surfaces, (2) statistical comparison of observed and simulated heads, and (3) comparison of observed and simulated well hydrographs. Systematic changes in model parameters showed that simulated heads are most sensitive to changes in aquifer transmissivity. 
Eight predictive ground-water-use scenarios were simulated for the Mount Pleasant area, which presently (1993) uses the Middendorf aquifer as a sole source of potable water. These simulations use various combinations of spatial well distribution and injection of treated wastewater effluent for existing and future Middendorf aquifer wells.
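    The variable-cell-size grid described above, a uniform fine core that coarsens outward to a capped maximum, can be sketched with a geometric-growth rule. The growth factor of 1.5 is an illustrative assumption; the report only states the 1-mile core and 4-mile maximum cell sizes.

```python
def expand_spacing(core_cells, core_size, growth, max_size):
    """Build a row of cell sizes (in miles): uniform in the core, then
    growing geometrically outward until capped at max_size."""
    sizes = [core_size] * core_cells
    s = core_size
    while s < max_size:
        s = min(s * growth, max_size)
        sizes.append(s)
    return sizes

row = expand_spacing(48, 1.0, 1.5, 4.0)  # 48-cell core, coarsening to 4-mile cells
```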

  7. Determination and representation of electric charge distributions associated with adverse weather conditions

    NASA Technical Reports Server (NTRS)

    Rompala, John T.

    1992-01-01

    Algorithms are presented for determining the size and location of electric charges which model storm systems and lightning strikes. The analysis utilizes readings from a grid of ground level field mills and geometric constraints on parameters to arrive at a representative set of charges. This set is used to generate three dimensional graphical depictions of the set as well as contour maps of the ground level electrical environment over the grid. The composite, analytic and graphic package is demonstrated and evaluated using controlled input data and archived data from a storm system. The results demonstrate the package's utility as: an operational tool in appraising adverse weather conditions; a research tool in studies of topics such as storm structure, storm dynamics, and lightning; and a tool in designing and evaluating grid systems.
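    The forward problem behind such algorithms, the ground-level vertical field produced by one model point charge over a perfectly conducting ground plane (with its image charge included), is a textbook expression. The sign convention and the variable names below are assumptions; the paper's package additionally inverts a grid of such readings for the charge set.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def ez_ground(q, xq, yq, h, xm, ym):
    """Vertical electric field (V/m) at ground-level field-mill position
    (xm, ym) due to a point charge q (C) at (xq, yq, h) over a conducting
    ground plane.  The image charge at depth -h doubles the vertical
    component:  Ez = -2 q h / (4 pi eps0 (d^2 + h^2)^(3/2))."""
    d2 = (xm - xq) ** 2 + (ym - yq) ** 2
    return -2.0 * q * h / (4.0 * math.pi * EPS0 * (d2 + h * h) ** 1.5)
```

Because each reading is linear in the charge magnitudes, a set of mill readings can be fit for the charges by linear least squares once trial positions are chosen.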

  8. Aerodynamic heating effects on wall-modeled large-eddy simulations of high-speed flows

    NASA Astrophysics Data System (ADS)

    Yang, Xiang; Urzay, Javier; Moin, Parviz

    2017-11-01

    Aerospace vehicles flying at high speeds are subject to increased wall-heating rates because of strong aerodynamic heating in the near-wall region. In wall-modeled large-eddy simulations (WMLES), this near-wall region is typically not resolved by the computational grid. As a result, the effects of aerodynamic heating need to be modeled using an LES wall model. In this investigation, WMLES of transitional and fully turbulent high-speed flows are conducted to address this issue. In particular, an equilibrium wall model is employed in high-speed turbulent Couette flows subject to different combinations of thermal boundary conditions and grid sizes, and in transitional hypersonic boundary layers interacting with incident shock waves. Specifically, the WMLES of the Couette-flow configuration demonstrate that the shear-stress and heat-flux predictions made by the wall model show only a small sensitivity to the grid resolution even in the most adverse case where aerodynamic heating prevails near the wall and generates a sharp temperature peak there. In the WMLES of shock-induced transition in boundary layers, the wall model is tested against DNS and experiments, and it is shown to capture the post-transition aerodynamic heating and the overall heat transfer rate around the shock-impingement zone. This work is supported by AFOSR.
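    The simplest algebraic form of the equilibrium wall model referenced above solves the incompressible log law for the friction velocity; this sketch omits the temperature (aerodynamic heating) equation that the study's wall model integrates, and the constants and names are conventional choices, not the paper's.

```python
import math

def friction_velocity(u, y, nu, kappa=0.41, B=5.2, iters=50):
    """Solve  u / u_tau = (1/kappa) ln(y u_tau / nu) + B  for u_tau by
    fixed-point iteration, given the LES velocity u sampled at wall
    distance y and kinematic viscosity nu."""
    u_tau = 0.05 * u  # initial guess
    for _ in range(iters):
        u_tau = u / ((1.0 / kappa) * math.log(y * u_tau / nu) + B)
    return u_tau
```

The recovered u_tau gives the modeled wall shear stress tau_w = rho * u_tau**2 fed back to the LES.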

  9. Additive Manufacturing/Diagnostics via the High Frequency Induction Heating of Metal Powders: The Determination of the Power Transfer Factor for Fine Metallic Spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios, Orlando; Radhakrishnan, Balasubramaniam; Caravias, George

    2015-03-11

    Grid Logic Inc. is developing a method for sintering and melting fine metallic powders for additive manufacturing using spatially-compact, high-frequency magnetic fields called Micro-Induction Sintering (MIS). One of the challenges in advancing MIS technology for additive manufacturing is in understanding the power transfer to the particles in a powder bed. This knowledge is important to achieving efficient power transfer, control, and selective particle heating during the MIS process needed for commercialization of the technology. The project's work provided a rigorous physics-based model for induction heating of fine spherical particles as a function of frequency and particle size. This simulation improved upon Grid Logic's earlier models and provides guidance that will make the MIS technology more effective. The project model will be incorporated into Grid Logic's power control circuit of the MIS 3D printer product and its diagnostics technology to optimize the sintering process for part quality and energy efficiency.
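    The frequency and particle-size dependence mentioned above is governed by the ratio of particle diameter to the electromagnetic skin depth. The classical skin-depth formula is standard physics; relating it to the project's full power transfer factor is beyond this sketch, and the copper example values are illustrative.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(resistivity, freq, mu_r=1.0):
    """Classical skin depth  delta = sqrt(2 rho / (mu omega))  in metres,
    for resistivity rho (ohm-m), frequency freq (Hz), and relative
    permeability mu_r.  Efficient induction coupling to a fine sphere
    requires the particle diameter to be comparable to or larger than delta."""
    omega = 2.0 * math.pi * freq
    return math.sqrt(2.0 * resistivity / (MU0 * mu_r * omega))

delta_cu = skin_depth(1.68e-8, 1e6)  # copper at 1 MHz: roughly 65 micrometres
```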

  10. Residential scene classification for gridded population sampling in developing countries using deep convolutional neural networks on satellite imagery.

    PubMed

    Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark

    2018-05-09

    Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks since it is labor intensive, prone to human error, and creates the need for simplifying assumptions during calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model to predict whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost.
Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce annotation burden with comparable quality to human analysts.

  11. Modeling erosion and sedimentation coupled with hydrological and overland flow processes at the watershed scale

    NASA Astrophysics Data System (ADS)

    Kim, Jongho; Ivanov, Valeriy Y.; Katopodes, Nikolaos D.

    2013-09-01

    A novel two-dimensional, physically based model of soil erosion and sediment transport coupled to models of hydrological and overland flow processes has been developed. The Hairsine-Rose formulation of erosion and deposition processes is used to account for size-selective sediment transport and to differentiate bed material into original and deposited soil layers. The formulation is integrated within the framework of the hydrologic and hydrodynamic model tRIBS-OFM (Triangulated-irregular-network-based Real-time Integrated Basin Simulator-Overland Flow Model). The integrated model explicitly couples the hydrodynamic formulation with the advection-dominated transport equations for sediment of multiple particle sizes. To solve the system of equations, including both the Saint-Venant and the Hairsine-Rose equations, the finite volume method is employed, based on Roe's approximate Riemann solver on an unstructured grid. The formulation yields the space-time dynamics of flow, erosion, and sediment transport at fine scale. The integrated model has been successfully verified with analytical solutions and empirical data for two benchmark cases. Sensitivity tests with respect to grid resolution and the number of particle sizes used have been carried out. The model has been validated at the catchment scale for the Lucky Hills watershed located in southeastern Arizona, USA, using 10 events for which catchment-scale streamflow and sediment yield data were available. Since the model is based on physical laws and explicitly uses multiple types of watershed information, satisfactory results were obtained. The spatial output has been analyzed, and the driving role of topography in erosion processes is discussed. The integrated formulation is expected to reduce uncertainties associated with typical parameterizations of flow and erosion processes, opening the way to more credible modeling of earth-surface processes.

  12. Application of the FUN3D Unstructured-Grid Navier-Stokes Solver to the 4th AIAA Drag Prediction Workshop Cases

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, Elizabeth M.; Hammond, Dana P.; Nielsen, Eric J.; Pirzadeh, S. Z.; Rumsey, Christopher L.

    2010-01-01

    FUN3D Navier-Stokes solutions were computed for the 4th AIAA Drag Prediction Workshop grid convergence study, downwash study, and Reynolds number study on a set of node-based mixed-element grids. All of the baseline tetrahedral grids were generated with the VGRID (developmental) advancing-layer and advancing-front grid generation software package following the gridding guidelines developed for the workshop. With maximum grid sizes exceeding 100 million nodes, the grid convergence study was particularly challenging for the node-based unstructured grid generators and flow solvers. At the time of the workshop, the super-fine grid with 105 million nodes and 600 million elements was the largest grid known to have been generated using VGRID. FUN3D Version 11.0 has a completely new pre- and post-processing paradigm that has been incorporated directly into the solver and functions entirely in a parallel, distributed memory environment. This feature allowed for practical pre-processing and solution times on the largest unstructured-grid size requested for the workshop. For the constant-lift grid convergence case, the convergence of total drag is approximately second-order on the finest three grids. The variation in total drag between the finest two grids is only 2 counts. At the finest grid levels, only small variations in wing and tail pressure distributions are seen with grid refinement. Similarly, a small wing side-of-body separation also shows little variation at the finest grid levels. Overall, the FUN3D results compare well with the structured-grid code CFL3D. The FUN3D downwash study and Reynolds number study results compare well with the range of results shown in the workshop presentations.
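The grid convergence statement ("approximately second-order on the finest three grids") can be checked with the standard observed-order-of-accuracy formula applied to three solutions at a uniform refinement ratio. A generic sketch, not FUN3D-specific:

```python
import math

def observed_order(f1, f2, f3, r):
    """Observed order of convergence from solutions on three
    systematically refined grids (f1 finest, f3 coarsest) with uniform
    refinement ratio r: p = ln((f3 - f2) / (f2 - f1)) / ln(r)."""
    return math.log((f3 - f2) / (f2 - f1)) / math.log(r)
```

For a second-order scheme the successive differences shrink by a factor r**2, so p evaluates to about 2; drag-count variations between the two finest grids then bound the remaining discretization error.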

  13. Getting the current out

    NASA Astrophysics Data System (ADS)

    Burger, D. R.

    1983-11-01

    Progress of a photovoltaic (PV) device from a research concept to a competitive power-generation source requires an increasing concern with current collection. The initial metallization focus is usually on contact resistance, since a good ohmic contact is desirable for accurate device characterization measurements. As the device grows in size, sheet resistance losses become important and a metal grid is usually added to reduce the effective sheet resistance. Later, as size and conversion efficiency continue to increase, grid-line resistance and cell shadowing must be considered simultaneously, because grid-line resistance is inversely related to total grid-line area and cell shadowing is directly related. A PV cell grid design must consider the five power-loss phenomena mentioned above: sheet resistance, contact resistance, grid resistance, bus-bar resistance and cell shadowing. Although cost, reliability and usage are important factors in deciding upon the best metallization system, this paper will focus only upon grid-line design and substrate material problems for flat-plate solar arrays.

  14. The Need of Nested Grids for Aerial and Satellite Images and Digital Elevation Models

    NASA Astrophysics Data System (ADS)

    Villa, G.; Mas, S.; Fernández-Villarino, X.; Martínez-Luceño, J.; Ojeda, J. C.; Pérez-Martín, B.; Tejeiro, J. A.; García-González, C.; López-Romero, E.; Soteres, C.

    2016-06-01

    Usual workflows for production, archiving, dissemination and use of Earth observation images (both aerial and from remote sensing satellites) pose big interoperability problems. For example, non-alignment of pixels at the different levels of the pyramids makes it impossible to overlay, compare and mosaic different orthoimages without resampling them, and multiple resampling and compression-decompression cycles must be applied. These problems cause great inefficiencies in production, dissemination through web services and processing in "Big Data" environments. Most of them can be avoided, or at least greatly reduced, with the use of a common "nested grid" for multiresolution production, archiving, dissemination and exploitation of orthoimagery, digital elevation models and other raster data. "Nested grids" are space allocation schemas that organize image footprints, pixel sizes and pixel positions at all pyramid levels, in order to achieve coherent and consistent multiresolution coverage of a whole working area. A "nested grid" must be complemented by an appropriate "tiling schema", ideally based on the "quad-tree" concept. In recent years a "de facto standard" grid and tiling schema has emerged and has been adopted by virtually all major geospatial data providers. It has also been adopted by OGC in its "WMTS Simple Profile" standard. In this paper we explain how the adequate use of this tiling schema as a common nested grid for orthoimagery, DEMs and other types of raster data constitutes the most practical solution to most of the interoperability problems of these types of data.
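The quad-tree tiling schema described above doubles the number of tiles per axis at each zoom level, so the pixel size halves and every tile has exactly one parent at the coarser level. A minimal sketch of that bookkeeping, assuming the common 256-pixel Web Mercator tile grid (the "de facto standard" the paper refers to):

```python
def tile_count(zoom):
    """Tiles per axis at a given zoom level (quad-tree: doubles per level)."""
    return 2 ** zoom

def parent_tile(x, y, zoom):
    """Quad-tree parent of tile (x, y) at the next-coarser level."""
    return x // 2, y // 2, zoom - 1

def ground_resolution(zoom, tile_px=256, earth_circumference=40075016.686):
    """Metres per pixel at the equator for a Web-Mercator-style grid."""
    return earth_circumference / (tile_px * 2 ** zoom)
```

Because pixel positions at level z - 1 coincide with every second pixel at level z, overlaying or mosaicking images produced on this grid needs no resampling, which is exactly the interoperability gain argued for in the paper.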

  15. Effect of elevation resolution on evapotranspiration simulations using MODFLOW.

    PubMed

    Kambhammettu, B V N P; Schmid, Wolfgang; King, James P; Creel, Bobby J

    2012-01-01

    Surface elevations represented in MODFLOW head-dependent packages are usually derived from digital elevation models (DEMs) that are available at much higher resolution. Conventional grid refinement techniques for simulating the model at DEM resolution increase computational time and input file size, and in many cases are not feasible for regional applications. This research aims at utilizing the increasingly available high-resolution DEMs for effective simulation of evapotranspiration (ET) in MODFLOW as an alternative to grid refinement techniques. The source code of the evapotranspiration package is modified to account for the effect of variability in elevation data on ET estimates, for a fixed MODFLOW grid resolution and a range of DEM resolutions. Piezometric head at each DEM cell location is corrected by considering the gradient along the row and column directions. Applicability of the research is tested for the lower Rio Grande (LRG) Basin in southern New Mexico. The DEM at 10 m resolution is aggregated to resampled DEM grid resolutions that are integer multiples of the MODFLOW grid resolution. Cumulative outflows and ET rates are compared at different coarse-resolution grids. The analysis shows that variability in depth to groundwater within the MODFLOW cell is a major contributor to ET outflows in shallow groundwater regions. DEM aggregation methods for the LRG Basin resulted in decreased volumetric outflow due to a smoothing error, which lowered the position of the water table to a level below the extinction depth. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
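The approach can be sketched as follows: head from a coarse MODFLOW cell is projected to each DEM sub-cell using the row and column head gradients, and ET is then evaluated with the usual linear MODFLOW-style ramp between the land surface and the extinction depth. This is an illustrative reconstruction, not the authors' modified source code:

```python
def head_at_dem_cell(h_center, dh_dx, dh_dy, dx, dy):
    """Project piezometric head from the MODFLOW cell centre to a DEM
    sub-cell at offset (dx, dy), using row/column head gradients."""
    return h_center + dh_dx * dx + dh_dy * dy

def et_rate(surface_elev, head, et_max, extinction_depth):
    """Linear MODFLOW-style EVT ramp: full et_max when the water table is
    at the surface, tapering to zero at the extinction depth."""
    depth = surface_elev - head
    if depth <= 0.0:
        return et_max
    if depth >= extinction_depth:
        return 0.0
    return et_max * (1.0 - depth / extinction_depth)
```

Evaluating the ramp per DEM sub-cell rather than once per MODFLOW cell is what captures the within-cell depth-to-groundwater variability that the abstract identifies as the main contributor to ET outflows.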

  16. Unsteady-flow-field predictions for oscillating cascades

    NASA Technical Reports Server (NTRS)

    Huff, Dennis L.

    1991-01-01

    The unsteady flow field around an oscillating cascade of flat plates with zero stagger was studied by using a time-marching Euler code. This case has an exact solution based on linear theory and served as a model problem for studying pressure wave propagation in the numerical solution. The importance of using proper unsteady boundary conditions, grid resolution, and time step size was shown for a moderate reduced frequency. Results show that an approximate nonreflecting boundary condition based on linear theory does a good job of minimizing reflections from the inflow and outflow boundaries and allows the boundaries to be placed closer to the airfoils than when reflective boundaries are used. Stretching the boundary to dampen the unsteady waves is another way to minimize reflections. Grid clustering near the plates captures the unsteady flow field better than uniform grids as long as the Courant-Friedrichs-Lewy (CFL) number is less than 1 for a sufficient portion of the grid. Finally, a solution based on an optimization of grid, CFL number, and boundary conditions shows good agreement with linear theory.

  17. Land Cover Change Detection using Neural Network and Grid Cells Techniques

    NASA Astrophysics Data System (ADS)

    Bagan, H.; Li, Z.; Tangud, T.; Yamagata, Y.

    2017-12-01

    In recent years, many advanced neural network methods have been applied to land cover classification, each with its own strengths and limitations. Among them, the self-organizing map (SOM) neural network has been used to solve remote sensing data classification problems and has shown potential for efficient classification of remote sensing data. In SOM, both the distribution and the topology of features of the input layer are identified by using an unsupervised, competitive, neighborhood learning method. The high-dimensional data are then projected onto a low-dimensional map (competitive layer), usually a two-dimensional map. The neurons (nodes) in the competitive layer are arranged by topological order in the input space. Spatio-temporal analyses of land cover change based on grid cells have demonstrated that gridded data are useful for obtaining spatial and temporal information about areas that are smaller than municipal scale and are uniform in size. Analysis based on grid cells has many advantages: grid cells all have the same size, allowing for easy comparison; grids integrate easily with other scientific data; and grids are stable over time, which facilitates the modelling and analysis of very large multivariate spatial data sets. This study chose time-series MODIS and Landsat images as data sources and applied the SOM neural network method to identify land use in the Inner Mongolia Autonomous Region of China. The results were then integrated into grid cells to produce dynamic change maps. Land cover change detected with MODIS data showed that urban area in Inner Mongolia increased more than fivefold in the past 15 years, along with the growth of mining areas. Geographically, the most obvious urban expansion occurred in Ordos, in southwest Inner Mongolia. The results using Landsat images from 1986 to 2014 in the northeastern part of Inner Mongolia show grassland degradation over that period. 
Grid-cell-based spatial correlation analysis also confirmed a strong negative correlation between grassland and barren land, indicating that grassland degradation in this region is due to urbanization and coal mining activities over the past three decades.
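The SOM procedure described above (competitive best-matching-unit search plus neighborhood learning) can be sketched in a few lines. This toy implementation, with a linearly decaying learning rate and a Gaussian neighborhood, is illustrative only and is not the classifier used in the study:

```python
import math
import random

def train_som(data, rows, cols, dim, epochs=20, lr0=0.5, sigma0=None, seed=0):
    """Minimal self-organizing map: for each sample, find the
    best-matching unit (BMU), then pull every node toward the sample
    with a Gaussian neighborhood weight centred on the BMU."""
    rng = random.Random(seed)
    w = [[rng.random() for _ in range(dim)] for _ in range(rows * cols)]
    sigma0 = sigma0 or max(rows, cols) / 2.0
    t_max = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            frac = t / t_max
            lr = lr0 * (1.0 - frac)               # decaying learning rate
            sigma = sigma0 * (1.0 - frac) + 1e-9  # shrinking neighborhood
            bmu = min(range(len(w)),
                      key=lambda i: sum((w[i][k] - x[k]) ** 2 for k in range(dim)))
            br, bc = divmod(bmu, cols)
            for i in range(len(w)):
                r, c = divmod(i, cols)
                d2 = (r - br) ** 2 + (c - bc) ** 2  # map-space distance
                h = math.exp(-d2 / (2.0 * sigma * sigma))
                for k in range(dim):
                    w[i][k] += lr * h * (x[k] - w[i][k])
            t += 1
    return w
```

After training, assigning each pixel's feature vector to its BMU gives the cluster labels that would then be aggregated into grid cells for change analysis.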

  18. Sparse grid techniques for particle-in-cell schemes

    NASA Astrophysics Data System (ADS)

    Ricketson, L. F.; Cerfon, A. J.

    2017-02-01

    We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called ‘combination technique’ from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme’s efficiency, both in terms of computation time and memory usage.
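The combination technique the authors invoke assembles a sparse-grid solution from anisotropic component grids: in 2-D, solutions on grids with levels i + j = n are added and those with i + j = n - 1 subtracted. A sketch of that bookkeeping, and of the point-count savings that allow larger cells without the cost of a full fine grid (generic sparse-grid arithmetic, not the PIC scheme itself):

```python
def combination_grids(n):
    """Component grids (i, j) and coefficients for the 2-D combination
    technique: +1 for levels with i + j = n, -1 for i + j = n - 1."""
    grids = [((i, n - i), +1) for i in range(n + 1)]
    grids += [((i, n - 1 - i), -1) for i in range(n)]
    return grids

def sparse_points(n):
    """Total points over all component grids, each (2^i+1) by (2^j+1)."""
    return sum((2 ** i + 1) * (2 ** j + 1) for (i, j), _ in combination_grids(n))

def full_points(n):
    """Points in the equivalent full tensor grid at level n."""
    return (2 ** n + 1) ** 2
```

Each component grid is coarse in at least one direction, so the combined cost grows like O(N log N) instead of O(N^2), which is the source of the reduced statistical noise per particle reported in the paper.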

  19. Hydrogeologic unit flow characterization using transition probability geostatistics.

    PubMed

    Jones, Norman L; Walker, Justin R; Carle, Steven F

    2005-01-01

    This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has some advantages over traditional indicator kriging methods including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids and/or grids with nonuniform cell thicknesses.
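Transition probability geostatistics rests on Markov-chain models of facies succession. As a heavily simplified 1-D sketch (real transition-probability simulation is three-dimensional and conditioned on borehole data), a facies sequence can be drawn from a transition matrix like this:

```python
import random

def simulate_facies(trans, n, start, seed=0):
    """Draw a 1-D facies sequence from a Markov chain.
    trans[a][b] = probability that facies a is followed by facies b."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(n - 1):
        probs = trans[seq[-1]]
        u, cum = rng.random(), 0.0
        for state, p in probs.items():
            cum += p
            if u <= cum:
                seq.append(state)
                break
        else:
            seq.append(state)  # guard against floating-point round-off
    return seq
```

Asymmetric transition probabilities are what let the method reproduce juxtapositional tendencies such as fining-upward sequences, which plain indicator kriging cannot express.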

  20. Optimizing Storage and Renewable Energy Systems with REopt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elgqvist, Emma M.; Anderson, Katherine H.; Cutler, Dylan S.

    Under the right conditions, behind-the-meter (BTM) storage combined with renewable energy (RE) technologies can provide both cost savings and resiliency. Storage economics depend not only on technology costs and avoided utility rates, but also on how the technology is operated. REopt, a model developed at NREL, can be used to determine the optimal size and dispatch strategy for BTM or off-grid applications. This poster gives an overview of three applications of REopt: Optimizing BTM Storage and RE to Extend Probability of Surviving Outage, Optimizing Off-Grid Energy System Operation, and Optimizing Residential BTM Solar 'Plus'.

  1. High-performance parallel approaches for three-dimensional light detection and ranging point clouds gridding

    NASA Astrophysics Data System (ADS)

    Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon

    2017-01-01

    With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration achieved by all three approaches compared to a C-implemented sequential-processing method. In addition, we discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations, to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
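The gridding task parallelizes naturally because each output cell is independent of the others. The sketch below distributes grid rows across threads, using inverse-distance weighting as a simple stand-in for the kriging interpolator benchmarked in the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def idw(px, py, points, power=2.0):
    """Inverse-distance-weighted estimate at (px, py); a stand-in for
    the (costlier) kriging interpolator, which also needs a variogram
    model and a linear solve per estimate."""
    num = den = 0.0
    for x, y, z in points:
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0.0:
            return z  # exactly on a data point
        w = d2 ** (-power / 2.0)
        num += w * z
        den += w
    return num / den

def grid_rows(points, nx, ny, workers=4):
    """Interpolate an nx-by-ny grid over the unit square, one row per task."""
    def row(j):
        y = j / (ny - 1)
        return [idw(i / (nx - 1), y, points) for i in range(nx)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(row, range(ny)))
```

The same row-wise (or tile-wise) decomposition is what an MPI, MapReduce, or GPGPU version would distribute; only the communication machinery differs.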

  2. Application of a predator-prey overlap metric to determine the impact of sub-grid scale feeding dynamics on ecosystem productivity

    NASA Astrophysics Data System (ADS)

    Greer, A. T.; Woodson, C. B.

    2016-02-01

    Because of the complexity and extremely large size of marine ecosystems, research attention has focused strongly on modelling the system through space and time to elucidate the processes driving ecosystem state. One of the major weaknesses of current modelling approaches is the reliance on a particular grid cell size (usually tens of kilometres in the horizontal, with water-column means in the vertical) to capture the relevant processes, even though empirical research has shown that marine systems are highly structured on fine scales, and this structure can persist over relatively long time scales (days to weeks). Fine-scale features can have a strong influence on the predator-prey interactions driving trophic transfer. Here we apply a statistic, the AB ratio, which quantifies the increase in predator production due to fine-scale predator-prey overlap in a manner that is computationally feasible for larger-scale models. We calculated the AB ratio for predator-prey distributions throughout the scientific literature, as well as for data obtained with a towed plankton imaging system, demonstrating that averaging across a typical model grid cell neglects the fine-scale predator-prey overlap that is an essential component of ecosystem productivity. Organisms from a range of trophic levels and oceanographic regions tended to overlap with their prey in both the horizontal and vertical dimensions. When predator swimming over a diel cycle was incorporated, the amount of production indicated by the AB ratio increased substantially. For the plankton image data, the AB ratio was higher with increasing sampling resolution, especially when prey were highly aggregated. We recommend that ecosystem models incorporate more fine-scale information, both to more accurately capture trophic transfer processes and to capitalize on the increasing sampling resolution and data volume of empirical studies.
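The core idea behind an overlap statistic is that encounter rates depend on the mean of the product of predator and prey concentrations, not the product of their means. The ratio below captures that covariance effect; it is an illustrative stand-in, since the exact definition of the paper's AB ratio is not reproduced in the abstract:

```python
def overlap_ratio(pred, prey):
    """Ratio of mean(pred * prey) to mean(pred) * mean(prey), computed
    over fine-scale samples within one coarse model grid cell. Values
    above 1 indicate that fine-scale co-aggregation boosts encounter
    rates beyond what the grid-cell averages alone would predict."""
    n = len(pred)
    mean_pred = sum(pred) / n
    mean_prey = sum(prey) / n
    mean_product = sum(p * z for p, z in zip(pred, prey)) / n
    return mean_product / (mean_pred * mean_prey)
```

A grid-cell-mean model implicitly assumes this ratio equals 1; the paper's point is that observed distributions routinely exceed that, so coarse averaging underestimates trophic transfer.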

  3. About rats and jackfruit trees: modeling the carrying capacity of a Brazilian Atlantic Forest spiny-rat Trinomys dimidiatus (Günther, 1877) - Rodentia, Echimyidae - population with varying jackfruit tree (Artocarpus heterophyllus L.) abundances.

    PubMed

    Mello, J H F; Moulton, T P; Raíces, D S L; Bergallo, H G

    2015-01-01

    We carried out a six-year study aimed at evaluating if and how a Brazilian Atlantic Forest small mammal community responded to the presence of the invasive exotic species Artocarpus heterophyllus, the jackfruit tree. In the surroundings of Vila Dois Rios, Ilha Grande, RJ, 18 grids were established, 10 where the jackfruit tree was present and eight where it was absent. Previous results indicated that the composition and abundance of this small mammal community were altered by the presence and density of A. heterophyllus. One observed effect was the increased population size of the spiny-rat Trinomys dimidiatus within the grids where jackfruit trees were present. We therefore developed a mathematical model for this species based on the Verhulst-Pearl logistic equation. Our objectives were i) to calculate the carrying capacity K based on real data on the species involved and the environment; ii) to propose and evaluate a mathematical model estimating the population size of T. dimidiatus from the monthly seed production of the jackfruit tree, Artocarpus heterophyllus; and iii) to determine the minimum jackfruit tree seed production needed to maintain at least two T. dimidiatus individuals in one study grid. Our results indicated that the carrying capacity K values predicted by the model were significantly correlated with real data. The best fit was found for 20-35% energy transfer efficiency between trophic levels. Within the scope of the assumed premises, our model showed itself to be an adequate simulator for Trinomys dimidiatus populations where the invasive jackfruit tree is present.
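The modelling approach can be illustrated with the Verhulst-Pearl logistic equation, with the carrying capacity K tied to monthly seed production. All parameter names and values below are hypothetical, chosen only to show the structure, not taken from the study:

```python
def logistic_step(N, r, K):
    """One discrete Verhulst-Pearl update: dN/dt = r * N * (1 - N / K)."""
    return N + r * N * (1.0 - N / K)

def carrying_capacity(seed_kg_per_month, energy_per_kg, rat_monthly_need,
                      efficiency=0.275):
    """Hypothetical K: number of rats supported by monthly jackfruit seed
    energy at a given trophic transfer efficiency (here the midpoint of
    the 20-35% range reported as the best fit)."""
    return seed_kg_per_month * energy_per_kg * efficiency / rat_monthly_need
```

Inverting carrying_capacity for K = 2 gives the minimum seed production of objective iii); iterating logistic_step with a seed-driven K reproduces the population trajectories compared against field data.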

  4. Dose evaluation of Grid Therapy using a 6 MV flattening filter-free (FFF) photon beam: A Monte Carlo study.

    PubMed

    Martínez-Rovira, Immaculada; Puxeu-Vaqué, Josep; Prezado, Yolanda

    2017-10-01

    Spatially fractionated radiotherapy is a strategy to overcome the main limitation of radiotherapy, i.e., the restrained normal tissue tolerances. A well-known example is Grid Therapy, which is currently performed at some hospitals using megavoltage photon beams delivered by Linacs. Grid Therapy has been successfully used in the management of bulky abdominal tumors with low toxicity. The aim of this work was to evaluate whether an improvement in therapeutic index in Grid Therapy can be obtained by implementing it on a flattening filter-free (FFF) Linac. The rationale is that the removal of the flattening filter shifts the beam energy spectrum towards lower energies and increases the photon fluence. Lower energies result in a reduction of lateral scattering and thus in higher peak-to-valley dose ratios (PVDR) in normal tissues. In addition, the gain in fluence might allow using smaller beams, leading to a more efficient exploitation of dose-volume effects and, consequently, better normal tissue sparing. Monte Carlo simulations were used to evaluate realistic dose distributions considering a 6 MV FFF photon beam from a standard medical Linac and a cerrobend mechanical collimator in different configurations: grid sizes of 0.3 × 0.3 cm², 0.5 × 0.5 cm², and 1 × 1 cm², with corresponding center-to-center (ctc) distances of 0.6, 1, and 2 cm, respectively (total field size of 10 × 10 cm²). As figures of merit, peak doses in depth, PVDR, output factors (OF), and penumbra values were assessed. Dose at the entrance is slightly higher than in conventional Grid Therapy. However, this is compensated by the large PVDR obtained at the entrance, reaching a maximum of 35 for a grid size of 1 × 1 cm². Indeed, this grid size leads to very high PVDR values at all depths (≥ 10), which are much higher than in standard Grid Therapy. This may be beneficial for normal tissues but detrimental for tumor control, where a lower PVDR might be required. 
In that case, higher valley doses in the tumor could be achieved by using an interlaced approach and/or adapting the ctc distance. The smallest grid size (0.3 × 0.3 cm²) leads to low PVDR at all depths, comparable to standard Grid Therapy. However, the use of very thin beams might increase the normal tissue tolerances with respect to the grid size commonly used (1 × 1 cm²). The gain in fluence provided by the FFF beam implies that the substantial OF reduction (0.6) will not increase treatment time. Finally, the intermediate configuration (0.5 × 0.5 cm²) provides high PVDR in the first 5 cm, and PVDR comparable to previous Grid Therapy works at depth. Therefore, this configuration might allow increasing the normal tissue tolerances with respect to Grid Therapy thanks to the higher PVDR and thinner beams, while similar tumor control could be expected. The implementation of Grid Therapy with an FFF photon beam from a medical Linac might lead to an improvement of the therapeutic index. Among the cases evaluated, a grid size of 0.5 × 0.5 cm² (1-cm ctc) is the most advantageous configuration from the physics point of view. Radiobiological experiments are needed to fully explore this new avenue and to confirm our results. © 2017 American Association of Physicists in Medicine.
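The central figure of merit, the peak-to-valley dose ratio (PVDR), can be computed from a lateral dose profile by averaging the local maxima and minima. A minimal sketch (the study derives its profiles from Monte Carlo simulation; this only shows the ratio itself):

```python
def peaks_and_valleys(profile):
    """Local maxima and minima of a 1-D lateral dose profile
    (endpoints are excluded, since they have only one neighbour)."""
    peaks, valleys = [], []
    for i in range(1, len(profile) - 1):
        if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]:
            peaks.append(profile[i])
        elif profile[i] < profile[i - 1] and profile[i] < profile[i + 1]:
            valleys.append(profile[i])
    return peaks, valleys

def pvdr(profile):
    """Peak-to-valley dose ratio: mean peak dose over mean valley dose."""
    peaks, valleys = peaks_and_valleys(profile)
    return (sum(peaks) / len(peaks)) / (sum(valleys) / len(valleys))
```

Evaluating pvdr on profiles extracted at successive depths gives the depth dependence (e.g. the reported maximum of 35 at the entrance for the 1 × 1 cm² grid).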

  5. Reissner-Mindlin Legendre Spectral Finite Elements with Mixed Reduced Quadrature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brito, K. D.; Sprague, M. A.

    2012-10-01

    Legendre spectral finite elements (LSFEs) are examined through numerical experiments for static and dynamic Reissner-Mindlin plate bending, and a mixed-quadrature scheme is proposed. LSFEs are high-order Lagrangian-interpolant finite elements with nodes located at the Gauss-Lobatto-Legendre quadrature points. Solutions on unstructured meshes are examined in terms of accuracy as a function of the number of model nodes and total operations. While nodal-quadrature LSFEs have been shown elsewhere to be free of shear locking on structured grids, locking is demonstrated here on unstructured grids. LSFEs with mixed quadrature are, however, locking free and are significantly more accurate than low-order finite elements for a given model size or total computation time.

  6. Use of North American and European Air Quality Networks to Evaluate Global Chemistry-Climate Modeling of Surface Ozone

    NASA Technical Reports Server (NTRS)

    Schnell, J. L.; Prather, M. J.; Josse, B.; Naik, V.; Horowitz, L. W.; Cameron-Smith, P.; Bergmann, D.; Zeng, G.; Plummer, D. A.; Sudo, K.; hide

    2015-01-01

    We test the current generation of global chemistry-climate models in their ability to simulate observed, present-day surface ozone. Models are evaluated against hourly surface ozone from 4217 stations in North America and Europe, averaged over 1 degree by 1 degree grid cells to allow commensurate model-measurement comparison. Models are generally biased high during all hours of the day and in all regions. Most models simulate the shape of regional summertime diurnal and annual cycles well, correctly matching the timing of hourly (approximately 15:00 local time (LT)) and monthly (mid-June) peak surface ozone abundance. The amplitude of these cycles is less successfully matched. The observed summertime diurnal range (25 parts per billion (ppb)) is underestimated in all regions by about 7 ppb, and the observed seasonal range (approximately 21 ppb) is underestimated by about 5 ppb except in the most polluted regions, where it is overestimated by about 5 ppb. The models generally match the pattern of the observed summertime ozone enhancement, but they overestimate its magnitude in most regions. Most models capture the observed distribution of extreme episode sizes, correctly showing that about 80 percent of individual extreme events occur in large-scale, multi-day episodes of more than 100 grid cells. The models also match the observed linear relationship between episode size and a measure of episode intensity, which shows increases in ozone abundance of up to 6 ppb for larger-sized episodes. We conclude that the skill of the models evaluated here provides confidence in their projections of future surface ozone.
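The commensurate-comparison step (averaging station observations onto 1 degree by 1 degree cells before comparing with gridded model output) can be sketched as:

```python
import math
from collections import defaultdict

def grid_average(stations):
    """Average point observations over 1-degree-by-1-degree cells.
    stations: iterable of (lat, lon, value). Returns a dict mapping
    (floor(lat), floor(lon)) cell keys to the cell-mean value, so that
    model output and observations are compared on the same footing."""
    cells = defaultdict(list)
    for lat, lon, value in stations:
        cells[(math.floor(lat), math.floor(lon))].append(value)
    return {cell: sum(v) / len(v) for cell, v in cells.items()}
```

Applying the same hourly averaging to both datasets is what removes the representativeness mismatch between point measurements and grid-cell model values.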

  7. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    NASA Astrophysics Data System (ADS)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

    Matching soil grid unit resolution with polygon unit map scale is important for minimizing uncertainty in regional soil organic carbon (SOC) pool simulation, since both strongly influence that uncertainty. A series of soil grid units at varying cell sizes were derived from soil polygon units at the six map scales of 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), respectively, in the Tai Lake region of China. Both formats of soil units were used for regional SOC pool simulation with the DeNitrification-DeComposition (DNDC) process-based model, with runs spanning 1982 to 2000 at each of the six map scales. Four indices, soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were computed from all these soil polygon and grid units, respectively. Relative to the four index values (IV) from the parent polygon units, the variation of each index value (VIV, %) from the grid units was used to assess dataset accuracy and redundancy, which reflect uncertainty in the simulation of SOC. Optimal soil grid unit resolutions matching the soil polygon unit map scales were derived and suggested for DNDC simulation of the regional SOC pool. With the optimal raster resolution, the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without any redundancy, when VIV < 1% for all four indices is taken as the assessment criterion. A quadratic regression model, y = -8.0 × 10⁻⁶x² + 0.228x + 0.211 (R² = 0.9994, p < 0.05), describes the relationship between optimal soil grid unit resolution (y, km) and soil polygon unit map scale (1:x). This relationship may guide grid partitioning in regions where investigation and simulation of SOC pool dynamics at a certain map scale is the focus.
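The reported quadratic fit can be evaluated directly to suggest an optimal grid resolution for a given polygon map scale. The units of x follow the abstract's 1:x notation, which the abstract leaves implicit; the function below simply evaluates the published coefficients:

```python
def optimal_resolution(x):
    """Evaluate the abstract's fitted relation between soil polygon map
    scale (denominator x, units as reported) and optimal grid resolution
    y in km: y = -8.0e-6 * x**2 + 0.228 * x + 0.211."""
    return -8.0e-6 * x ** 2 + 0.228 * x + 0.211
```

With such an inverted-parabola fit, the resolution recommendation grows nearly linearly at fine scales and flattens toward the coarsest map scales.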

  8. A calibration methodology of QCT BMD for human vertebral body with registered micro-CT images.

    PubMed

    Dall'Ara, E; Varga, P; Pahr, D; Zysset, P

    2011-05-01

    The accuracy of QCT-based homogenized finite element (FE) models is strongly related to the accuracy of the prediction of bone volume fraction (BV/TV) from bone mineral density (BMD). The goal of this study was to establish a calibration methodology relating the BMD computed with QCT to the BV/TV computed with micro-CT (microCT) over a wide range of bone mineral densities, and to investigate the effect of the size of the region in which BMD and BV/TV are computed. Six human vertebral bodies were dissected from the spines of six donors and scanned submerged in water with QCT (voxel size: 0.391 × 0.391 × 0.450 mm³) and microCT (isotropic voxel size: 0.018³ mm³). The microCT images were segmented with a single-level threshold. Afterward, QCT-grayscale, microCT-grayscale, and microCT-segmented images were registered. Two isotropic grids of 1.230 mm (small) and 4.920 mm (large) were superimposed on every image, and QCT(BMD) was compared with both microCT(BMD) and microCT(BV/TV) for each grid cell. The ranges of QCT(BMD) for large and small regions were 9-559 mg/cm³ and -90 to 1006 mg/cm³, respectively. QCT(BMD) was found to overestimate microCT(BMD). No significant differences were found between the QCT(BMD)-microCT(BV/TV) regression parameters of the two grid sizes. However, R² was higher, and the standard error of the estimate (SEE) lower, for large regions than for small regions. For the pooled data, an extrapolated QCT(BMD) value of 1062 mg/cm³ was found to correspond to 100% microCT(BV/TV). A calibration method was defined to evaluate BV/TV from QCT(BMD) values for cortical and trabecular bone in vitro. The QCT(BMD)-microCT(BV/TV) calibration was found to depend on the scanned vertebral section but not on the size of the regions. However, the higher SEE computed for small regions suggests that the deleterious effect of QCT image noise on FE modelling increases with decreasing voxel size.
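    A toy version of such a calibration can be sketched as follows. This is not the paper's fitted regression (whose parameters are not given in the abstract); it uses only the single anchor point reported there, a QCT(BMD) of 1062 mg/cm³ corresponding to 100% BV/TV, and assumes a linear map with zero intercept, clamped to the physical range:

    ```python
    # Illustrative BMD -> BV/TV calibration anchored at the abstract's
    # extrapolated value of 1062 mg/cm^3 for 100% BV/TV. The zero intercept
    # and clamping are our simplifying assumptions.

    def bvtv_from_qct_bmd(bmd_mg_cm3: float, bmd_at_full: float = 1062.0) -> float:
        """Return BV/TV as a fraction in [0, 1]."""
        return max(0.0, min(1.0, bmd_mg_cm3 / bmd_at_full))
    ```

    Negative BMD readings (which the small-region range in the abstract shows can occur due to noise) clamp to zero rather than producing unphysical bone fractions.
    
    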

  9. Self-organized Segregation on the Grid

    NASA Astrophysics Data System (ADS)

    Omidvar, Hamed; Franceschetti, Massimo

    2018-02-01

    We consider an agent-based model with exponentially distributed waiting times in which two types of agents interact locally over a graph and, based on this interaction and on the value of a common intolerance threshold τ, decide whether to change their types. This is equivalent to a zero-temperature Ising model with Glauber dynamics, an asynchronous cellular automaton with extended Moore neighborhoods, or a Schelling model of self-organized segregation in an open system, and has applications in the analysis of social and biological networks and spin glass systems. Some rigorous results were recently obtained in the theoretical computer science literature, and this work provides several extensions. We enlarge the intolerance interval leading to the expected formation of large segregated regions of agents of a single type from the known size ɛ > 0 to size ≈ 0.134. Namely, we show that for 0.433 < τ < 1/2 (and by symmetry 1/2 < τ < 0.567), the expected size of the largest segregated region containing an arbitrary agent is exponential in the size of the neighborhood. We further extend the interval leading to expected large segregated regions to size ≈ 0.312 by considering "almost segregated" regions, namely regions where the ratio of the number of agents of one type to the number of agents of the other type vanishes quickly as the size of the neighborhood grows. In this case, we show that for 0.344 < τ ≤ 0.433 (and by symmetry for 0.567 ≤ τ < 0.656) the expected size of the largest almost segregated region containing an arbitrary agent is exponential in the size of the neighborhood. This behavior is reminiscent of supercritical percolation, where small clusters of empty sites can be observed within any sufficiently large region of the occupied percolation cluster. 
The exponential bounds that we provide also imply that complete segregation, where agents of a single type cover the whole grid, does not occur with high probability for p=1/2 and the range of intolerance considered.

  10. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) arise in business, engineering, resource exploitation, and even the medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs with fixed-cost criteria. Preliminary results show that the ILP model is efficient for small to moderate-sized problems, but it becomes intractable on large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which significantly reduces solution runtimes. To benchmark the proposed heuristic, its results are compared with the exact ILP solutions. The experimental results show that the proposed method significantly outperforms the exact method in runtime, with minimal (and in most cases no) loss of optimality.
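    The decomposition idea can be illustrated with a toy sketch. This is our own simplification, not the authors' ILP formulation or heuristic: partition the demand grid into tiles and place one facility per tile, minimizing an assumed cost of facility fixed cost plus total Manhattan distance, trading global optimality for runtime:

    ```python
    from collections import defaultdict

    # Toy decomposition heuristic for a grid-based location problem.
    # All names, the tiling rule, and the cost structure are illustrative
    # assumptions; the paper's actual ILP and heuristic are not reproduced here.

    def tile_of(cell, tile_size):
        return (cell[0] // tile_size, cell[1] // tile_size)

    def decompose_and_solve(demands, tile_size, fixed_cost):
        """demands: list of (row, col) cells; fixed_cost: dict cell -> cost.
        Returns one chosen facility cell per tile."""
        tiles = defaultdict(list)
        for d in demands:
            tiles[tile_of(d, tile_size)].append(d)
        chosen = {}
        for t, pts in tiles.items():
            # Candidates restricted to demand cells of the tile (a further
            # simplification); pick the cell minimizing fixed cost plus
            # total Manhattan distance to the tile's demands.
            chosen[t] = min(
                pts,
                key=lambda c: fixed_cost.get(c, 1.0)
                + sum(abs(c[0] - p[0]) + abs(c[1] - p[1]) for p in pts),
            )
        return chosen
    ```

    Each tile's subproblem is tiny, so runtime grows roughly linearly with the number of tiles, which is the essential point of decomposition for large-scale instances.
    
    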

  11. Filter and Grid Resolution in DG-LES

    NASA Astrophysics Data System (ADS)

    Miao, Ling; Sammak, Shervin; Madnia, Cyrus K.; Givi, Peyman

    2017-11-01

    The discontinuous Galerkin (DG) methodology has proven very effective for large eddy simulation (LES) of turbulent flows. Two important parameters in DG-LES are the grid resolution (h) and the filter size (Δ). In most previous work, the filter size is set proportional to the grid spacing. In this work, the DG method is combined with a subgrid-scale (SGS) closure equivalent to that of the filtered density function (FDF). The resulting hybrid scheme is particularly attractive because a larger portion of the resolved energy is captured as the order of the spectral approximation increases. Several cases of LES of a three-dimensional temporally developing mixing layer are appraised, and a systematic parametric study is conducted to investigate the effects of the grid resolution, the filter width, and the order of the spectral discretization. Comparative assessments are also made against high-resolution direct numerical simulation (DNS) data.

  12. Grid effects on the derived ion temperature and ram velocity from the simulated results of the retarding potential analyzer data

    NASA Astrophysics Data System (ADS)

    Chao, C. K.; Su, S.-Y.; Yeh, H. C.

    2003-12-01

    The ROCSAT-1 satellite, circulating at 600 km altitude in the low- and mid-latitude topside ionosphere, carries a retarding potential analyzer to measure the ion composition, temperature, and the plasma flow velocity in the ram direction. Based on an existing three-dimensional model, the particle motion inside the instrument is simulated with the exact wire and mesh sizes but with a smaller aperture than the real sensor configuration. The simulation results indicate that the retarding grids cannot provide a uniform retarding potential barrier that completely repels low-energy particles. Some low-energy particles can pass through those grids and arrive at the collector. This leakage causes the ram velocity to be overestimated by about 180 m/s. Furthermore, the simulated O+ temperature derived from the I-V curve is lower than the input temperature, due to ions lost through collisions with the grids as a result of the non-uniform potential field generated by the high retarding voltage.
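    For context, the ideal I-V curve that such fits are based on can be sketched from the standard expression for the flux of a drifting Maxwellian through a perfect retarding barrier; grid leakage of the kind simulated in the paper would add extra low-energy current on top of this ideal curve. All parameter values below (density, drift, temperature) are illustrative assumptions, not the instrument's:

    ```python
    import math

    K_B = 1.380649e-23      # Boltzmann constant, J/K
    Q_E = 1.602176634e-19   # elementary charge, C

    def rpa_current(V, n=1.0e11, vd=7500.0, T=1000.0,
                    m=16 * 1.67262192e-27, area=1.0e-4):
        """Ideal RPA collector current (A) for a drifting Maxwellian O+
        population facing the aperture, versus retarding voltage V.
        Illustrative parameters: density n (m^-3), ram drift vd (m/s),
        temperature T (K), ion mass m (kg), aperture area (m^2)."""
        vth = math.sqrt(2.0 * K_B * T / m)    # thermal speed
        vc = math.sqrt(2.0 * Q_E * V / m)     # cutoff speed set by the barrier
        s = (vd - vc) / vth
        flux = (0.5 * vd * (1.0 + math.erf(s))
                + vth / (2.0 * math.sqrt(math.pi)) * math.exp(-s * s))
        return Q_E * n * area * flux
    ```

    Fitting this monotonically decreasing curve to measured currents yields the drift and temperature; non-uniform grid potentials distort the curve and bias both, as the simulations show.
    
    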

  13. An Embedded 3D Fracture Modeling Approach for Simulating Fracture-Dominated Fluid Flow and Heat Transfer in Geothermal Reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Henry; Wang, Cong; Winterfeld, Philip

    An efficient modeling approach is described for incorporating arbitrary 3D discrete fractures, such as hydraulic fractures or faults, into models of fracture-dominated fluid flow and heat transfer in fractured geothermal reservoirs. This technique allows 3D discrete fractures to be discretized independently from the surrounding rock volume and inserted explicitly into a primary fracture/matrix grid generated without including the 3D discrete fractures beforehand. An effective computational algorithm is developed to discretize these 3D discrete fractures and construct local connections between the 3D fractures and the fracture/matrix grid blocks representing the surrounding rock volume. The constructed gridding information on the 3D fractures is then added to the primary grid. This embedded fracture modeling approach can be directly implemented into a developed geothermal reservoir simulator via the integral finite difference (IFD) method or with TOUGH2 technology. The approach is promising and computationally efficient for handling realistic 3D discrete fractures with complicated geometries, connections, and spatial distributions. Compared with other fracture modeling approaches, it avoids cumbersome 3D unstructured local refining procedures and increases computational efficiency by reducing the Jacobian matrix size and improving its sparsity, while keeping sufficient accuracy. Several numerical simulations are presented to demonstrate the utility and robustness of the proposed technique. Our numerical experiments show that this approach captures all the key patterns of fluid flow and heat transfer dominated by fractures in these cases. Thus, the approach is readily applicable to the simulation of fractured geothermal reservoirs with both artificial and natural fractures.

  14. Quantification of climatic feedbacks on the Caspian Sea level variability and impacts from the Caspian Sea on the large-scale atmospheric circulation

    NASA Astrophysics Data System (ADS)

    Arpe, Klaus; Tsuang, Ben-Jei; Tseng, Yu-Heng; Liu, Xin-Yu; Leroy, Suzanne A. G.

    2018-05-01

    With a fall of the Caspian Sea level (CSL), the sea's area shrinks and the total evaporation over the sea is therefore reduced. With reduced evaporation from the sea, the fall of the CSL is weakened. This creates a negative feedback, as less evaporation leads to smaller water losses from the Caspian Sea (CS). On the other hand, less evaporation reduces the water in the atmosphere, which may lead to less precipitation in the catchment area of the CS. The two opposing feedbacks are estimated using an atmospheric climate model coupled with an ocean model for the CS alone, run with different CS sizes while keeping all other forcings, such as oceanic sea surface temperatures (SSTs) and leaf area index, the same as in a global climate simulation. The investigation concentrates on the medieval period because at that time the CSL changed dramatically, from about −30 to −19 m relative to mean ocean sea level, partly man-made. Models used for simulating the last millennium are so far unable to change the size of the CS dynamically. When results from such simulations are used to investigate the CSL variability and its causes, the present study should help to parameterize its feedbacks. A first assumption, that the total evaporation from the CS varies with the size of the CS (the number of grid points representing the sea), is generally confirmed by the model simulations. Decreasing the number of grid points from 15 to 14, 10, 8 or 7 decreases the evaporation to 96, 77, 70 and 54%, respectively. That the decrease is smaller than expected from the number of grid points alone (93, 67, 53 and 47%) is probably because there would also be some evaporation at grid points that run dry with a lower CSL, although a cooling of the CS SST with increasing CS size in summer may be more important. 
    The reduction of evaporation over the CS means more water for the budget of the whole CS catchment (an increase of the CSL), but of the gain through reduced evaporation over the CS, only 70% is found to remain in the water budget of the whole catchment area, owing to feedbacks with precipitation. This suggests a high proportion of water recycling within the CS catchment. When using a model that does not have the correct CS size, the effect of a reduced CS area on the water budget of the whole CS catchment can be estimated by taking the evaporation over the sea multiplied by the proportional change in area. However, only 50% of that change ends up in the water balance of the total CS catchment. A formula is provided. This method has been applied to estimate the CSL during the Last Glacial Maximum as −30 to −33 m. The experiments also show that the CS has an impact on the large-scale atmospheric circulation, with a widened Aleutian 500 hPa height trough for increasing CS sizes. It is possible to validate this aspect with observational data.
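    The rule of thumb in the abstract reduces to simple arithmetic, which can be sketched as follows (the function name and interface are ours; the 50% retention factor is the fraction reported in the abstract):

    ```python
    # Back-of-the-envelope version of the abstract's estimate: the change in
    # evaporation is approximated as proportional to the change in sea area,
    # and only ~50% of it ends up in the catchment-wide water balance.

    def catchment_budget_change(evap_over_sea, area_old, area_new, retained=0.5):
        """Water gained by the whole catchment when the sea shrinks from
        area_old to area_new, in the same units as evap_over_sea."""
        delta_evap = evap_over_sea * (area_old - area_new) / area_old
        return retained * delta_evap
    ```
    
    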

  15. The mass-loss return from evolved stars to the Large Magellanic Cloud. V. The GRAMS carbon-star model grid

    NASA Astrophysics Data System (ADS)

    Srinivasan, S.; Sargent, B. A.; Meixner, M.

    2011-08-01

    Context. Outflows from asymptotic giant branch (AGB) and red supergiant (RSG) stars inject dust into the interstellar medium. The total rate of dust return provides an important constraint for galactic chemical evolution models. However, obtaining it requires detailed radiative transfer (RT) modeling of individual stars, which becomes impractical for large data sets. An alternative approach is to select the best-fit spectral energy distribution (SED) from a grid of dust shell models, allowing a faster determination of the luminosities and mass-loss rates of entire samples. Aims: We have developed the Grid of RSG and AGB ModelS (GRAMS) to measure the mass-loss return from evolved stars. The models span the range of stellar, dust shell and grain properties relevant to evolved stars. The GRAMS model database will be made available to the scientific community. In this paper we present the carbon-rich AGB model grid and compare our results with photometry and spectra of Large Magellanic Cloud (LMC) carbon stars from the SAGE (Surveying the Agents of Galaxy Evolution) and SAGE-Spec programs. Methods: We generate models for spherically symmetric dust shells using the 2Dust code, with hydrostatic models for the central stars. The model photospheres have effective temperatures between 2600 and 4000 K and luminosities from ~2000 L⊙ to ~40 000 L⊙. Assuming a constant expansion velocity, we explore five values of the inner radius Rin of the dust shell (1.5, 3, 4.5, 7 and 12 Rstar). We fix the outer radius at 1000 Rin. Based on the results of our previous study, we use amorphous carbon dust mixed with 10% silicon carbide by mass. The grain size distribution follows a power law with an exponential falloff at large sizes. The models span twenty-six values of the 11.3 μm optical depth, ranging from 0.001 to 4. For each model, 2Dust calculates the output SED from 0.2 to 200 μm. Results: Over 12 000 models have dust temperatures below 1800 K. 
For these, we derive synthetic photometry in optical, near-infrared and mid-infrared filters for comparison with available data. We find good agreement with magnitudes and colors observed for LMC carbon-rich and extreme AGB star candidates from the SAGE survey, as well as spectroscopically confirmed carbon stars from the SAGE-Spec study. Our models reproduce the IRAC colors of most of the extreme AGB star candidates, consistent with the expectation that a majority of these enshrouded stars have carbon-rich dust. Finally, we fit the SEDs of some well-studied carbon stars and compare the resulting luminosities and mass-loss rates with those from previous studies. The model grid is only available at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/532/A54

  16. Age, size, and position of H ii regions in the Galaxy. Expansion of ionized gas in turbulent molecular clouds

    NASA Astrophysics Data System (ADS)

    Tremblin, P.; Anderson, L. D.; Didelon, P.; Raga, A. C.; Minier, V.; Ntormousi, E.; Pettitt, A.; Pinto, C.; Samal, M. R.; Schneider, N.; Zavagno, A.

    2014-08-01

    Aims: This work aims to improve the current understanding of the interaction between H ii regions and turbulent molecular clouds. We propose a new method to determine the age of a large sample of OB associations by investigating the development of their associated H ii regions in the surrounding turbulent medium. Methods: Using analytical solutions, one-dimensional (1D), and three-dimensional (3D) simulations, we constrained the expansion of the ionized bubble depending on the turbulence level of the parent molecular cloud. A grid of 1D simulations was then computed in order to build isochrone curves for H ii regions in a pressure-size diagram. This grid of models allowed us to date a large sample of OB associations that we obtained from the H ii Region Discovery Survey (HRDS). Results: Analytical solutions and numerical simulations showed that the expansion of H ii regions is slowed down by the turbulence up to the point where the pressure of the ionized gas is in a quasi-equilibrium with the turbulent ram pressure. Based on this result, we built a grid of 1D models of the expansion of H ii regions in a profile based on Larson's laws. We take the 3D turbulence into account with an effective 1D temperature profile. The ages estimated by the isochrones of this grid agree well with literature values of well known regions such as Rosette, RCW 36, RCW 79, and M 16. We thus propose that this method can be used to find ages of young OB associations through the Galaxy and also in nearby extra-galactic sources.

  17. Large-eddy simulation/Reynolds-averaged Navier-Stokes hybrid schemes for high speed flows

    NASA Astrophysics Data System (ADS)

    Xiao, Xudong

    Three LES/RANS hybrid schemes are proposed for the prediction of high-speed separated flows. Each method couples the k-ζ (enstrophy) RANS model with an LES subgrid-scale one-equation model through a blending function that is coordinate-system independent. Two of these functions are based on the turbulence dissipation length scale and the grid size, while the third has no explicit dependence on the grid. To implement the LES/RANS hybrid schemes, a new rescaling-reintroducing method is used to generate time-dependent turbulent inflow conditions. The hybrid schemes have been tested on a Mach 2.88 flow over a 25 degree compression-expansion ramp and a Mach 2.79 flow over a 20 degree compression ramp. A special computation procedure was designed to prevent the separation zone from expanding upstream to the recycle plane. The code is parallelized using the Message Passing Interface (MPI) and is optimized for the IBM SP3 parallel machine. The scheme was validated first for a flat plate, where it was shown that the blending function must be monotonic to prevent the RANS region from appearing inside the LES region. In the 25 degree ramp case, the hybrid schemes provided better agreement with experiment in the recovery region. Grid refinement studies demonstrated the importance of using a grid-independent blending function and showed further improved agreement with experiment in the recovery region. In the 20 degree ramp case, with a relatively finer grid, the hybrid scheme characterized by the grid-independent blending function predicted the flow field well in both the separation and recovery regions. Therefore, with an "appropriately" fine grid, the current hybrid schemes are promising for the simulation of shock wave/boundary layer interaction problems.
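    A monotonic blending function of the kind described can be sketched as below. The tanh form, constant C, and exponent p are illustrative assumptions, not the author's actual functions; the point is only the qualitative behavior, tending to 0 (RANS) where the turbulence dissipation length scale is small relative to the local grid size and to 1 (LES) where it is large:

    ```python
    import math

    def les_rans_blend(l_diss: float, delta: float, C: float = 1.0, p: int = 4) -> float:
        """Monotonic LES/RANS blending function: ~0 (pure RANS) for
        l_diss << C * delta, ~1 (pure LES) for l_diss >> C * delta.
        Form and constants are illustrative assumptions."""
        return math.tanh((l_diss / (C * delta)) ** p)
    ```

    Monotonicity in the length-scale ratio is the property the flat-plate validation showed to be essential, since a non-monotonic blend can re-activate RANS inside the LES region.
    
    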

  18. The Impact of the Grid Size on TomoTherapy for Prostate Cancer

    PubMed Central

    Kawashima, Motohiro; Kawamura, Hidemasa; Onishi, Masahiro; Takakusagi, Yosuke; Okonogi, Noriyuki; Okazaki, Atsushi; Sekihara, Tetsuo; Ando, Yoshitaka; Nakano, Takashi

    2017-01-01

    Discretization errors due to the digitization of computed tomography images and the calculation grid are a significant issue in radiation therapy. Such errors have been quantitatively reported for fixed multifield intensity-modulated radiation therapy using traditional linear accelerators. The aim of this study is to quantify the influence of the calculation grid size on the dose distribution in TomoTherapy. This study used ten treatment plans for prostate cancer. The final dose calculation was performed with "fine" (2.73 mm) and "normal" (5.46 mm) grid sizes. The dose distributions were compared from several points of view: dose-volume histogram (DVH) parameters for the planning target volume (PTV) and organs at risk (OARs), various indices, and dose differences. The DVH parameters used were Dmax, D2%, D2cc, Dmean, D95%, D98%, and Dmin for the PTV, and Dmax, D2%, and D2cc for the OARs. The indices used for plan evaluation were the homogeneity index and the equivalent uniform dose. Almost all DVH parameters for the "fine" calculations tended to be higher than those for the "normal" calculations. The largest DVH-parameter difference for the PTV was in Dmax, and for the OARs in rectal D2cc. The mean difference in Dmax was 3.5%, and the rectal D2cc was increased by up to 6% at maximum and 2.9% on average. The mean difference in D95% for the PTV was the smallest among the DVH-parameter differences. For each index, a paired t-test determined whether there was a significant difference between the two grid sizes; there were significant differences for most of the indices. The dose difference between the "fine" and "normal" calculations was also evaluated: some points around high-dose regions had differences exceeding 5% of the prescription dose. The influence of the calculation grid size in TomoTherapy is smaller than for traditional linear accelerators, but the difference is nevertheless significant. 
We recommend calculating the final dose using the “fine” grid size. PMID:28974860
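    The voxel-wise comparison described above amounts to a simple computation, sketched here with assumed interfaces (the study's actual analysis is on full 3D dose grids):

    ```python
    # Minimal sketch of the study's dose-difference comparison: voxel-wise
    # differences between the "fine" and "normal" grid calculations, expressed
    # as a percentage of the prescription dose, plus a helper that flags points
    # exceeding a threshold (5% in the abstract). Interfaces are assumptions.

    def dose_diff_percent(fine, normal, prescription):
        return [100.0 * (f - n) / prescription for f, n in zip(fine, normal)]

    def points_exceeding(diffs, threshold=5.0):
        return [i for i, d in enumerate(diffs) if abs(d) > threshold]
    ```
    
    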

  19. An Integrated Software Package to Enable Predictive Simulation Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yousu; Fitzhenry, Erin B.; Jin, Shuangshuang

    The power grid is increasing in complexity due to the deployment of smart grid technologies. Such technologies vastly increase the size and complexity of power grid systems for simulation and modeling. This increasing complexity necessitates not only the use of high-performance-computing (HPC) techniques, but a smooth, well-integrated interplay between HPC applications. This paper presents a new integrated software package that combines HPC applications and a web-based visualization tool based on a middleware framework. This framework can support the data communication between different applications. Case studies with a large power system demonstrate the predictive capability brought by the integrated software package, as well as the better situational awareness provided by the web-based visualization tool in a live mode. Test results validate the effectiveness and usability of the integrated software package.

  20. Nested mesoscale-to-LES modeling of the atmospheric boundary layer in the presence of under-resolved convective structures

    DOE PAGES

    Mazzaro, Laura J.; Munoz-Esparza, Domingo; Lundquist, Julie K.; ...

    2017-07-06

    Multiscale atmospheric simulations can be computationally prohibitive, as they require large domains and fine spatiotemporal resolutions. Grid-nesting can alleviate this by bridging mesoscales and microscales, but one turbulence scheme must then run at resolutions within a range of scales known as the terra incognita (TI). TI grid-cell sizes can violate both mesoscale and microscale subgrid-scale parametrization assumptions, resulting in unrealistic flow structures. Herein we assess the impact of unrealistic lateral boundary conditions from parent mesoscale simulations at TI resolutions on nested large eddy simulations (LES), to determine whether parent domains bias the nested LES. We present a series of idealized nested mesoscale-to-LES runs of a dry convective boundary layer (CBL) with different parent resolutions in the TI. We compare the nested LES with a stand-alone LES with periodic boundary conditions. The nested LES domains develop ~20% smaller convective structures, while potential temperature profiles are nearly identical for both the mesoscale and LES simulations. The horizontal wind speed and surface wind shear in the nested simulations closely resemble the reference LES. Heat fluxes are overestimated by up to ~0.01 K m s⁻¹ in the top half of the PBL for all nested simulations. Overestimates of turbulent kinetic energy (TKE) and Reynolds stress in the nested domains are proportional to the parent domain's grid-cell size, and are almost eliminated for the simulation with the finest parent grid-cell size. Based on these results, we recommend that LES of the CBL be forced by mesoscale simulations with the finest practical resolution.

  1. Nested mesoscale-to-LES modeling of the atmospheric boundary layer in the presence of under-resolved convective structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazzaro, Laura J.; Munoz-Esparza, Domingo; Lundquist, Julie K.

    Multiscale atmospheric simulations can be computationally prohibitive, as they require large domains and fine spatiotemporal resolutions. Grid-nesting can alleviate this by bridging mesoscales and microscales, but one turbulence scheme must then run at resolutions within a range of scales known as the terra incognita (TI). TI grid-cell sizes can violate both mesoscale and microscale subgrid-scale parametrization assumptions, resulting in unrealistic flow structures. Herein we assess the impact of unrealistic lateral boundary conditions from parent mesoscale simulations at TI resolutions on nested large eddy simulations (LES), to determine whether parent domains bias the nested LES. We present a series of idealized nested mesoscale-to-LES runs of a dry convective boundary layer (CBL) with different parent resolutions in the TI. We compare the nested LES with a stand-alone LES with periodic boundary conditions. The nested LES domains develop ~20% smaller convective structures, while potential temperature profiles are nearly identical for both the mesoscale and LES simulations. The horizontal wind speed and surface wind shear in the nested simulations closely resemble the reference LES. Heat fluxes are overestimated by up to ~0.01 K m s⁻¹ in the top half of the PBL for all nested simulations. Overestimates of turbulent kinetic energy (TKE) and Reynolds stress in the nested domains are proportional to the parent domain's grid-cell size, and are almost eliminated for the simulation with the finest parent grid-cell size. Based on these results, we recommend that LES of the CBL be forced by mesoscale simulations with the finest practical resolution.

  2. Optimal configurations of spatial scale for grid cell firing under noise and uncertainty

    PubMed Central

    Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil

    2014-01-01

    We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
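    The decoding problem studied here can be illustrated with a toy 1-D sketch: position is read out from the phases of several grid "modules" with different spatial scales by brute-force search for the position whose predicted phases best match the observed ones. The scales, the noise-free phases, and the least-squares criterion below are our illustrative assumptions, not the paper's Poisson maximum-likelihood decoder:

    ```python
    # Toy 1-D grid-module decoder. Decoding is unique only up to the combined
    # period of the module scales, which is the source of the "ambiguity
    # errors" discussed in the abstract.

    def circ_err(a: float, b: float, period: float) -> float:
        """Circular distance between two phases on [0, period)."""
        d = abs(a - b) % period
        return min(d, period - d)

    def decode_position(phases, scales, x_max, step=0.01):
        """Brute-force search for the position minimizing the summed squared
        circular phase error across modules."""
        best_x, best_e = 0.0, float("inf")
        x = 0.0
        while x <= x_max:
            e = sum(circ_err(x % s, p, s) ** 2 for p, s in zip(phases, scales))
            if e < best_e:
                best_x, best_e = x, e
            x += step
        return best_x
    ```

    With independent noise added per module, the minimum can jump to an alias of the true position, which is the large-error mode the paper shows is mitigated by uniformly expanding the grid scales.
    
    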

  3. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively provide precise error correction over a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces the accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is applied in practice in our driverless car. PMID:26927108

  4. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively provide precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests in a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces the accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is applied in practice in our driverless car.
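    The prediction-plus-grid-constraint check can be sketched in a heavily simplified 1-D form. This is a stand-in, not the paper's method: the actual approach uses a bank of ARMA predictors and occupancy grid constraints, while here a constant-velocity extrapolation plays the role of the predictor and the rejection threshold is expressed in grid cells; the predictor, the factor k, and the 1-D simplification are our assumptions:

    ```python
    # Toy innovation test: reject a GPS fix whose deviation from the predicted
    # position exceeds k grid cells.

    def predict_next(track):
        """Constant-velocity extrapolation from the last two 1-D positions
        (illustrative stand-in for the ARMA model bank)."""
        x_prev, x_last = track[-2], track[-1]
        return 2.0 * x_last - x_prev

    def is_outlier(measured, track, grid_size, k=1.0):
        return abs(measured - predict_next(track)) > k * grid_size
    ```

    Expressing the threshold in grid cells mirrors the paper's observation that the grid size pre-specifies the standard deviation of the fused estimate.
    
    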

  5. SU-F-T-365: Clinical Commissioning of the Monaco Treatment Planning System for the Novalis Tx to Deliver VMAT, SRS and SBRT Treatments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adnani, N

    Purpose: To commission the Monaco Treatment Planning System for the Novalis Tx machine. Methods: The commissioning of the Monte Carlo (MC), Collapsed Cone (CC) and electron Monte Carlo (eMC) beam models was performed through a series of measurements and calculations in medium and in water. In-medium measurements relied on the Octavius 4D QA system with the 1000 SRS detector array for field sizes less than 4 cm × 4 cm and the 1500 detector array for larger field sizes. Heterogeneity corrections were validated using a custom-built phantom. Prior to clinical implementation, end-to-end testing of prostate and H&N VMAT plans was performed. Results: Using a 0.5% uncertainty and 2 mm grid sizes, Tables I and II summarize the MC validation at 6 MV and 18 MV in both medium and water. Tables III and IV show similar comparisons for CC. Using the custom heterogeneity phantom setup of Figure 1 and the IGRT guidance summarized in Figure 2, Table V lists the percent pass rate for a 2%, 2 mm gamma criterion at 6 and 18 MV for both MC and CC. The relationship between the MC calculation settings of uncertainty and grid size and the gamma passing rate for a prostate and an H&N case is shown in Table VI. Table VII lists the results of the eMC calculations compared to measured data for clinically available applicators, and Table VIII for small field cutouts. Conclusion: MU calculations using MC are highly sensitive to uncertainty and grid size settings. The difference can be of the order of several percent. MC is superior to CC for small fields and when using heterogeneity corrections, regardless of field size, making it more suitable for SRS, SBRT and VMAT deliveries. eMC showed good agreement with measurements down to a 2 cm × 2 cm field size.
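    The 2%, 2 mm gamma criterion referenced in the tables can be illustrated with a deliberately simplified 1-D global gamma analysis. This is a sketch of the metric itself, not of the Octavius or Monaco implementations, and the profile values are invented.

```python
import math

def gamma_pass_rate(ref, evl, spacing_mm, dose_tol=0.02, dist_tol_mm=2.0):
    """Simplified 1-D global gamma analysis (e.g. 2%/2 mm criteria).

    ref, evl : dose samples on the same uniform grid.
    Dose differences are normalized to the maximum reference dose.
    """
    d_max = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evl):
            dist = abs(i - j) * spacing_mm
            ddose = (de - dr) / d_max
            gamma2 = (dist / dist_tol_mm) ** 2 + (ddose / dose_tol) ** 2
            best = min(best, gamma2)
        if math.sqrt(best) <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref)

# Identical profiles pass everywhere; a 10% global error fails everywhere.
profile = [50.0, 80.0, 100.0, 80.0, 50.0]
print(gamma_pass_rate(profile, profile, spacing_mm=1.0))
print(gamma_pass_rate(profile, [d * 1.10 for d in profile], 1.0))
```

    A clinical gamma tool would additionally interpolate the evaluated distribution between sample points and work in 2-D or 3-D; this sketch keeps only the core distance-to-agreement/dose-difference trade-off.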

  6. School Finance and Technology: A Case Study Using Grid and Group Theory to Explore the Connections

    ERIC Educational Resources Information Center

    Case, Stephoni; Harris, Edward L.

    2014-01-01

    Using grid and group theory (Douglas 1982, 2011), the study described in this article examined the intersections of technology and school finance in four schools located in districts differing in size, wealth, and commitment to technology integration. In grid and group theory, grid refers to the degree to which policies and role prescriptions…

  7. NREL's System Advisor Model Simplifies Complex Energy Analysis (Fact Sheet)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2015-01-01

    NREL has developed a tool -- the System Advisor Model (SAM) -- that can help decision makers analyze cost, performance, and financing of any size grid-connected solar, wind, or geothermal power project. Manufacturers, engineering and consulting firms, research and development firms, utilities, developers, venture capital firms, and international organizations use SAM for end-to-end analysis that helps determine whether and how to make investments in renewable energy projects.

  8. An Investigation into Solution Verification for CFD-DEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fullmer, William D.; Musser, Jordan

    This report presents a study of the convergence behavior of the computational fluid dynamics-discrete element method (CFD-DEM), specifically National Energy Technology Laboratory’s (NETL) open source MFiX code (MFiX-DEM) with a diffusion-based particle-to-continuum filtering scheme. In particular, this study focused on determining whether the numerical method has a solution in the high-resolution limit, where the grid size is smaller than the particle size. To address this uncertainty, fixed particle beds of two primary configurations were studied: i) fictitious beds where the particles are seeded with a random particle generator, and ii) instantaneous snapshots from a transient simulation of an experimentally relevant problem. Both problems considered a uniform inlet boundary and a pressure outflow. The CFD grid was refined from a few particle diameters down to 1/6th of a particle diameter. The pressure drop between two vertical elevations, averaged across the bed cross-section, was considered the system response quantity of interest. A least-squares regression method was used to extrapolate the grid-dependent results to an approximate “grid-free” solution in the limit of infinite resolution. The results show that the diffusion-based scheme does yield a converging solution. However, the convergence is more complicated than encountered in simpler, single-phase flow problems, showing strong oscillations and, at times, oscillations superimposed on top of globally non-monotonic behavior. The challenging convergence behavior highlights the importance of using at least four grid resolutions in solution verification problems so that (over-determined) regression-based extrapolation methods may be applied to approximate the grid-free solution. The grid-free solution is very important in solution verification and VVUQ exercises in general, as the difference between it and the reference solution largely determines the numerical uncertainty.
By testing different randomized particle configurations of the same general problem (for the fictitious case) or different instances of freezing a transient simulation, the numerical uncertainties appeared to be on the same order of magnitude as ensemble or time-averaging uncertainties. By testing different drag laws, almost all cases studied show that model form uncertainty in this one, very important closure relation was larger than the numerical uncertainty, at least with a reasonable CFD grid of roughly five particle diameters. In this study, the diffusion width (filtering length scale) was mostly held constant at six particle diameters. A few exploratory tests were performed to show that similar convergence behavior is observed for diffusion widths greater than approximately two particle diameters. However, this subject was not investigated in great detail because determining an appropriate filter size is really a validation question that must be answered by comparison to experimental or highly accurate numerical data. Future studies are being considered targeting solution verification of transient simulations as well as validation of the filter size with direct numerical simulation data.
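    The regression-based extrapolation to a "grid-free" solution can be sketched as follows. The fitted model f(h) ≈ f0 + C·h^p and the scan over candidate orders p are assumptions about the general approach, not NETL's exact procedure; the synthetic grid study below is invented data.

```python
import numpy as np

def extrapolate_grid_free(h, f, orders=np.linspace(0.5, 4.0, 351)):
    """Least-squares Richardson-style extrapolation f(h) ~ f0 + C*h**p.

    Scans candidate orders p and, for each, solves the linear least-squares
    problem for (f0, C); returns the fit with the smallest residual.
    At least 4 resolutions make the system over-determined, as the report
    recommends.
    """
    h, f = np.asarray(h, float), np.asarray(f, float)
    best = None
    for p in orders:
        A = np.column_stack([np.ones_like(h), h ** p])
        sol, *_ = np.linalg.lstsq(A, f, rcond=None)
        r = float(np.sum((A @ sol - f) ** 2))
        if best is None or r < best[0]:
            best = (r, sol[0], p)
    return best[1], best[2]   # grid-free estimate f0 and observed order p

# Synthetic grid study: true answer 1.0 with second-order convergence.
hs = [1.0, 0.5, 0.25, 0.125]
fs = [1.0 + 0.3 * h ** 2 for h in hs]
f0, p = extrapolate_grid_free(hs, fs)
print(round(f0, 6), round(p, 2))
```

    With real CFD-DEM data the residual would not vanish at any p, which is exactly why the over-determined fit (rather than three-point Richardson extrapolation) is needed.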

  9. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  10. Effects of dynamic-demand-control appliances on the power grid frequency.

    PubMed

    Tchuisseu, E B Tchawou; Gomila, D; Brunner, D; Colet, P

    2017-08-01

    Power grid frequency control is a demanding task requiring expensive idle power plants to adapt the supply to the fluctuating demand. An alternative approach is controlling the demand side in such a way that certain appliances modify their operation to adapt to the power availability. This is especially important for achieving a high penetration of renewable energy sources. A number of methods to manage the demand side have been proposed. In this work we focus on dynamic demand control (DDC), where smart appliances can delay their switchings depending on the frequency of the system. We introduce a simple model to study the effects of DDC on the frequency of the power grid. The model includes the power plant equations, a stochastic model for the demand that reproduces, by adjusting a single parameter, the statistical properties of frequency fluctuations measured experimentally, and a generic DDC protocol. We find that DDC can reduce small and medium-size fluctuations, but it can also increase the probability of observing large frequency peaks due to the need to recover pending tasks. We also conclude that a DDC deployment of around 30-40% already allows a significant reduction of the fluctuations while keeping the number of pending tasks low.

  11. Effects of dynamic-demand-control appliances on the power grid frequency

    NASA Astrophysics Data System (ADS)

    Tchuisseu, E. B. Tchawou; Gomila, D.; Brunner, D.; Colet, P.

    2017-08-01

    Power grid frequency control is a demanding task requiring expensive idle power plants to adapt the supply to the fluctuating demand. An alternative approach is controlling the demand side in such a way that certain appliances modify their operation to adapt to the power availability. This is especially important for achieving a high penetration of renewable energy sources. A number of methods to manage the demand side have been proposed. In this work we focus on dynamic demand control (DDC), where smart appliances can delay their switchings depending on the frequency of the system. We introduce a simple model to study the effects of DDC on the frequency of the power grid. The model includes the power plant equations, a stochastic model for the demand that reproduces, by adjusting a single parameter, the statistical properties of frequency fluctuations measured experimentally, and a generic DDC protocol. We find that DDC can reduce small and medium-size fluctuations, but it can also increase the probability of observing large frequency peaks due to the need to recover pending tasks. We also conclude that a DDC deployment of around 30-40% already allows a significant reduction of the fluctuations while keeping the number of pending tasks low.
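    A toy version of such a model can be sketched as follows. The inertia, damping, appliance statistics, and DDC threshold below are invented for illustration and are much cruder than the paper's calibrated stochastic demand model; the sketch only shows the mechanism of frequency-dependent switching delays.

```python
import random

def simulate(ddc_fraction, steps=10000, dt=0.01, seed=1):
    """Toy one-node grid: M*dw/dt = P_gen - P_load - D*w, with randomly
    toggling appliances. A fraction of them is DDC-capable and postpones
    switch-ons while the frequency deviation w is below nominal.
    Returns the standard deviation of w over the run."""
    rng = random.Random(seed)
    M, D, P_gen = 10.0, 1.0, 100.0          # inertia, damping, generation
    n_app, p_app = 200, 1.0                 # appliance count and unit power
    on = [rng.random() < 0.5 for _ in range(n_app)]
    w, samples = 0.0, []
    for _ in range(steps):
        for i in range(n_app):
            if rng.random() < 0.01:         # this appliance tries to toggle
                want_on = not on[i]
                if want_on and i < ddc_fraction * n_app and w < -0.01:
                    continue                # DDC: delay switch-on while low
                on[i] = want_on
        P_load = p_app * sum(on)
        w += dt * (P_gen - P_load - D * w) / M
        samples.append(w)
    mean = sum(samples) / len(samples)
    return (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5

print("std without DDC:", round(simulate(0.0), 3))
print("std with 40% DDC:", round(simulate(0.4), 3))
```

    Note that this sketch drops delayed switch-ons instead of queueing them, so it cannot reproduce the pending-task frequency peaks reported in the abstract.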

  12. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  13. JPARSS: A Java Parallel Network Package for Grid Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jie; Akers, Walter; Chen, Ying

    2002-03-01

    The emergence of high-speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, owing to the need to tune the TCP window size to improve bandwidth and reduce latency on a high-speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation, and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments are presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition, a simple architecture using Web services
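    The partition-and-parallel-stream idea can be sketched in a few lines. This uses local socket pairs and threads rather than JPARSS's secure wide-area Java streams, so it only illustrates the data partitioning and in-order reassembly, not the performance benefit or the security layer.

```python
import socket
import threading

def parallel_send(data, n_streams=4):
    """Split `data` into n_streams partitions, send each over its own
    socket pair in parallel, then reassemble in partition order."""
    size = -(-len(data) // n_streams)          # ceiling division
    parts = [data[i * size:(i + 1) * size] for i in range(n_streams)]
    pairs = [socket.socketpair() for _ in range(n_streams)]
    received = [b""] * n_streams

    def send(i):
        tx = pairs[i][0]
        tx.sendall(parts[i])
        tx.shutdown(socket.SHUT_WR)            # signal end-of-stream

    def recv(i):
        rx = pairs[i][1]
        chunks = []
        while True:
            buf = rx.recv(4096)
            if not buf:
                break
            chunks.append(buf)
        received[i] = b"".join(chunks)

    threads = [threading.Thread(target=fn, args=(i,))
               for i in range(n_streams) for fn in (send, recv)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for tx, rx in pairs:
        tx.close()
        rx.close()
    return b"".join(received)

payload = bytes(range(256)) * 1000             # ~256 kB test payload
assert parallel_send(payload) == payload
print("reassembled", len(payload), "bytes over 4 streams")
```

    Over a real wide-area network each stream would be a separate TCP connection to the remote host, which is where the aggregate-window effect that JPARSS exploits comes from.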

  14. Microwave Frequency Polarizers

    NASA Technical Reports Server (NTRS)

    Ha, Vien The; Mirel, Paul; Kogut, Alan J.

    2013-01-01

    This article describes the fabrication and analysis of microwave frequency polarizing grids. The grids are designed to measure polarization from the cosmic microwave background; they are effective in the 500 to 1500 micron wavelength range, are cryogenically compatible, and are highly robust to high load impacts. Each grid is fabricated using one of an array of different assembly processes, which vary in the type of tension mechanism as well as the shape and size of the grids. We provide a comprehensive analysis of the grids' wire heights, diameters, and spacing.

  15. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation (exemplified in urban areas) where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  16. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    2005-01-01

    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate microfluidic systems, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows pose a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be up to hundreds of microns). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-D geometries. Existing approaches using a "slip" wall velocity or an augmented double layer are difficult to use when the flow geometry is complicated, e.g., flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can be accounted for at the same time.
With this approach, the numerical grid size can be much larger than the thickness of the double layer. Presented in this report are a description of the approach, the methodology for its implementation, and several validation simulations for micro flows.

  17. Efficient reactive Brownian dynamics

    DOE PAGES

    Donev, Aleksandar; Yang, Chiao-Yu; Kim, Changho

    2018-01-21

    We develop a Split Reactive Brownian Dynamics (SRBD) algorithm for particle simulations of reaction-diffusion systems based on the Doi or volume reactivity model, in which pairs of particles react with a specified Poisson rate if they are closer than a chosen reactive distance. In our Doi model, we ensure that the microscopic reaction rules for various association and dissociation reactions are consistent with detailed balance (time reversibility) at thermodynamic equilibrium. The SRBD algorithm uses Strang splitting in time to separate reaction and diffusion and solves both the diffusion-only and reaction-only subproblems exactly, even at high packing densities. To efficiently process reactions without uncontrolled approximations, SRBD employs an event-driven algorithm that processes reactions in a time-ordered sequence over the duration of the time step. A grid of cells with size larger than all of the reactive distances is used to schedule and process the reactions, but unlike traditional grid-based methods such as reaction-diffusion master equation algorithms, the results of SRBD are statistically independent of the size of the grid used to accelerate the processing of reactions. We use the SRBD algorithm to compute the effective macroscopic reaction rate for both reaction-limited and diffusion-limited irreversible association in three dimensions and compare to existing theoretical predictions at low and moderate densities. We also study long-time tails in the time correlation functions for reversible association at thermodynamic equilibrium and compare to recent theoretical predictions. Finally, we compare different particle and continuum methods on a model exhibiting a Turing-like instability and pattern formation. Our studies reinforce the common finding that microscopic mechanisms and correlations matter for diffusion-limited systems, making continuum and even mesoscopic modeling of such systems difficult or impossible.
We also find that for models in which particles diffuse off lattice, such as the Doi model, reactions lead to a spurious enhancement of the effective diffusion coefficients.
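    The claim that results are independent of the accelerating grid rests on the cell size being at least as large as the reactive distance, so that searching a cell and its neighbors finds exactly the same candidate pairs as a brute-force search. The sketch below demonstrates that equivalence; it is not the SRBD event-driven scheduler itself, and the particle data are random.

```python
import itertools
import random

def reactive_pairs_bruteforce(pos, r):
    """All particle pairs closer than the reactive distance r (O(N^2))."""
    return {(i, j) for i, j in itertools.combinations(range(len(pos)), 2)
            if sum((a - b) ** 2 for a, b in zip(pos[i], pos[j])) < r * r}

def reactive_pairs_cells(pos, r, cell):
    """Same pairs via a grid of cells of size `cell` >= r; only the 27
    neighboring cells of each particle's cell need to be searched."""
    assert cell >= r
    grid = {}
    for i, p in enumerate(pos):
        grid.setdefault(tuple(int(c // cell) for c in p), []).append(i)
    pairs = set()
    for key, members in grid.items():
        for dk in itertools.product((-1, 0, 1), repeat=3):
            nkey = tuple(k + d for k, d in zip(key, dk))
            for i in members:
                for j in grid.get(nkey, []):
                    if i < j and sum((a - b) ** 2 for a, b in
                                     zip(pos[i], pos[j])) < r * r:
                        pairs.add((i, j))
    return pairs

rng = random.Random(0)
pos = [(rng.random(), rng.random(), rng.random()) for _ in range(300)]
r = 0.08
brute = reactive_pairs_bruteforce(pos, r)
# Candidate pairs are identical for any cell size >= r, as in SRBD.
assert reactive_pairs_cells(pos, r, 0.08) == brute
assert reactive_pairs_cells(pos, r, 0.15) == brute
print(len(brute), "reactive pairs")
```

    SRBD then processes these candidate pairs as Poisson reaction events in time order; only that scheduling step, not the set of candidates, involves the grid, which is why the statistics are grid-size independent.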

  18. Efficient reactive Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Donev, Aleksandar; Yang, Chiao-Yu; Kim, Changho

    2018-01-01

    We develop a Split Reactive Brownian Dynamics (SRBD) algorithm for particle simulations of reaction-diffusion systems based on the Doi or volume reactivity model, in which pairs of particles react with a specified Poisson rate if they are closer than a chosen reactive distance. In our Doi model, we ensure that the microscopic reaction rules for various association and dissociation reactions are consistent with detailed balance (time reversibility) at thermodynamic equilibrium. The SRBD algorithm uses Strang splitting in time to separate reaction and diffusion and solves both the diffusion-only and reaction-only subproblems exactly, even at high packing densities. To efficiently process reactions without uncontrolled approximations, SRBD employs an event-driven algorithm that processes reactions in a time-ordered sequence over the duration of the time step. A grid of cells with size larger than all of the reactive distances is used to schedule and process the reactions, but unlike traditional grid-based methods such as reaction-diffusion master equation algorithms, the results of SRBD are statistically independent of the size of the grid used to accelerate the processing of reactions. We use the SRBD algorithm to compute the effective macroscopic reaction rate for both reaction-limited and diffusion-limited irreversible association in three dimensions and compare to existing theoretical predictions at low and moderate densities. We also study long-time tails in the time correlation functions for reversible association at thermodynamic equilibrium and compare to recent theoretical predictions. Finally, we compare different particle and continuum methods on a model exhibiting a Turing-like instability and pattern formation. Our studies reinforce the common finding that microscopic mechanisms and correlations matter for diffusion-limited systems, making continuum and even mesoscopic modeling of such systems difficult or impossible. 
We also find that for models in which particles diffuse off lattice, such as the Doi model, reactions lead to a spurious enhancement of the effective diffusion coefficients.

  19. Efficient reactive Brownian dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donev, Aleksandar; Yang, Chiao-Yu; Kim, Changho

    We develop a Split Reactive Brownian Dynamics (SRBD) algorithm for particle simulations of reaction-diffusion systems based on the Doi or volume reactivity model, in which pairs of particles react with a specified Poisson rate if they are closer than a chosen reactive distance. In our Doi model, we ensure that the microscopic reaction rules for various association and dissociation reactions are consistent with detailed balance (time reversibility) at thermodynamic equilibrium. The SRBD algorithm uses Strang splitting in time to separate reaction and diffusion and solves both the diffusion-only and reaction-only subproblems exactly, even at high packing densities. To efficiently process reactions without uncontrolled approximations, SRBD employs an event-driven algorithm that processes reactions in a time-ordered sequence over the duration of the time step. A grid of cells with size larger than all of the reactive distances is used to schedule and process the reactions, but unlike traditional grid-based methods such as reaction-diffusion master equation algorithms, the results of SRBD are statistically independent of the size of the grid used to accelerate the processing of reactions. We use the SRBD algorithm to compute the effective macroscopic reaction rate for both reaction-limited and diffusion-limited irreversible association in three dimensions and compare to existing theoretical predictions at low and moderate densities. We also study long-time tails in the time correlation functions for reversible association at thermodynamic equilibrium and compare to recent theoretical predictions. Finally, we compare different particle and continuum methods on a model exhibiting a Turing-like instability and pattern formation. Our studies reinforce the common finding that microscopic mechanisms and correlations matter for diffusion-limited systems, making continuum and even mesoscopic modeling of such systems difficult or impossible.
We also find that for models in which particles diffuse off lattice, such as the Doi model, reactions lead to a spurious enhancement of the effective diffusion coefficients.

  20. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    NASA Astrophysics Data System (ADS)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses fracture simulations and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°⁻², then maximum accuracy occurs at a grid size of 1° × 1°.
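    The bias correction at the heart of the estimator can be sketched as follows. This is the textbook Terzaghi weighting with an assumed 70° pole-angle cutoff for the blind zone, not the article's optimized gridding procedure.

```python
import math

def terzaghi_weight(alpha_deg, max_alpha_deg=70.0):
    """Terzaghi weight 1/|cos(alpha)| for a fracture whose pole (normal)
    makes angle alpha with the scanline; fractures beyond max_alpha_deg
    lie in the blind zone and are excluded (weight None)."""
    if alpha_deg > max_alpha_deg:
        return None       # blind zone: bias too severe to correct reliably
    return 1.0 / abs(math.cos(math.radians(alpha_deg)))

print(terzaghi_weight(0.0))    # 1.0: pole parallel to scanline, no correction
print(terzaghi_weight(60.0))   # ~2: under-sampled by half, double the weight
print(terzaghi_weight(80.0))   # None: inside the blind zone
```

    The article's contribution concerns what happens after this weighting: the weighted poles are binned on an orientation grid, and the bin (grid) size, e.g. 2° × 2° versus 1° × 1°, controls the accuracy of the recovered 3D distribution.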

  1. On solving three-dimensional open-dimension rectangular packing problems

    NASA Astrophysics Data System (ADS)

    Junqueira, Leonardo; Morabito, Reinaldo

    2017-05-01

    In this article, a recently proposed three-dimensional open-dimension rectangular packing problem is considered, in which the objective is to find a minimal-volume rectangular container that packs a set of rectangular boxes. The literature has tackled small-sized instances of this problem by means of optimization solvers, position-free mixed-integer programming (MIP) formulations and piecewise linearization approaches. In this study, the problem is alternatively addressed by means of grid-based position MIP formulations, while still considering optimization solvers and the same piecewise linearization techniques. A comparison of the computational performance of both models is then presented, when tested with benchmark problem instances and with new instances, and it is shown that the grid-based position MIP formulation can be competitive, depending on the characteristics of the instances. The grid-based position MIP formulation is also extended with real-world practical constraints, such as cargo stability, and additional results are presented.
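    A minimal way to see what a grid-based position formulation restricts is a brute-force search in which every box corner must sit on a coarse grid. The article's MIP is far more sophisticated (and handles rotations and stability); this sketch fixes orientations and is only practical for tiny invented instances.

```python
import itertools

def min_container_volume(boxes, grid=1):
    """Exhaustive grid-position search for the smallest rectangular
    container packing all boxes (no rotation): every box corner is
    restricted to a grid of the given step."""
    bound = [sum(b[d] for b in boxes) for d in range(3)]   # naive upper bound
    positions = [list(itertools.product(*(range(0, bound[d] - b[d] + 1, grid)
                                          for d in range(3)))) for b in boxes]
    best = None
    for placement in itertools.product(*positions):
        ok = True                                  # reject overlapping layouts
        for (p, b), (q, c) in itertools.combinations(zip(placement, boxes), 2):
            if all(p[d] < q[d] + c[d] and q[d] < p[d] + b[d] for d in range(3)):
                ok = False
                break
        if not ok:
            continue
        vol = 1
        for d in range(3):
            vol *= max(p[d] + b[d] for p, b in zip(placement, boxes))
        if best is None or vol < best:
            best = vol
    return best

# Two 1x1x1 boxes pack into a 2x1x1 container (volume 2).
print(min_container_volume([(1, 1, 1), (1, 1, 1)]))
```

    A MIP formulation replaces this enumeration with binary position variables per grid point plus non-overlap constraints, which is what makes larger instances tractable for a solver.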

  2. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
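    The paper's multi-off-grid predictors are not reproduced here, but the simplest illustration of evaluating derivatives at off-grid locations is the midpoint method, which samples the derivative at t + h/2 and beats the purely on-grid Euler method at the same step size:

```python
import math

def euler_step(f, t, y, h):
    """On-grid: derivative evaluated only at the grid point t."""
    return y + h * f(t, y)

def midpoint_step(f, t, y, h):
    """Off-grid: derivative also evaluated at the off-grid point t + h/2,
    using a predicted off-grid state y + (h/2)*f(t, y)."""
    k = f(t, y)
    return y + h * f(t + h / 2, y + h / 2 * k)

def integrate(step, f, y0, t_end, h):
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: -y                    # dy/dt = -y, y(0) = 1, exact e**(-t)
exact = math.exp(-2.0)
err_euler = abs(integrate(euler_step, f, 1.0, 2.0, 0.1) - exact)
err_mid = abs(integrate(midpoint_step, f, 1.0, 2.0, 0.1) - exact)
print(err_mid < err_euler)   # True: off-grid evaluation pays off
```

    The multi-off-grid methods of the paper generalize this idea to high-order multi-step schemes, using several back values to predict the off-grid states rather than a single half-step predictor.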

  3. Fast and accurate 3D tensor calculation of the Fock operator in a general basis

    NASA Astrophysics Data System (ADS)

    Khoromskaia, V.; Andrae, D.; Khoromskij, B. N.

    2012-11-01

    The present paper contributes to the construction of a “black-box” 3D solver for the Hartree-Fock equation by grid-based tensor-structured methods. It focuses on the calculation of the Galerkin matrices for the Laplace and nuclear potential operators by tensor operations, using a generic set of basis functions with low separation rank, discretized on a fine N×N×N Cartesian grid. We prove a Ch² error estimate in terms of the mesh parameter, h = O(1/N), which guarantees the accuracy of the core Hamiltonian part of the Fock operator as h → 0. However, the commonly used problem-adapted basis functions have low regularity, yielding a considerable increase of the constant C and hence demanding a rather large grid size N, of several tens of thousands, to ensure high resolution. Modern tensor-formatted arithmetic of complexity O(N), or even O(log N), practically relaxes the limitations on the grid size. Our tensor-based approach allows the standard basis sets in quantum chemistry to be improved significantly by including simple combinations of Slater-type, local finite element and other basis functions. Numerical experiments for moderate-size organic molecules show the efficiency and accuracy of grid-based calculations of the core Hamiltonian in the range of grid parameter N³ ~ 10¹⁵.
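    The Ch² mesh-error behavior invoked above can be demonstrated with a much simpler stand-in than the paper's tensor-structured Galerkin machinery: a second-order finite-difference Laplacian in 1D, where halving h should cut the error by a factor of about 4.

```python
import math

def laplacian_error(N):
    """Max error of the 3-point finite-difference Laplacian applied to
    sin(pi*x) on [0, 1] with grid size h = 1/N; the exact second
    derivative is -pi**2 * sin(pi*x)."""
    h = 1.0 / N
    err = 0.0
    for i in range(1, N):
        x = i * h
        num = (math.sin(math.pi * (x - h)) - 2 * math.sin(math.pi * x)
               + math.sin(math.pi * (x + h))) / h ** 2
        err = max(err, abs(num + math.pi ** 2 * math.sin(math.pi * x)))
    return err

# Halving h should reduce the error by ~4, confirming the O(h^2) estimate.
e1, e2 = laplacian_error(64), laplacian_error(128)
print(round(e1 / e2, 2))
```

    The paper's point is that the constant C in front of h² blows up for low-regularity basis functions, which is what forces the very large N that only tensor-formatted O(N) or O(log N) arithmetic makes affordable.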

  4. Integrating TITAN2D Geophysical Mass Flow Model with GIS

    NASA Astrophysics Data System (ADS)

    Namikawa, L. M.; Renschler, C.

    2005-12-01

    TITAN2D simulates geophysical mass flows over natural terrain using depth-averaged granular flow models and requires spatially distributed parameter values to solve its differential equations. Since the main task of a Geographical Information System (GIS) is the integration and manipulation of data covering a geographic region, the use of a GIS for the implementation of complex, physically-based simulation models such as TITAN2D seems a natural choice. However, simulation of geophysical flows requires computationally intensive operations that need unique optimizations, such as adaptive grids and parallel processing. Thus a GIS developed for general use cannot provide an effective environment for complex simulations, and the solution is to develop a linkage between the GIS and the simulation model. The present work presents the solution used for TITAN2D, where the data structure of a GIS is accessed by the simulation code through an Application Program Interface (API). GRASS is an open-source GIS with published data formats; thus the GRASS data structure was selected. TITAN2D requires elevation, slope, curvature, and base material information at every cell to be computed. Results from the simulation are visualized by a system developed to handle the large amount of output data and to support a realistic dynamic 3-D display of flow dynamics, which requires elevation and texture, usually from a remote sensor image. Data required by the simulation are in raster format, using regular rectangular grids. The GRASS format for regular grids is based on a data file (a binary file storing data either uncompressed or compressed by grid row), a header file (a text file with information about georeferencing, data extents, and grid cell resolution), and support files (text files with information about the color table and category names). The implemented API provides access to the original data (elevation, base material, and texture from imagery) and to slope and curvature derived from the elevation data.
Of the several existing methods to estimate slope and curvature from elevation, the selected one is based on a third-order finite difference method, which has been shown to perform better than, or with minimal difference from, more computationally expensive methods. Derivatives are estimated using a weighted sum of the 8 grid-neighbor values. The method was implemented, and simulation results were compared to derivatives estimated by a simplified version of the method (which uses only 4 neighbor cells) and shown to perform better. TITAN2D uses an adaptive mesh grid, where the resolution (grid cell size) is not constant, and the visualization tools also use textures with varying resolutions for efficient display. The API supports different resolutions, applying bilinear interpolation when elevation, slope, and curvature are required at a resolution higher (smaller cell size) than the original, and using a nearest-cell approach for elevations at a resolution lower (larger cell size) than the original. For material information, the nearest-neighbor method is used, since interpolation of categorical data is meaningless. The low-fidelity character of visualization also allows the nearest-neighbor method to be used for texture. Bilinear interpolation estimates the value at a point as the distance-weighted average of the values at the closest four cell centers; its performance is only slightly inferior to that of more computationally expensive methods such as bicubic interpolation and kriging.
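    The two numerical rules described for the API can be sketched directly. The 1-2-1 neighbor weighting below is a Horn-style third-order finite-difference stencil assumed for illustration; it may not be the exact weights used in the TITAN2D API.

```python
def bilinear(grid, x, y):
    """Bilinear interpolation of a 2-D list-of-lists raster at fractional
    (x, y): the distance-weighted average of the four nearest cell centers,
    used when a finer resolution than the original is requested."""
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * grid[y0][x0]
            + dx * (1 - dy) * grid[y0][x0 + 1]
            + (1 - dx) * dy * grid[y0 + 1][x0]
            + dx * dy * grid[y0 + 1][x0 + 1])

def slope_x(grid, x, y, cell=1.0):
    """x-derivative from the 8 neighbors via a weighted (1-2-1) sum of the
    columns left and right of the cell, Horn-style."""
    return ((grid[y - 1][x + 1] + 2 * grid[y][x + 1] + grid[y + 1][x + 1])
            - (grid[y - 1][x - 1] + 2 * grid[y][x - 1]
               + grid[y + 1][x - 1])) / (8 * cell)

# On a plane z = 2x + 3y both operations are exact.
plane = [[2 * x + 3 * y for x in range(5)] for y in range(5)]
print(bilinear(plane, 1.5, 2.25))   # 2*1.5 + 3*2.25 = 9.75
print(slope_x(plane, 2, 2))         # 2.0
```

    The nearest-cell and nearest-neighbor paths mentioned in the abstract need no interpolation at all: the raster value of the enclosing (or closest) cell is returned unchanged, which is why they are the right choice for categorical material data.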

  5. The influence of the dose calculation resolution of VMAT plans on the calculated dose for eye lens and optic pathway.

    PubMed

    Park, Jong Min; Park, So-Yeon; Kim, Jung-In; Carlson, Joel; Kim, Jin Ho

    2017-03-01

    To investigate the effect of the dose calculation grid size on calculated dose-volumetric parameters for eye lenses and optic pathways. A total of 30 patients treated using the volumetric modulated arc therapy (VMAT) technique were retrospectively selected. For each patient, dose distributions were calculated with calculation grids ranging from 1 to 5 mm at 1 mm intervals. Identical structures were used for VMAT planning. The changes in dose-volumetric parameters according to the size of the calculation grid were investigated. Compared to dose calculation with a 1 mm grid, the maximum doses to the eye lens with calculation grids of 2, 3, 4 and 5 mm increased by 0.2 ± 0.2 Gy, 0.5 ± 0.5 Gy, 0.9 ± 0.8 Gy and 1.7 ± 1.5 Gy on average, respectively. The Spearman's correlation coefficient between the dose gradients near structures and the differences between the doses calculated with the 1 mm grid and those with the 5 mm grid was 0.380 (p < 0.001). For the accurate calculation of dose distributions, as well as for efficiency, using a grid size of 2 mm appears to be the most appropriate choice.

  6. Recent Developments in Grid Generation and Force Integration Technology for Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; VanDalsem, William R. (Technical Monitor)

    1994-01-01

    Recent developments in algorithms and software tools for generating overset grids for complex configurations are described. These include the overset surface grid generation code SURGRD and version 2.0 of the hyperbolic volume grid generation code HYPGEN. The SURGRD code is in beta test; its new features include the capability to march over a collection of panel networks, a variety of ways to control the side boundaries and the marching step sizes and distance, a more robust projection scheme, and an interpolation option. New features in version 2.0 of HYPGEN include a wider range of boundary condition types. The code also allows the user to specify different marching step sizes and distances for each point on the surface grid. A scheme that takes into account the overlapped zones on the body surface for the purpose of force and moment computation is also briefly described. The process involves the following two software modules: MIXSUR, a composite grid generation module that produces a collection of quadrilaterals and triangles on which pressure and viscous stresses are to be integrated; and OVERINT, a force-and-moment integration module.

  7. The eGo grid model: An open source approach towards a model of German high and extra-high voltage power grids

    NASA Astrophysics Data System (ADS)

    Mueller, Ulf Philipp; Wienholt, Lukas; Kleinhans, David; Cussmann, Ilka; Bunke, Wolf-Dieter; Pleßmann, Guido; Wendiggensen, Jochen

    2018-02-01

    There are several power grid modelling approaches suitable for simulations in the field of power grid planning. The restrictive policies of grid operators, regulators and research institutes concerning their original data and models have led to an increased interest in open-source grid models based on open data. By including all voltage levels between 60 kV (high voltage) and 380 kV (extra-high voltage), we dissolve the common distinction between transmission and distribution grids in energy system models and utilize a single, integrated model instead. An open data set, primarily for Germany, which can be used for non-linear, linear and linear-optimal power flow methods, was developed. This data set consists of an electrically parameterised grid topology as well as allocated generation and demand characteristics for present and future scenarios at high spatial and temporal resolution. The usability of the grid model was demonstrated by performing exemplary power flow optimizations. Based on a marginal-cost-driven power plant dispatch, subject to grid restrictions, congested power lines were identified. Continuous validation of the model is necessary in order to reliably model storage and grid expansion in ongoing research.
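    For illustration only, the linear ("DC") power flow mentioned above reduces to a single sparse linear solve for the bus voltage angles; the following minimal sketch (our own names, not eGo model code) computes angles and line flows for a small network:

```python
import numpy as np

def dc_power_flow(lines, injections, slack=0):
    """DC power flow: P = B * theta, losses and voltage magnitudes neglected.
    lines: list of (from_bus, to_bus, susceptance); injections: net power per bus
    (must sum to ~0). Returns bus angles (rad, slack fixed at 0) and line flows."""
    n = len(injections)
    B = np.zeros((n, n))
    for f, t, b in lines:               # assemble the nodal susceptance matrix
        B[f, f] += b; B[t, t] += b
        B[f, t] -= b; B[t, f] -= b
    keep = [i for i in range(n) if i != slack]
    theta = np.zeros(n)
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.asarray(injections, float)[keep])
    flows = [(f, t, b * (theta[f] - theta[t])) for f, t, b in lines]
    return theta, flows
```

On a two-bus network with one line, the entire injection at bus 0 flows across that line, which is a quick sanity check of the assembly.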

  8. Rapid inundation estimates at harbor scale using tsunami wave heights offshore simulation and coastal amplification laws

    NASA Astrophysics Data System (ADS)

    Gailler, A.; Loevenbruck, A.; Hebert, H.

    2013-12-01

    Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are most amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warnings at the scale of the western Mediterranean and NE Atlantic basins. We present here preliminary work that performs quick estimates of the inundation at individual harbors from these offshore tsunami forecast simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' law). The main limitation is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gauge records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms calculated for both hypothetical events and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). 
    Nonlinear shallow-water tsunami modeling performed on a single coarse 2' bathymetric grid is compared to the values given by time-consuming nested-grid simulations (and to observations, when available), in order to check to what extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end, several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (when using Green's law) and the mean bathymetric slope to consider near the studied coast (when using Synolakis' law).
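    Green's law, used for the empirical correction above, states that wave amplitude grows as the inverse fourth root of depth as a wave shoals; a minimal sketch follows (our own function, not CENALT code):

```python
def greens_law_amplitude(eta_offshore, depth_offshore, depth_coastal):
    """Shoaling amplification by Green's law: amplitude ~ depth**(-1/4).
    Valid for slowly varying depth, neglecting reflection and dissipation."""
    return eta_offshore * (depth_offshore / depth_coastal) ** 0.25

# Example: a 0.5 m offshore wave in 1000 m depth extrapolated to 10 m depth
# is amplified by a factor of (1000/10)**0.25 ~ 3.16.
amplified = greens_law_amplitude(0.5, 1000.0, 10.0)
```

The empirical parameter discussed in the abstract is precisely the coastal depth to which this extrapolation should be carried for a given harbor.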

  9. Rapid inundation estimates using coastal amplification laws in the western Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Gailler, Audrey; Loevenbruck, Anne; Hébert, Hélène

    2014-05-01

    Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are most amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warnings at the scale of the western Mediterranean and NE Atlantic basins. We present here preliminary work that performs quick estimates of the inundation at individual harbors from these offshore tsunami forecast simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' law). The main limitation is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gauge records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms calculated for both hypothetical events and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). 
    Nonlinear shallow-water tsunami modeling performed on a single coarse 2' bathymetric grid is compared to the values given by time-consuming nested-grid simulations (and to observations, when available), in order to check to what extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end, several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (when using Green's law) and the mean bathymetric slope to consider near the studied coast (when using Synolakis' law).

  10. Experimental and analytical study of close-coupled ventral nozzles for ASTOVL aircraft

    NASA Technical Reports Server (NTRS)

    Mcardle, Jack G.; Smith, C. Frederic

    1990-01-01

    Flow in a generic ventral nozzle system was studied experimentally and analytically with a block version of the PARC3D computational fluid dynamics program (a full Navier-Stokes equation solver) in order to evaluate the program's ability to predict system performance and internal flow patterns. For the experimental work, a one-third-size model tailpipe with a single large rectangular ventral nozzle mounted normal to the tailpipe axis was tested with unheated air at steady-state pressure ratios up to 4.0. The end of the tailpipe was closed to simulate a blocked exhaust nozzle. Measurements showed about a 5 1/2 percent flow-turning loss, reasonable nozzle performance coefficients, and a significant aftward axial component of thrust due to flow turning more than 90 deg. Flow behavior into and through the ventral duct is discussed and illustrated with paint-streak flow visualization photographs. For the analytical work, the same ventral system configuration was modeled with two computational grids to evaluate the effect of grid density. Both grids gave good results. The finer-grid solution produced more detailed flow patterns and predicted performance parameters, such as thrust and discharge coefficient, within 1 percent of the measured values. PARC3D flow visualization images are shown for comparison with the paint-streak photographs. Modeling and computational issues encountered in the analytical work are discussed.

  11. Directional kriging implementation for gridded data interpolation and comparative study with common methods

    NASA Astrophysics Data System (ADS)

    Mahmoudabadi, H.; Briggs, G.

    2016-12-01

    Gridded data sets, such as geoid models or datum shift grids, are commonly used in coordinate transformation algorithms. Grid files typically contain known or measured values at regular fixed intervals. The process of computing a value at an unknown location from the values in the grid data set is called "interpolation". Generally, interpolation methods predict a value at a given point by computing a weighted average of the known values in the neighborhood of the point. Geostatistical kriging is a widely used interpolation method for irregular networks. Kriging interpolation first analyzes the spatial structure of the input data, then generates a general model to describe the spatial dependencies. This model is used to calculate values at unsampled locations by finding the direction, shape, size, and weight of neighborhood points. Because it is based on a linear formulation for the best estimate, kriging is the optimal interpolation method in statistical terms. The kriging interpolation algorithm produces an unbiased prediction as well as the spatial distribution of uncertainty, allowing the interpolation error at any particular point to be estimated. Nevertheless, kriging is not widely used in geospatial applications today, especially applications that run on low-power devices or deal with large data files, because of the computational power and memory requirements of standard kriging techniques. In this paper, improvements are introduced in a directional kriging implementation by taking advantage of the structure of the grid files. The regular spacing of points simplifies finding the neighborhood points and computing their pairwise distances, reducing the complexity and improving the execution time of the kriging algorithm. The proposed method also iteratively loads small portions of the area of interest in different directions to reduce the amount of required memory. This makes the technique feasible on almost any computer processor. 
    Comparison between kriging and other standard interpolation methods demonstrated more accurate estimates, particularly for less dense data files.
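    As a generic illustration of kriging's linear formulation (not the directional, grid-optimized implementation described above), a minimal ordinary-kriging predictor with an assumed exponential variogram might look like:

```python
import numpy as np

def ordinary_kriging(xy_known, z_known, xy_query, sill=1.0, rng=10.0, nugget=0.0):
    """Ordinary kriging with an exponential variogram (illustrative parameters).
    Solves the system [Gamma 1; 1' 0][w; mu] = [gamma; 1] for each query point;
    the constraint row forces the weights to sum to 1 (unbiasedness)."""
    def gamma(h):  # exponential variogram model
        return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

    n = len(z_known)
    d = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    preds = np.empty(len(xy_query))
    for k, q in enumerate(xy_query):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy_known - q, axis=1))
        w = np.linalg.solve(A, b)
        preds[k] = w[:n] @ z_known
    return preds
```

On a regular grid, the pairwise-distance matrix d can be derived once from the fixed cell spacing rather than recomputed per neighborhood, which is the structural advantage the paper exploits.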

  12. A stand-alone tidal prediction application for mobile devices

    NASA Astrophysics Data System (ADS)

    Tsai, Cheng-Han; Fan, Ren-Ye; Yang, Yi-Chung

    2017-04-01

    It is essential for people conducting fishing, leisure, or research activities on the coast to have timely and handy tidal information. Although tidal information can be found easily on the internet or through mobile device applications, this information applies only to certain specific locations, not to arbitrary points on the coast, and it requires an internet connection. We have developed an application for Android devices that allows the user to obtain hourly tidal heights anywhere on the coast for the next 24 hours without any internet connection. All information needed for the tidal height calculation is stored in the application. To develop this application, we first simulated tides in the Taiwan Sea using the hydrodynamic model MIKE21 HD, developed by DHI. The simulation domain covers the whole coast of Taiwan and the surrounding seas with a grid size of 1 km by 1 km, which allows us to calculate tides with high spatial resolution. The boundary conditions for the simulation domain were obtained from the Tidal Model Driver of Oregon State University, using its tidal constants for eight constituents: M2, S2, N2, K2, K1, O1, P1, and Q1. The simulation calculates tides for 183 days so that the tidal constants for the above eight constituents of each water grid cell can be extracted by harmonic analysis. Using the calculated tidal constants, we can predict the tides in each grid cell of our simulation domain, which is useful when one needs tidal information for any location in the Taiwan Sea. For the mobile application, however, we only store the eight tidal constants for the water grid cells on the coast. Once the user activates the application, it reads the longitude and latitude from the GPS sensor in the mobile device and finds the nearest coastal grid cell that has our tidal constants. Then, the application calculates the tidal height variation based on harmonic analysis. 
    The application also allows the user to input a location and time to obtain tides for any historic or future date at that location. The predicted tides have been verified against the historic tidal records of several tidal stations; the verification shows that the tides predicted by the application match the measured records well.
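    The harmonic synthesis underlying such a prediction is a sum of cosine constituents; the sketch below uses standard published angular speeds for four of the eight constituents, but the function and data layout are our own illustration, not the application's code:

```python
import math

# Angular speeds in degrees per hour (standard values for these constituents).
SPEEDS = {"M2": 28.9841042, "S2": 30.0, "K1": 15.0410686, "O1": 13.9430356}

def tide_height(constants, hours, mean_level=0.0):
    """Predict tidal height by harmonic synthesis.
    constants: {constituent: (amplitude_m, phase_lag_deg)} extracted by
    harmonic analysis for one grid cell; hours: time since the reference
    epoch. Nodal corrections are omitted for brevity."""
    h = mean_level
    for name, (amp, phase) in constants.items():
        h += amp * math.cos(math.radians(SPEEDS[name] * hours - phase))
    return h
```

In the application described above, the per-cell constants are the only data that must be shipped with the app, which is why no internet connection is needed at prediction time.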

  13. Hybrid PV/Wind Power Systems Incorporating Battery Storage and Considering the Stochastic Nature of Renewable Resources

    NASA Astrophysics Data System (ADS)

    Barnawi, Abdulwasa Bakr

    Hybrid power generation systems and distributed generation technology are attracting more investment due to the growing demand for energy and the increasing awareness of emissions and their environmental impacts, such as global warming and pollution. The price fluctuation of crude oil is an additional reason for the leading oil-producing countries to consider renewable resources as an alternative. Saudi Arabia, as the top oil-exporting country in the world, announced the "Saudi Arabia Vision 2030", which targets generating 9.5 GW of electricity from renewable resources. Two of the most promising renewable technologies are wind turbines (WT) and photovoltaic cells (PV). The integration or hybridization of photovoltaics and wind turbines with battery storage leads to higher adequacy and redundancy for both autonomous and grid-connected systems. This study presents a method for optimal generation unit planning by installing a proper number of solar cells, wind turbines, and batteries in such a way that the net present value (NPV) is minimized while the overall system redundancy and adequacy are maximized. A new renewable fraction technique (RFT) is used to perform the generation unit planning. RFT was tested and validated against particle swarm optimization and HOMER Pro under the same conditions and environment. Randomness and uncertainty in renewable resources and load are considered. Both autonomous and grid-connected system designs were adopted in the optimal generation unit planning process, and an uncertainty factor was designed and incorporated in both. In the autonomous hybrid system design model, a strategy that includes an additional operating reserve, as a percentage of the hourly load, was considered to deal with resource uncertainty, since the battery storage system is the only backup. 
    In the grid-connected hybrid system design model, demand response was incorporated to overcome the impact of uncertainty and to enable energy trading between the hybrid grid utility and the main grid utility, in addition to the designed uncertainty factor. After the generation unit planning was carried out and component sizing was determined, an adequacy evaluation was conducted by calculating the loss-of-load-expectation adequacy index for different contingency criteria, considering the probability of equipment failure. Finally, microgrid planning was conducted by finding the proper size and location for installing distributed generation units in a radial distribution network.

  14. About the Need of Combining Power Market and Power Grid Model Results for Future Energy System Scenarios

    NASA Astrophysics Data System (ADS)

    Mende, Denis; Böttger, Diana; Löwer, Lothar; Becker, Holger; Akbulut, Alev; Stock, Sebastian

    2018-02-01

    The European power grid infrastructure faces various challenges due to the expansion of renewable energy sources (RES). To investigate interactions between power generation and the power grid, models of both the power market and the power grid are necessary. This paper describes the basic functionality and working principles of both types of models, as well as the steps needed to couple power market results with the power grid model. The combination of these models is beneficial in terms of obtaining realistic power flow scenarios in the grid model and of being able to pass results of the power flow, and its restrictions, back to the market model. Focus is placed on the power grid model and possible application examples, such as algorithms for grid analysis and operation and dynamic equipment modelling.

  15. A simple grid implementation with Berkeley Open Infrastructure for Network Computing using BLAST as a model

    PubMed Central

    Pinthong, Watthanai; Muangruen, Panya

    2016-01-01

    The development of high-throughput technologies, such as next-generation sequencing, allows thousands of experiments to be performed simultaneously while reducing resource requirements. Consequently, a massive amount of experimental data is now rapidly generated. Nevertheless, the data are not readily usable or meaningful until they are further analysed and interpreted. Due to the size of the data, a high-performance computer (HPC) is required for the analysis and interpretation. However, an HPC is expensive and difficult to access. Other means, such as cloud computing services and grid computing systems, were therefore developed to give researchers the power of an HPC without the need to purchase and maintain one. In this study, we implemented grid computing in a computer training center environment using the Berkeley Open Infrastructure for Network Computing (BOINC) as a job distributor and data manager, combining all desktop computers to virtualize an HPC. Fifty desktop computers were used to set up a grid system during off-hours. In order to test the performance of the grid system, we adapted the Basic Local Alignment Search Tool (BLAST) to the BOINC system. Sequencing results from the Illumina platform were aligned to the human genome database by BLAST on the grid system, and the results and processing times were compared to those from a single desktop computer and an HPC. The estimated durations of BLAST analysis for 4 million sequence reads on a desktop PC, an HPC and the grid system were 568, 24 and 5 days, respectively. Thus, the grid implementation of BLAST with BOINC is an efficient alternative to an HPC for sequence alignment. The grid implementation with BOINC also helped tap unused computing resources during off-hours and could easily be modified for other available bioinformatics software. PMID:27547555
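    A typical preparatory step in such a deployment is cutting the query set into work units before the job distributor hands them out; a minimal sketch of such a splitter (our own helper, not part of BOINC or BLAST) might be:

```python
def split_fasta(text, n_chunks):
    """Split a FASTA string into n_chunks round-robin groups of records,
    so that each work unit receives a similar number of query sequences."""
    records, cur = [], []
    for line in text.splitlines():
        if line.startswith(">"):          # header line starts a new record
            if cur:
                records.append("\n".join(cur))
            cur = [line]
        elif cur:
            cur.append(line)              # sequence lines of the current record
    if cur:
        records.append("\n".join(cur))
    chunks = [[] for _ in range(n_chunks)]
    for i, rec in enumerate(records):     # round-robin assignment
        chunks[i % n_chunks].append(rec)
    return ["\n".join(c) for c in chunks]
```

Each returned chunk can then be packaged as an independent BLAST work unit and dispatched to an idle desktop.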

  16. A high-order spatial filter for a cubed-sphere spectral element model

    NASA Astrophysics Data System (ADS)

    Kang, Hyun-Gyu; Cheong, Hyeong-Bin

    2017-04-01

    A high-order spatial filter is developed for the spectral-element-method dynamical core on the cubed-sphere grid, which employs the Gauss-Lobatto Lagrange interpolating polynomials (GLLIP) as orthogonal basis functions. The filter equation is a high-order Helmholtz equation, corresponding to the implicit time-differencing of a diffusion equation that employs a high-order Laplacian. The Laplacian operator is discretized within a cell, the building block of the cubed-sphere grid, which consists of the Gauss-Lobatto grid. When discretizing a high-order Laplacian, the requirement of C0 continuity along the cell boundaries means that grid points in neighboring cells must be used for the target cell; the number of neighboring cells is nearly quadratically proportional to the filter order. The discrete Helmholtz equation yields a huge, highly sparse N*N matrix equation, with N the total number of grid points on the globe. The number of nonzero entries is also almost quadratically proportional to the filter order. Filtering is accomplished by solving this huge matrix equation. While requiring significant computing time, the solution of the global matrix provides a filtered field free of discontinuities along the cell boundaries. To achieve computational efficiency and accuracy at the same time, the solution of the matrix equation can instead be obtained by accounting for only a finite number of adjacent cells. This is called a local-domain filter. It was shown that, to remove numerical noise near the grid scale, inclusion of 5*5 cells in the local-domain filter was sufficient, giving the same accuracy as the global-domain solution while reducing the computing time to a considerably lower level. The high-order filter was evaluated using standard test cases, including the baroclinic instability of the zonal flow. 
    Results indicated that the filter removes grid-scale numerical noise better than explicit high-order viscosity. It was also shown that the filter can easily be implemented on distributed-memory parallel computers with desirable scalability.
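    In one dimension with periodic boundaries, the implicit high-order filter described above reduces to dividing each Fourier mode by 1 + alpha*k^(2p); the following sketch is a drastically simplified analogue of the cubed-sphere filter, not the model code:

```python
import numpy as np

def highorder_filter_1d(u, alpha=1.0, p=2):
    """Implicit high-order filter on a 1-D periodic grid:
    solve (I + alpha * (-Laplacian)**p) u_f = u in spectral space.
    Large p sharpens the cutoff, damping only near-grid-scale modes."""
    n = len(u)
    k = 2.0 * np.pi * np.fft.fftfreq(n)          # wavenumbers
    response = 1.0 / (1.0 + alpha * (k ** 2) ** p)
    return np.real(np.fft.ifft(np.fft.fft(u) * response))
```

The response is exactly 1 for the mean (k = 0) and smallest for the grid-scale (Nyquist) mode, mirroring the noise-selective behavior reported for the cubed-sphere filter.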

  17. Influence of lubrication forces in direct numerical simulations of particle-laden flows

    NASA Astrophysics Data System (ADS)

    Maitri, Rohit; Peters, Frank; Padding, Johan; Kuipers, Hans

    2016-11-01

    Accurate numerical representation of particle-laden flows is important for fundamental understanding and for optimizing complex processes such as proppant transport in hydraulic fracturing. Liquid-solid flows are fundamentally different from gas-solid flows because of lower density ratios (solid to fluid) and non-negligible lubrication forces. In this interface-resolved model, fluid-solid coupling is achieved by incorporating the no-slip boundary condition implicitly at the particles' surfaces by means of an efficient second-order ghost-cell immersed boundary method. A fixed Eulerian grid is used for solving the Navier-Stokes equations, and the particle-particle interactions are implemented using a soft-sphere collision model and a sub-grid-scale lubrication model. Because the lubrication force acts on scales smaller than the grid size, it is important to implement the lubrication model accurately. In this work, the effects of different implementations of the lubrication model on particle dynamics are studied for various flow conditions. The effect of particle surface roughness on the lubrication force and on particle transport is also investigated. This study is aimed at developing a validated methodology for incorporating lubrication models in direct numerical simulations of particle-laden flows. This research is supported by Grant 13CSER014 of the Foundation for Fundamental Research on Matter (FOM), which is part of the Netherlands Organisation for Scientific Research (NWO).
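    Sub-grid lubrication closures of the kind discussed above are commonly based on the leading-order lubrication force between two spheres, with the gap saturated at a roughness height so the force stays finite at contact; a hedged sketch (parameter names ours, not the paper's implementation):

```python
import math

def lubrication_force_normal(mu, r1, r2, v_n, gap, roughness=1e-6):
    """Leading-order normal lubrication force (SI units) between two spheres
    approaching at relative normal velocity v_n across a fluid-filled gap.
    Below the roughness height the gap is saturated, mimicking the effect of
    surface asperities that prevent the force from diverging."""
    r_eff = r1 * r2 / (r1 + r2)      # reduced radius of the sphere pair
    h = max(gap, roughness)          # sub-grid cutoff at the roughness scale
    return 6.0 * math.pi * mu * r_eff ** 2 * v_n / h
```

The 1/h divergence is why the force must be added analytically once the gap falls below the grid resolution, and why the roughness cutoff directly affects the simulated particle dynamics.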

  18. Fat fractal scaling of drainage networks from a random spatial network model

    USGS Publications Warehouse

    Karlinger, Michael R.; Troutman, Brent M.

    1992-01-01

    An alternative quantification of the scaling properties of river channel networks is explored using a spatial network model. Whereas scaling descriptions of drainage networks have previously been presented using a fractal analysis primarily of the channel lengths, we illustrate the scaling of the surface area of the channels defining the network pattern with an exponent that is independent of the fractal dimension but not of the fractal nature of the network. The methodology presented is a fat fractal analysis in which the drainage basin minus the channel area is considered the fat fractal. Random channel networks within a fixed basin area are generated on grids of different scales. The sample channel networks generated by the model have a common outlet of fixed width and a rule of upstream channel narrowing specified by a diameter branching exponent using hydraulic and geomorphologic principles. Scaling exponents are computed for each sample network on a given grid size and are regressed against network magnitude. Results indicate that the sizes of the exponents are related to the magnitude of the networks and generally decrease as network magnitude increases. Cases showing differences in scaling exponents for networks of like magnitude suggest a direction for future work regarding other topologic basin characteristics as potential explanatory variables.

  19. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE PAGES

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which is lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  20. RACORO Continental Boundary Layer Cloud Investigations: 1. Case Study Development and Ensemble Large-Scale Forcings

    NASA Technical Reports Server (NTRS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang

    2015-01-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which is lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. 
The cases developed are available to the general modeling community for studying continental boundary clouds.

  1. RACORO continental boundary layer cloud investigations: 1. Case study development and ensemble large-scale forcings

    NASA Astrophysics Data System (ADS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which is lower than the ~0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited in representing details of cloud onset, as well as tight gradients and fine-scale transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary layer clouds.
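    The lognormal representation of the aircraft-measured aerosol size distributions can be sketched as follows. This is a minimal illustration with synthetic data, not the RACORO dataset; the mode parameters and noise level are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def lognormal_mode(d, n_total, d_g, sigma_g):
    """Single-mode number size distribution dN/dlnD (cm^-3)."""
    return (n_total / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-0.5 * (np.log(d / d_g) / np.log(sigma_g)) ** 2))

# Hypothetical measured distribution: diameters in um, 5% multiplicative noise
d = np.logspace(-2, 0, 40)
true = lognormal_mode(d, 800.0, 0.12, 1.6)
rng = np.random.default_rng(0)
obs = true * (1 + 0.05 * rng.standard_normal(d.size))

# Fit recovers (N_total, geometric mean diameter, geometric std deviation)
popt, _ = curve_fit(lognormal_mode, d, obs, p0=[500.0, 0.1, 1.5])
n_fit, dg_fit, sg_fit = popt
```

    In practice multiple modes would be summed and fit simultaneously; the single-mode fit above conveys the compact three-parameter representation the abstract refers to.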

  2. The Role of Boundary-Layer and Cumulus Convection on Dust Emission, Mixing, and Transport Over Desert Regions

    NASA Astrophysics Data System (ADS)

    Takemi, T.; Yasui, M.

    2005-12-01

    Recent studies on dust emission and transport have been concerned with small-scale atmospheric processes in order to incorporate them as subgrid-scale effects in large-scale numerical prediction models. In the present study, we investigated the dynamical processes and mechanisms of dust emission, mixing, and transport induced by boundary-layer and cumulus convection under fair-weather conditions over a Chinese desert. We performed a set of sensitivity experiments as well as a control simulation in order to examine the effects of vertical wind shear, upper-level wind speed, and moist convection by using a simplified and idealized modeling framework. The results of the control experiment showed that surface dust emission was first induced before noon by intense convective motion that not only developed in the boundary layer but also penetrated into the free troposphere. In the afternoon hours, boundary-layer dry convection actively mixed and transported dust within the boundary layer. Some of the convective cells penetrated above the boundary layer, which led to the generation of cumulus clouds and hence gradually increased the dust content in the free troposphere. Coupled effects of the dry and moist convection played an important role in inducing surface dust emission and transporting dust vertically. This was clearly demonstrated through the comparison of the results between the control and the sensitivity experiments. The results of the control simulation were compared with lidar measurements, and the simulation captured well the observed diurnal features of the upward transport of dust. We also examined the dependence of the simulated results on grid resolution, with the grid size varied from 250 m up to 4 km. A significant difference was found between the 2-km and 4-km grids; when a cumulus parameterization was added to the 4-km run, the column dust content became comparable to the other cases. This result suggests that subgrid parameterizations are required when the grid size exceeds the order of 1 km under fair-weather conditions.

  3. What is the effect of LiDAR-derived DEM resolution on large-scale watershed model results?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ping Yang; Daniel B. Ames; Andre Fonseca

    This paper examines the effect of raster cell size on hydrographic feature extraction and hydrological modeling using LiDAR-derived DEMs. LiDAR datasets for three experimental watersheds were converted to DEMs at various cell sizes. Watershed boundaries and stream networks were delineated from each DEM and were compared to reference data. Hydrological simulations were conducted and the outputs were compared. Smaller cell size DEMs consistently resulted in less difference between DEM-delineated features and reference data. However, only minor differences were found between streamflow simulations from a lumped watershed model run at a daily time step and aggregated to annual averages. These findings indicate that while higher resolution DEM grids may result in more accurate representation of terrain characteristics, such variations do not necessarily improve watershed-scale simulation modeling. Hence the additional expense of generating high-resolution DEMs for the purpose of watershed modeling at daily or longer time steps may not be warranted.

  4. A Micro-Grid Simulator Tool (SGridSim) using Effective Node-to-Node Complex Impedance (EN2NCI) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udhay Ravishankar; Milos Manic

    2013-08-01

    This paper presents a micro-grid simulator tool useful for implementing and testing multi-agent controllers (SGridSim). As a common engineering practice it is important to have a tool that simplifies the modeling of the salient features of a desired system. In electric micro-grids, these salient features are the voltage and power distributions within the micro-grid. Current simplified electric power grid simulator tools such as PowerWorld, PowerSim, Gridlab, etc., model only the power distribution features of a desired micro-grid. Other power grid simulators such as Simulink, Modelica, etc., use detailed modeling to accommodate the voltage distribution features. This paper presents a SGridSim micro-grid simulator tool that simplifies the modeling of both the voltage and power distribution features in a desired micro-grid. The SGridSim tool accomplishes this simplified modeling by using Effective Node-to-Node Complex Impedance (EN2NCI) models of components that typically make up a micro-grid. The term EN2NCI model means that the impedance-based components of a micro-grid are modeled as single impedances tied between their respective voltage nodes on the micro-grid. Hence the benefits of the presented SGridSim tool are 1) simulation of a micro-grid is performed strictly in the complex domain; 2) faster simulation of a micro-grid by avoiding the simulation of detailed transients. An example micro-grid model was built using the SGridSim tool and tested to simulate both the voltage and power distribution features with a total absolute relative error of less than 6%.
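    The node-to-node complex impedance idea can be sketched with ordinary nodal analysis: build a complex admittance matrix from branch impedances, hold the source bus at a fixed voltage, and solve for the remaining node voltages entirely in the complex domain. The three-node network and all values below are made up for illustration.

```python
import numpy as np

# Toy 3-node micro-grid: node 0 is the source bus held at 1.0 pu; each branch
# is a single node-to-node complex impedance (the EN2NCI idea).
Z = {(0, 1): 0.02 + 0.06j, (1, 2): 0.03 + 0.08j, (0, 2): 0.05 + 0.10j}
I_inj = np.array([0.0, -0.4 - 0.1j, -0.3 - 0.05j])  # load current injections (pu)

n = 3
Y = np.zeros((n, n), dtype=complex)
for (i, j), z in Z.items():
    y = 1.0 / z
    Y[i, i] += y; Y[j, j] += y
    Y[i, j] -= y; Y[j, i] -= y

# Reduce out the fixed source node and solve Y_rr V_r = I_r - Y_r0 V0
V0 = 1.0 + 0.0j
idx = [1, 2]
rhs = I_inj[idx] - Y[np.ix_(idx, [0])].ravel() * V0
V = np.linalg.solve(Y[np.ix_(idx, idx)], rhs)   # complex node voltages
```

    Because the solve is purely algebraic in the complex domain, no transient time-stepping is needed, which is the speedup the abstract claims.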

  5. The impact of ancillary services in optimal DER investment decisions

    DOE PAGES

    Cardoso, Goncalo; Stadler, Michael; Mashayekh, Salman; ...

    2017-04-25

    Microgrid resource sizing problems typically include the analysis of a combination of value streams such as peak shaving, load shifting, or load scheduling, which support the economic feasibility of the microgrid deployment. However, microgrid benefits can go beyond these, and the ability to provide ancillary grid services such as frequency regulation or spinning and non-spinning reserves is well known, despite typically not being considered in resource sizing problems. This paper proposes the expansion of the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state-of-the-art microgrid resource sizing model, to include revenue streams resulting from the participation in ancillary service markets. Results suggest that participation in such markets may not only influence the optimum resource sizing, but also the operational dispatch, with results being strongly influenced by the exact market requirements and clearing prices.
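    The qualitative effect of adding an ancillary-service revenue stream to a sizing problem can be sketched with a one-dimensional sweep (DER-CAM itself is a mixed-integer optimization; all cost and revenue figures below are invented for illustration).

```python
import numpy as np

# Hypothetical battery-sizing sweep: capital cost vs. peak-shaving savings,
# with and without reserve-market (ancillary service) revenue.
cap_kwh = np.linspace(0, 200, 201)
capital = 80.0 * cap_kwh                              # $/kWh installed (assumed)
peak_savings = 5000.0 * (1 - np.exp(-cap_kwh / 50.0)) # diminishing returns
reserve_rev = 30.0 * cap_kwh * 0.5                    # half the capacity bid as reserve

net_cost_no_as = capital - peak_savings
net_cost_with_as = capital - peak_savings - reserve_rev

best_no_as = cap_kwh[np.argmin(net_cost_no_as)]
best_with_as = cap_kwh[np.argmin(net_cost_with_as)]
```

    With these made-up numbers the reserve revenue shifts the cost-optimal capacity upward, mirroring the abstract's finding that ancillary markets influence the optimum sizing.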

  6. The impact of ancillary services in optimal DER investment decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardoso, Goncalo; Stadler, Michael; Mashayekh, Salman

    Microgrid resource sizing problems typically include the analysis of a combination of value streams such as peak shaving, load shifting, or load scheduling, which support the economic feasibility of the microgrid deployment. However, microgrid benefits can go beyond these, and the ability to provide ancillary grid services such as frequency regulation or spinning and non-spinning reserves is well known, despite typically not being considered in resource sizing problems. This paper proposes the expansion of the Distributed Energy Resources Customer Adoption Model (DER-CAM), a state-of-the-art microgrid resource sizing model, to include revenue streams resulting from the participation in ancillary service markets. Results suggest that participation in such markets may not only influence the optimum resource sizing, but also the operational dispatch, with results being strongly influenced by the exact market requirements and clearing prices.

  7. NCAR global model topography generation software for unstructured grids

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.

    2015-06-01

    It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
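    The core aggregation step such software performs, computing grid-box mean elevation and sub-grid variance from a finer source topography, can be sketched as below. This toy uses a regular lat-lon block average on fake elevations; the NCAR software additionally handles unstructured target grids and land fraction.

```python
import numpy as np

# Hypothetical fine-resolution topography aggregated to a coarse model grid:
# per-grid-box mean elevation and sub-grid variance (the quantities used by
# gravity-wave and turbulent mountain stress parameterizations).
rng = np.random.default_rng(1)
topo = rng.gamma(2.0, 200.0, size=(240, 360))    # fake elevations (m)

factor = 60                                       # 60x60 source cells per box
h = topo.reshape(240 // factor, factor, 360 // factor, factor)
mean_elev = h.mean(axis=(1, 3))                   # grid-box mean elevation
subgrid_var = h.var(axis=(1, 3))                  # sub-grid variance
```

    For an unstructured (e.g. cubed-sphere or Voronoi) target grid, the reshape would be replaced by a point-in-cell binning of the source cells, but the mean/variance reduction is the same.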

  8. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ˜ 100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ˜ 10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. 
Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  9. Diviner lunar radiometer gridded brightness temperatures from geodesic binning of modeled fields of view

    NASA Astrophysics Data System (ADS)

    Sefton-Nash, E.; Williams, J.-P.; Greenhagen, B. T.; Aye, K.-M.; Paige, D. A.

    2017-12-01

    An approach is presented to efficiently produce high quality gridded data records from the large, global point-based dataset returned by the Diviner Lunar Radiometer Experiment aboard NASA's Lunar Reconnaissance Orbiter. The need to minimize data volume and processing time in production of science-ready map products is increasingly important with the growth in data volume of planetary datasets. Diviner makes on average >1400 observations per second of radiance that is reflected and emitted from the lunar surface, using 189 detectors divided into 9 spectral channels. Data management and processing bottlenecks are amplified by modeling every observation as a probability distribution function over the field of view, which can increase the required processing time by 2-3 orders of magnitude. Geometric corrections, such as projection of data points onto a digital elevation model, are numerically intensive and therefore it is desirable to perform them only once. Our approach reduces bottlenecks through parallel binning and efficient storage of a pre-processed database of observations. Database construction is via subdivision of a geodesic icosahedral grid, with a spatial resolution that can be tailored to suit the field of view of the observing instrument. Global geodesic grids with high spatial resolution are normally impractically memory intensive. We therefore demonstrate a minimum-storage and highly parallel method to bin very large numbers of data points onto such a grid. A database of the pre-processed and binned points is then used for production of mapped data products that is significantly faster than if unprocessed points were used. We explore quality controls in the production of gridded data records by conditional interpolation, allowed only where data density is sufficient. The resultant effects on the spatial continuity and uncertainty in maps of lunar brightness temperatures are illustrated. 
We identify four binning regimes based on trades between the spatial resolution of the grid, the size of the FOV and the on-target spacing of observations. Our approach may be applicable and beneficial for many existing and future point-based planetary datasets.
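    The point-to-grid binning with a data-density cutoff can be sketched as follows. For simplicity this uses a lat/lon grid rather than the geodesic icosahedral grid of the paper, and all observations are synthetic.

```python
import numpy as np

# Bin synthetic point brightness temperatures onto a grid: accumulate sums
# and counts per cell, then form the mean only where density is sufficient.
rng = np.random.default_rng(2)
lat = rng.uniform(-90, 90, 100_000)
lon = rng.uniform(-180, 180, 100_000)
tb = 250 + 30 * np.cos(np.radians(lat)) + rng.normal(0, 2, lat.size)

res = 2.0                                          # grid cell size (degrees)
nrow, ncol = int(180 / res), int(360 / res)
i = np.minimum(((lat + 90) / res).astype(int), nrow - 1)
j = np.minimum(((lon + 180) / res).astype(int), ncol - 1)

s = np.zeros((nrow, ncol)); n = np.zeros((nrow, ncol))
np.add.at(s, (i, j), tb)      # unbuffered accumulation handles repeated cells
np.add.at(n, (i, j), 1)
tb_grid = np.where(n >= 3, s / np.maximum(n, 1), np.nan)  # density cutoff
```

    Because each point maps independently to a cell index, the accumulation can be split across workers and the partial sum/count grids merged, which is the parallelism the paper exploits.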

  10. Nonlinear plasma wave models in 3D fluid simulations of laser-plasma interaction

    NASA Astrophysics Data System (ADS)

    Chapman, Thomas; Berger, Richard; Arrighi, Bill; Langer, Steve; Banks, Jeffrey; Brunner, Stephan

    2017-10-01

    Simulations of laser-plasma interaction (LPI) in inertial confinement fusion (ICF) conditions require multi-mm spatial scales due to the typical laser beam size and durations of order 100 ps in order for numerical laser reflectivities to converge. To be computationally achievable, these scales necessitate a fluid-like treatment of light and plasma waves with a spatial grid size on the order of the light wavelength. Plasma waves experience many nonlinear phenomena not naturally described by a fluid treatment, such as frequency shifts induced by trapping, a nonlinear (typically suppressed) Landau damping, and mode couplings leading to instabilities that can cause the plasma wave to decay rapidly. These processes affect the onset and saturation of stimulated Raman and Brillouin scattering, and are of direct interest to the modeling and prediction of deleterious LPI in ICF. It is not currently computationally feasible to simulate these Debye length-scale phenomena in 3D across experimental scales. Analytically derived and/or numerically benchmarked models of processes occurring at scales finer than the fluid simulation grid offer a path forward. We demonstrate the impact of a range of kinetic processes on plasma reflectivity via models included in the LPI simulation code pF3D. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Validation of Land-Surface Mosaic Heterogeneity in the GEOS DAS

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Molod, Andrea; Houser, Paul R.; Schubert, Siegfried

    1999-01-01

    The Mosaic Land-surface Model (LSM) has been incorporated into the current GEOS Data Assimilation System (DAS). The LSM uses a more advanced representation of physical processes than previous versions of the GEOS DAS, including the representation of sub-grid heterogeneity of the land-surface through the Mosaic approach. As a first approximation, Mosaic assumes that all similar surface types within a grid-cell can be lumped together as a single 'tile'. Within one GCM grid-cell, there might be 1 - 5 different tiles or surface types. All tiles are subjected to the grid-scale forcing (radiation, air temperature and specific humidity, and precipitation), and the sub-grid variability is a function of the tile characteristics. In this paper, we validate the LSM sub-grid scale variability (tiles) using a variety of surface observing stations from the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) Program. One of the primary goals of SGP ARM is to study the variability of atmospheric radiation within a GCM grid-cell. Enough surface data has been collected by ARM to extend this goal to sub-grid variability of the land-surface energy and water budgets. The time period of this study is the Summer of 1998 (June 1 - September 1). The ARM site data consist of surface meteorology, energy flux (eddy correlation and Bowen ratio), and soil water observations spread over an area similar to the size of a GCM grid-cell. Various ARM stations are described as wheat and alfalfa crops, pasture and range land. The LSM tiles considered at the grid-space (2 x 2.5) nearest the ARM site include grassland, deciduous forests, bare soil and dwarf trees. Surface energy and water balances for each tile type are compared with observations. Furthermore, we discuss the land-surface sub-grid variability of both the ARM observations and the DAS.
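    The Mosaic aggregation step is simple enough to state directly: each tile sees the same grid-scale forcing, and the grid-cell flux is the area-weighted sum of per-tile fluxes. A minimal sketch with made-up tile fractions and fluxes:

```python
import numpy as np

# Four hypothetical tiles in one grid-cell: grassland, deciduous forest,
# bare soil, dwarf trees. Fractions sum to 1; fluxes are per-tile values.
frac = np.array([0.45, 0.30, 0.15, 0.10])          # tile area fractions
sensible = np.array([120.0, 80.0, 180.0, 95.0])    # sensible heat flux (W m^-2)
latent = np.array([210.0, 260.0, 40.0, 150.0])     # latent heat flux (W m^-2)

H_cell = frac @ sensible     # grid-cell mean sensible heat flux
LE_cell = frac @ latent      # grid-cell mean latent heat flux
```

    Validation of the kind described in the abstract compares both the per-tile values (against individual ARM stations) and the aggregated cell means.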

  12. Optimal system sizing in grid-connected photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Simoens, H. M.; Baert, D. H.; de Mey, G.

    A cost/benefit analysis for optimizing the combination of photovoltaic (PV) panels, batteries and an inverter for grid-interconnected systems at a 500 W/day Belgian residence is presented. It is assumed that some power purchases from the grid will always be necessary, and that excess PV power can be fed into the grid. A minimal value for the cost divided by the performance is defined for economic optimization. Shortages and excesses are calculated for PV panels of 0.5-10 kWp output, with consideration given to the advantages of a battery back-up. The minimal economic value is found to increase with the magnitude of PV output, and an inverter should never be rated at more than half the array maximum output. A maximum panel size for the Belgian residence is projected to be 6 kWp.
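    The shortage/excess bookkeeping behind such an analysis can be sketched with a toy daily energy balance (all numbers hypothetical, not the Belgian case): surplus PV first charges the battery and is then fed into the grid; deficits first discharge the battery and are then purchased.

```python
import numpy as np

rng = np.random.default_rng(3)
demand = 5.0                                        # daily demand (kWh, assumed)
sun = np.clip(rng.normal(3.0, 1.2, 365), 0, None)   # peak-sun hours per day

def annual_balance(panel_kwp, batt_kwh):
    soc, bought, fed_in = 0.0, 0.0, 0.0
    for s in sun:
        net = panel_kwp * s - demand
        if net >= 0:                                # surplus: charge, then export
            charge = min(net, batt_kwh - soc)
            soc += charge
            fed_in += net - charge
        else:                                       # deficit: discharge, then buy
            discharge = min(-net, soc)
            soc -= discharge
            bought += -net - discharge
    return bought, fed_in

bought_1kwp, _ = annual_balance(1.0, 4.0)
bought_4kwp, fed_4kwp = annual_balance(4.0, 4.0)
```

    Sweeping `panel_kwp` and `batt_kwh` over a grid of candidate sizes and dividing annualized cost by the resulting performance gives the optimization criterion the abstract describes.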

  13. View of Pakistan Atomic Energy Commission towards SMPR's in the light of KANUPP performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huseini, S.D.

    1985-01-01

    The developing countries in general do not have grid capacities adequate to incorporate standard-size, economical but rather large nuclear power plants to maximum advantage. Therefore, small and medium size reactors (SMPR) have been, and still are, of particular interest to the developing countries in spite of certain known problems with these reactors. Pakistan Atomic Energy Commission (PAEC) has been operating a small CANDU-type PHWR plant since 1971, when it was connected to the local Karachi grid. This paper describes PAEC's view in the light of KANUPP performance with respect to such factors associated with SMPRs as selection of suitable reactor size and type, its operation in a grid of small capacity, flexibility of operation and its role as a reliable source of electrical power.

  14. Rapid inundation estimates at harbor scale using tsunami wave heights offshore simulation and Green's law approach

    NASA Astrophysics Data System (ADS)

    Gailler, Audrey; Hébert, Hélène; Loevenbruck, Anne

    2013-04-01

    Improvements in the availability of sea-level observations and advances in numerical modeling techniques are increasing the potential for tsunami warnings to be based on numerical model forecasts. Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are mostly amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is exacerbated when detailed grids are required for the precise modeling of the coastline response on the scale of an individual harbor. In fact, when facing the problem of the interaction of the tsunami wavefield with a shoreline, any numerical simulation must be performed over an increasingly fine grid, which in turn mandates a reduced time step, and the use of a fully non-linear code. Such calculations then become prohibitively time-consuming, which is clearly unacceptable in the framework of real-time warning. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami wave heights in high seas, and tsunami warning maps at the scale of the western Mediterranean and NE Atlantic basins. We present here a preliminary work that performs quick estimates of the inundation at individual harbors from these deep wave heights simulations. The method involves an empirical correction relation derived from Green's law, expressing conservation of wave energy flux to extend the gridded wave field into the harbor with respect to the nearby deep-water grid node. 
The main limitation of this method is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gauge records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, a set of synthetic mareograms is calculated for both hypothetical and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids characterized by a coarse resolution over deep-water regions and an increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). This synthetic dataset is then used to approximate the empirical parameters of the correction equation. Results of inundation estimates in several French Mediterranean harbors obtained with the fast "Green's law - derived" method are presented and compared with values given by time-consuming nested-grid simulations.
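    Green's law itself is a one-line shoaling relation: conserving wave energy flux, the wave height scales with the depth ratio to the 1/4 power. A minimal sketch, with an empirical correction factor `alpha` of the kind the authors fit to their synthetic dataset (set to 1 here; depths and heights are illustrative):

```python
def greens_law(h_deep, depth_deep, depth_coast, alpha=1.0):
    """Shoaled wave height from Green's law, H ∝ d^(-1/4).

    h_deep: offshore wave height (m); depths in m; alpha is an empirical
    site-specific correction factor (assumed 1.0 here).
    """
    return alpha * h_deep * (depth_deep / depth_coast) ** 0.25

# 0.2 m offshore amplitude at 4000 m depth, extended to a 10 m harbor node:
h_harbor = greens_law(0.2, 4000.0, 10.0)   # ≈ 0.89 m
```

    The cheapness of this formula, compared with a fully non-linear nested-grid inundation run, is what makes it usable under real-time warning constraints.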

  15. Long range Debye-Hückel correction for computation of grid-based electrostatic forces between biomacromolecules

    PubMed Central

    2014-01-01

    Background Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme. Results We found that the inclusion of the long-range electrostatic correction increased the accuracy of both the protein-protein interaction profiles and the protein diffusion coefficients at low ionic strength. Conclusions An advantage of this method is the low additional computational cost required to treat long-range electrostatic interactions in large biomacromolecular systems. Moreover, the implementation described here for BD simulations of protein solutions can also be applied in implicit solvent molecular dynamics simulations that make use of gridded interaction potentials. PMID:25045516
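    The long-range term being added is the screened Coulomb (Debye-Hückel) interaction, applied beyond the edge of the precomputed electrostatic grid. A minimal sketch of the pairwise energy (treating each molecule as a point charge; units and the test values are illustrative, and this is not the SDA implementation):

```python
import math

def debye_huckel(q1, q2, r, kappa, eps_r=78.5):
    """Screened Coulomb interaction energy of two net charges in solution.

    q1, q2 in elementary charges, r in nm, kappa (inverse Debye length)
    in nm^-1; returns energy in kJ/mol.
    """
    ke = 138.935458  # Coulomb constant in kJ mol^-1 nm e^-2 (vacuum)
    return ke * q1 * q2 * math.exp(-kappa * r) / (eps_r * r)

# Roughly 100 mM ionic strength gives a Debye length near 1 nm,
# i.e. kappa ~ 1.04 nm^-1 (assumed values for illustration):
u = debye_huckel(+5, -3, 3.0, 1.04)
```

    Because the exponential screening makes the term cheap and smooth at long range, adding it costs little compared with enlarging the electrostatic grid, which is the advantage the Conclusions highlight.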

  16. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    PubMed

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

    The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and paves the way for future implementations.
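    The numerical scheme is central differencing of the analytic gradient: each of the 6N displaced-gradient evaluations is independent, which is what makes the procedure embarrassingly parallel across a grid. A minimal sketch on a toy harmonic "bond" between two one-dimensional atoms (the test system and force constant are made up):

```python
import numpy as np

def num_hessian(grad, x0, h=1e-5):
    """Central-difference Hessian from an analytic gradient function."""
    n = x0.size
    H = np.zeros((n, n))
    for i in range(n):                     # each column is an independent job
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h; xm[i] -= h
        H[:, i] = (grad(xp) - grad(xm)) / (2 * h)
    return 0.5 * (H + H.T)                 # symmetrize

# Harmonic bond between two 1-D "atoms": E = 0.5 k (x1 - x0 - r0)^2
k, r0 = 450.0, 1.1
grad = lambda x: np.array([-k * (x[1] - x[0] - r0), k * (x[1] - x[0] - r0)])
H = num_hessian(grad, np.array([0.0, 1.1]))
```

    In the grid-computing setting, the loop body (one displaced-gradient calculation) is farmed out to separate workers and the columns are assembled afterwards.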

  17. Surrogate modeling of deformable joint contact using artificial neural networks.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2015-09-01

    Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
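    The surrogate idea, sample a slow contact model, then train a feed-forward network to reproduce its outputs, can be sketched as below. The "slow model" here is a cheap stand-in function, not an elastic foundation model, and the sampling domain and network size are arbitrary choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_contact_force(pose):
    """Stand-in for an expensive contact evaluation.

    pose columns: (penetration depth, tilt); returns a force in kN.
    Negative penetration means out of contact (zero force).
    """
    d, t = pose[:, 0], pose[:, 1]
    return np.where(d > 0, 1.5 * d**1.5 * (1 + 0.3 * t**2), 0.0)

# Sample input-output pairs across the input domain, including
# out-of-contact poses, then train a small feed-forward ANN surrogate.
rng = np.random.default_rng(4)
X = rng.uniform([-0.5, -0.3], [2.0, 0.3], size=(4000, 2))
y = slow_contact_force(X)

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, y)
r2 = net.score(X, y)
```

    Once trained, `net.predict` replaces the slow evaluation inside a simulation or optimization loop; the paper's version trains one such network per contact force and torque component.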

  18. Surrogate Modeling of Deformable Joint Contact using Artificial Neural Networks

    PubMed Central

    Eskinazi, Ilan; Fregly, Benjamin J.

    2016-01-01

    Deformable joint contact models can be used to estimate loading conditions for cartilage-cartilage, implant-implant, human-orthotic, and foot-ground interactions. However, contact evaluations are often so expensive computationally that they can be prohibitive for simulations or optimizations requiring thousands or even millions of contact evaluations. To overcome this limitation, we developed a novel surrogate contact modeling method based on artificial neural networks (ANNs). The method uses special sampling techniques to gather input-output data points from an original (slow) contact model in multiple domains of input space, where each domain represents a different physical situation likely to be encountered. For each contact force and torque output by the original contact model, a multi-layer feed-forward ANN is defined, trained, and incorporated into a surrogate contact model. As an evaluation problem, we created an ANN-based surrogate contact model of an artificial tibiofemoral joint using over 75,000 evaluations of a fine-grid elastic foundation (EF) contact model. The surrogate contact model computed contact forces and torques about 1000 times faster than a less accurate coarse grid EF contact model. Furthermore, the surrogate contact model was seven times more accurate than the coarse grid EF contact model within the input domain of a walking motion. For larger input domains, the surrogate contact model showed the expected trend of increasing error with increasing domain size. In addition, the surrogate contact model was able to identify out-of-contact situations with high accuracy. Computational contact models created using our proposed ANN approach may remove an important computational bottleneck from musculoskeletal simulations or optimizations incorporating deformable joint contact models. PMID:26220591

  19. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively by the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the results of the two methods are compared. The main objective was to analyze the computational time required by both methods for different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of a parallel Jacobi (PJ) method is examined relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods require excessively long processing times to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
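The iteration-count comparison described above can be reproduced on a toy 2D analogue of the problem (2D rather than 3D just to keep the sketch short; the grid size, tolerance, and unit source term are our own choices, not the paper's):

```python
import numpy as np

def jacobi(f, h, tol=1e-5, max_iter=20000):
    # Solve -laplacian(u) = f on the unit square, u = 0 on the boundary,
    # with the 5-point finite-difference stencil and Jacobi iteration.
    u = np.zeros_like(f)
    for it in range(max_iter):
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                  + u[1:-1, :-2] + u[1:-1, 2:]
                                  + h * h * f[1:-1, 1:-1])
        if np.max(np.abs(new - u)) < tol:
            return new, it + 1
        u = new
    return u, max_iter

def gauss_seidel(f, h, tol=1e-5, max_iter=20000):
    # Same problem, but each update immediately uses the newest values.
    u = np.zeros_like(f)
    n = f.shape[0]
    for it in range(max_iter):
        diff = 0.0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                new = 0.25 * (u[i - 1, j] + u[i + 1, j]
                              + u[i, j - 1] + u[i, j + 1] + h * h * f[i, j])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff < tol:
            return u, it + 1
    return u, max_iter

n = 17                       # grid points per side (small, for speed)
h = 1.0 / (n - 1)
f = np.ones((n, n))          # uniform source term
uj, itj = jacobi(f, h)
ug, itg = gauss_seidel(f, h)
```

As the abstract notes, Gauss-Seidel needs roughly half as many sweeps as Jacobi here, but the Jacobi update has no loop-carried dependence within a sweep, which is exactly what makes it the natural candidate for data parallelism.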

  20. Simulations and Evaluation of Mesoscale Convective Systems in a Multi-scale Modeling Framework (MMF)

    NASA Astrophysics Data System (ADS)

    Chern, J. D.; Tao, W. K.

    2017-12-01

    It is well known that mesoscale convective systems (MCSs) produce more than 50% of the rainfall in most tropical regions and play important roles in regional and global water cycles. Simulating MCSs in global and climate models is a very challenging problem. Typical MCSs have horizontal scales of a few hundred kilometers, so models with a domain of several hundred kilometers and a resolution fine enough to properly simulate individual clouds are required to simulate MCSs realistically. The multiscale modeling framework (MMF), which replaces traditional cloud parameterizations with cloud-resolving models (CRMs) within a host atmospheric general circulation model (GCM), has shown some capability for simulating organized MCS-like storm signals and their propagation. However, its embedded CRMs typically have a small domain (less than 128 km) and coarse resolution (~4 km) that cannot realistically resolve MCSs and individual clouds. In this study, a series of simulations were performed using the Goddard MMF, and the impacts of the domain size and grid resolution of the embedded CRMs on simulating MCSs are examined. Changes in cloud structure, occurrence, and properties, such as cloud types, updrafts and downdrafts, latent heating profiles, and cold pool strength, in the embedded CRMs are examined in detail. The simulated MCS characteristics are evaluated against satellite measurements using the Goddard Satellite Data Simulator Unit. The results indicate that embedded CRMs with a large domain and fine resolution tend to produce better simulations than those with the typical MMF configuration (128 km domain size and 4 km model grid spacing).

  1. Assessing the Impacts of Wind Integration in the Western Provinces

    NASA Astrophysics Data System (ADS)

    Sopinka, Amy

    Increasing carbon dioxide levels and the fear of irreversible climate change have prompted policy makers to implement renewable portfolio standards. These renewable portfolio standards are meant to encourage the adoption of renewable energy technologies, thereby reducing the carbon emissions associated with fossil fuel-fired electricity generation. The ability to efficiently adopt and utilize high levels of renewable energy technology, such as wind power, depends upon the composition of the extant generation within the grid. Western Canadian electric grids are poised to integrate high levels of wind, and although Alberta has sufficient and, at times, an excess supply of electricity, it does not have the inherent generator flexibility required to mirror the variability of its wind generation. British Columbia, with its large reservoir storage capacities and rapid-ramping hydroelectric generation, could easily provide the firming services required by Alberta; however, the two grids are connected only by a small, constrained intertie. We use a simulation model to assess the economic impacts of high wind penetrations in the Alberta grid under various balancing protocols. We find that adding wind capacity to the system impacts grid reliability, increasing the frequency of system imbalances and unscheduled intertie flow. In order for British Columbia to be a viable firming resource, it must have sufficient generation capability to meet and exceed the province's electricity self-sufficiency requirements. We use a linear programming model to evaluate the province's ability to meet domestic load under various water and trade conditions. We then examine the effects of drought and wind penetration on the interconnected Alberta-British Columbia system given differing interconnection sizes.

  2. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

  3. NAS Grid Benchmarks: A Tool for Grid Space Exploration

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
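The data-flow-graph idea can be sketched in a few lines: each node wraps a benchmark kernel (the names BT, SP, LU, and MG are real NPB codes, but this particular graph shape and the dummy `run` function are invented for illustration), and a node launches once all of its input data has arrived.

```python
from collections import deque

# node -> list of nodes whose output it consumes as initialization data
graph = {"BT": [], "SP": ["BT"], "LU": ["BT"], "MG": ["SP", "LU"]}

def run(node, inputs):
    # Stand-in for launching an NPB code instance on some Grid machine;
    # the returned "result" is a dummy value for the sketch.
    return sum(inputs) + len(node)

def execute(graph):
    # Kahn-style scheduling: a node becomes ready when all deps are done.
    indeg = {n: len(deps) for n, deps in graph.items()}
    ready = deque(n for n, d in indeg.items() if d == 0)
    results, order = {}, []
    while ready:
        n = ready.popleft()
        results[n] = run(n, [results[d] for d in graph[n]])
        order.append(n)
        for m, deps in graph.items():
            if n in deps:
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
    return order, results

order, results = execute(graph)
```

In NGB proper, the interesting part is that each node may run on a different Grid machine, so the scheduler above would be replaced by the Grid's job-submission and data-transfer services.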

  4. WE-G-204-06: Grid-Line Artifact Minimization for High Resolution Detectors Using Iterative Residual Scatter Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    2015-06-15

    Purpose: Anti-scatter grid-line artifacts are more prominent for high-resolution x-ray detectors since the fraction of a pixel blocked by the grid septa is large. Direct logarithmic subtraction of the artifact pattern is limited by residual scattered radiation, and we investigate an iterative method for scatter correction. Methods: A stationary Smit-Röntgen anti-scatter grid was used with a high-resolution Dexela 1207 CMOS x-ray detector (75 µm pixel size) to image an artery block (Nuclear Associates, Model 76-705) placed within a uniform head-equivalent phantom as the scattering source. The image of the phantom was divided by a flat-field image obtained without scatter but with the grid to eliminate grid-line artifacts. Constant scatter values were subtracted from the phantom image before dividing by the averaged flat-field-with-grid image. The standard deviation of pixel values for a fixed region of the resultant images with different subtracted scatter values provided a measure of the remaining grid-line artifacts. Results: A plot of the standard deviation of image pixel values versus the subtracted scatter value shows that the image structure noise reaches a minimum before going up again as the scatter value is increased. This minimum corresponds to a minimization of the grid-line artifacts, as demonstrated in line profile plots obtained through each of the images perpendicular to the grid lines. Artifact-free images of the artery block were obtained with the optimal scatter value found by this iterative approach. Conclusion: Residual scatter subtraction can provide improved grid-line artifact elimination when using the flat-field-with-grid “subtraction” technique. The standard deviation of image pixel values can be used to determine the optimal scatter value to subtract to minimize grid-line artifacts with high-resolution x-ray imaging detectors.
    This study was supported by NIH Grant R01EB002873 and an equipment grant from Toshiba Medical Systems Corp.
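The iterative residual-scatter search is easy to sketch with synthetic 1-D "images" (the grid pattern, intensities, and scatter level below are made-up stand-ins for measured data): subtract a candidate scatter constant, divide by the flat field with grid, and pick the candidate that minimises the standard deviation of the result.

```python
import numpy as np

nx = 256
x = np.arange(nx)

# Hypothetical stand-ins for the measured images:
grid = 1.0 - 0.3 * (np.sin(2 * np.pi * x / 8) > 0)  # grid septa shadow pattern
flat = 1000.0 * grid                                # flat field with grid, no scatter
s_true = 200.0                                      # constant scatter level
phantom = 600.0 * grid + s_true                     # uniform object + grid + scatter

def artifact_std(s):
    # Subtract a candidate scatter value, divide by the flat field with
    # grid, and measure the residual grid-line structure as a standard
    # deviation over the region of interest.
    corrected = (phantom - s) / flat
    return corrected.std()

candidates = np.arange(0.0, 400.0, 10.0)
stds = np.array([artifact_std(s) for s in candidates])
s_best = candidates[np.argmin(stds)]
```

When the subtracted constant equals the true scatter, the grid pattern cancels exactly in the division and the standard deviation collapses to its minimum, which is the behaviour the abstract reports for the measured plots.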

  5. Variation in aerosol nucleation and growth in coal-fired power plant plumes due to background aerosol, meteorology and emissions: sensitivity analysis and parameterization.

    NASA Astrophysics Data System (ADS)

    Stevens, R. G.; Lonsdale, C. L.; Brock, C. A.; Reed, M. K.; Crawford, J. H.; Holloway, J. S.; Ryerson, T. B.; Huey, L. G.; Nowak, J. B.; Pierce, J. R.

    2012-04-01

    New-particle formation in the plumes of coal-fired power plants and other anthropogenic sulphur sources may be an important source of particles in the atmosphere. It remains unclear, however, how best to reproduce this formation in global and regional aerosol models with grid-box lengths that are 10s of kilometres and larger. The predictive power of these models is thus limited by the resultant uncertainties in aerosol size distributions. In this presentation, we focus on sub-grid sulphate aerosol processes within coal-fired power plant plumes: the sub-grid oxidation of SO2 with condensation of H2SO4 onto newly-formed and pre-existing particles. Based on the results of the System for Atmospheric Modelling (SAM), a Large-Eddy Simulation/Cloud-Resolving Model (LES/CRM) with online TwO Moment Aerosol Sectional (TOMAS) microphysics, we develop a computationally efficient, but physically based, parameterization that predicts the characteristics of aerosol formed within coal-fired power plant plumes based on parameters commonly available in global and regional-scale models. Given large-scale mean meteorological parameters, emissions from the power plant, mean background condensation sink, and the desired distance from the source, the parameterization will predict the fraction of the emitted SO2 that is oxidized to H2SO4, the fraction of that H2SO4 that forms new particles instead of condensing onto preexisting particles, the median diameter of the newly-formed particles, and the number of newly-formed particles per kilogram SO2 emitted. We perform a sensitivity analysis of these characteristics of the aerosol size distribution to the meteorological parameters, the condensation sink, and the emissions. In general, new-particle formation and growth is greatly reduced during polluted conditions due to the large preexisting aerosol surface area for H2SO4 condensation and particle coagulation. 
The new-particle formation and growth rates are also a strong function of the amount of sunlight and NOx since both control OH concentrations. Decreases in NOx emissions without simultaneous decreases in SO2 emissions increase new-particle formation and growth due to increased oxidation of SO2. The parameterization we describe here should allow for more accurate predictions of aerosol size distributions and a greater confidence in the effects of aerosols in climate and health studies.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, J.C.

    The author describes a general `hp` finite element method with adaptive grids. The code is based on the work of Oden et al. The term `hp` refers to the method of spatial refinement (h) in conjunction with the order of the polynomials used in the finite element discretization (p). This finite element code handles well the different mesh grid sizes occurring between abutted grids of different resolutions.

  7. Predictions of Transient Flame Lift-Off Length With Comparison to Single-Cylinder Optical Engine Experiments

    DOE PAGES

    Senecal, P. K.; Pomraning, E.; Anders, J. W.; ...

    2014-05-28

    A state-of-the-art, grid-convergent simulation methodology was applied to three-dimensional calculations of a single-cylinder optical engine. A mesh resolution study on a sector-based version of the engine geometry further verified the RANS-based cell size recommendations previously presented by Senecal et al. (“Grid Convergent Spray Models for Internal Combustion Engine CFD Simulations,” ASME Paper No. ICEF2012-92043). Convergence of cylinder pressure, flame lift-off length, and emissions was achieved for an adaptive mesh refinement cell size of 0.35 mm. Furthermore, full geometry simulations, using mesh settings derived from the grid convergence study, resulted in excellent agreement with measurements of cylinder pressure, heat release rate, and NOx emissions. On the other hand, the full geometry simulations indicated that the flame lift-off length is not converged at 0.35 mm for jets not aligned with the computational mesh. Further simulations suggested that the flame lift-off lengths for both the nonaligned and aligned jets appear to be converged at 0.175 mm. With this increased mesh resolution, both the trends and magnitudes in flame lift-off length were well predicted with the current simulation methodology. Good agreement between the overall predicted flame behavior and the available chemiluminescence measurements was also achieved. Our present study indicates that cell size requirements for accurate prediction of full geometry flame lift-off lengths may be stricter than those for global combustion behavior. This may be important when accurate soot predictions are required.
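A standard way to quantify the kind of grid convergence discussed above is Richardson extrapolation over three systematically refined grids. The lift-off values below are invented for illustration, not taken from the paper; with a refinement ratio of 2, the observed order of accuracy p and a zero-cell-size estimate follow directly.

```python
import math

# Hypothetical grid-convergence data (refinement ratio r = 2):
sizes = [0.70, 0.35, 0.175]      # AMR cell sizes, mm
lift_off = [24.0, 21.0, 20.25]   # predicted flame lift-off length, mm

# Observed order of accuracy from three systematically refined grids:
p = math.log((lift_off[0] - lift_off[1])
             / (lift_off[1] - lift_off[2])) / math.log(2)

# Richardson-extrapolated (zero-cell-size) estimate of the lift-off length:
f_exact = lift_off[2] + (lift_off[2] - lift_off[1]) / (2 ** p - 1)
```

A quantity is then judged grid-converged when the finest-grid value sits within an acceptable band of the extrapolated estimate; the paper's observation that aligned and nonaligned jets converge at different cell sizes corresponds to this check passing at 0.35 mm for one and only at 0.175 mm for the other.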

  8. Predictions of Transient Flame Lift-Off Length With Comparison to Single-Cylinder Optical Engine Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senecal, P. K.; Pomraning, E.; Anders, J. W.

    A state-of-the-art, grid-convergent simulation methodology was applied to three-dimensional calculations of a single-cylinder optical engine. A mesh resolution study on a sector-based version of the engine geometry further verified the RANS-based cell size recommendations previously presented by Senecal et al. (“Grid Convergent Spray Models for Internal Combustion Engine CFD Simulations,” ASME Paper No. ICEF2012-92043). Convergence of cylinder pressure, flame lift-off length, and emissions was achieved for an adaptive mesh refinement cell size of 0.35 mm. Furthermore, full geometry simulations, using mesh settings derived from the grid convergence study, resulted in excellent agreement with measurements of cylinder pressure, heat release rate, and NOx emissions. On the other hand, the full geometry simulations indicated that the flame lift-off length is not converged at 0.35 mm for jets not aligned with the computational mesh. Further simulations suggested that the flame lift-off lengths for both the nonaligned and aligned jets appear to be converged at 0.175 mm. With this increased mesh resolution, both the trends and magnitudes in flame lift-off length were well predicted with the current simulation methodology. Good agreement between the overall predicted flame behavior and the available chemiluminescence measurements was also achieved. Our present study indicates that cell size requirements for accurate prediction of full geometry flame lift-off lengths may be stricter than those for global combustion behavior. This may be important when accurate soot predictions are required.

  9. Siting and sizing of distributed generators based on improved simulated annealing particle swarm optimization.

    PubMed

    Su, Hongsheng

    2017-12-18

    Distributed power grids generally contain multiple diverse types of distributed generators (DGs). Traditional particle swarm optimization (PSO) and simulated annealing PSO (SA-PSO) algorithms have some deficiencies in the site selection and capacity determination of DGs, such as slow convergence speed and easily falling into local optima. In this paper, an improved SA-PSO (ISA-PSO) algorithm is proposed by introducing the crossover and mutation operators of the genetic algorithm (GA) into SA-PSO, strengthening the algorithm's capabilities in global searching and local exploration. In addition, diverse types of DGs are made equivalent to four types of nodes in power flow calculations by the backward/forward sweep method, and reactive power sharing principles and allocation theory are applied to determine initial reactive power values and execute subsequent corrections, giving the algorithm a better start and speeding up convergence. Finally, a mathematical model of minimum economic cost is established for the siting and sizing of DGs under the location and capacity uncertainties of each single DG. Its objective function considers the investment and operation cost of DGs, grid loss cost, annual electricity purchase cost, and environmental pollution cost, and the constraints include power flow, bus voltage, conductor current, and DG capacity. Through application to an IEEE 33-node distribution system, it is found that the proposed method achieves better economic efficiency and a safer voltage level than traditional PSO and SA-PSO algorithms, and is a more effective planning method for the siting and sizing of DGs in distributed power grids.
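The hybridisation described (PSO velocity updates, plus GA-style crossover and mutation, plus simulated-annealing acceptance of occasionally worse personal bests) can be sketched on a toy objective. The quadratic `cost` below is a stand-in for the paper's economic-cost model, and every coefficient is a generic choice of ours rather than the authors' setting.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(x):
    # Hypothetical stand-in for the economic-cost objective of DG siting/sizing.
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 30, 4, 300
pos = rng.uniform(-5.0, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_c = pos.copy(), cost(pos)
g = np.argmin(pbest_c)
best_ever, best_ever_c = pbest[g].copy(), pbest_c[g]
T = 1.0                                   # SA temperature

for _ in range(iters):
    gbest = pbest[np.argmin(pbest_c)]
    r1, r2 = rng.random((2, n, dim))
    # PSO velocity/position update (inertia + cognitive + social terms):
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    # GA-style mutation: perturb a few random particles to escape local optima.
    mut = rng.random(n) < 0.1
    pos[mut] += rng.normal(0.0, 0.5, (int(mut.sum()), dim))
    # GA-style crossover: blend a random pair, keep the child if it improves.
    a, b = rng.integers(0, n, size=2)
    child = 0.5 * (pos[a] + pos[b])
    if cost(child) < cost(pos[a]):
        pos[a] = child
    # SA acceptance: worse personal bests pass with probability exp(-delta/T).
    c = cost(pos)
    worse = (c - pbest_c).clip(min=0.0)
    accept = (c < pbest_c) | (rng.random(n) < np.exp(-worse / T))
    pbest[accept], pbest_c[accept] = pos[accept], c[accept]
    m = np.argmin(pbest_c)
    if pbest_c[m] < best_ever_c:
        best_ever, best_ever_c = pbest[m].copy(), pbest_c[m]
    T *= 0.97                             # cooling schedule
```

In the paper's setting, each particle would encode candidate DG locations and capacities, and `cost` would include the investment, loss, purchase, and pollution terms subject to power-flow constraints.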

  10. Radioactive Pollution Estimate for Fukushima Nuclear Power Plant by a Particle Model

    NASA Astrophysics Data System (ADS)

    Saito, Keisuke; Ogawa, Susumu

    2016-06-01

    On Mar 12, 2011, widespread radioactive pollution was caused by a hydrogen explosion at the Fukushima Nuclear Power Plant, and a large amount of radioisotopes was released over four explosions. Traditional atmospheric diffusion models could not reconstruct the radioactive pollution in Fukushima. The accident was therefore reconstructed with a particle model using a meteorological archive and Radar-AMeDAS. Calculations with the particle model were carried out for Mar 12, 15, 18, and 20, when east-southeast winds blew continuously for five hours. The meteorological archive provides wind speeds and directions on a five-km grid every hour at eight height classes up to 3000 m; Radar-AMeDAS provides precipitation data on a one-km grid every thirty minutes. Particles were assigned ten sizes from 0.01 to 0.1 mm in diameter, with a specific gravity of 2.65 and settling speeds given by the Stokes equation. On Mar 15 it rained from 16:30, and in the calculation the particles were then assumed to fall out immediately as wet deposition. Ground altitudes were given by a DEM with a 1-km grid. The spatial dose from emitted radioisotopes was referred to observation data at monitoring posts of the Tokyo Electric Power Company. The deposition points of the radioisotopes were mapped using the particle model. As a result, the computed distributions agreed with the surface spatial dose of radioisotopes from aero-monitoring by the Ministry of Education, Culture, Sports, Science and Technology. In particular, on Mar 15 the simulated pollution fitted the observations, extending to the northwest of the Fukushima Daiichi Nuclear Power Plant, where the most severe pollution occurred. With the particle model, the deposition positions on the ground were estimated for each particle size. Particles larger than 0.05 mm were affected by the topography and blocked by mountains higher than 700 m. The particle model does not yet include atmospheric stability, source height, or deposition speeds. The remaining task is to express the difference in deposition for each nuclide.
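The settling speeds "given by the Stokes equation" for the stated particle sizes (0.01 to 0.1 mm diameter, specific gravity 2.65) can be checked directly. The air density and viscosity below are standard sea-level assumptions, not values from the paper, and note that at the largest sizes the particle Reynolds number approaches the limit of the Stokes regime.

```python
# Stokes-law terminal settling velocity: v = (rho_p - rho_a) * g * d^2 / (18 * mu)
g = 9.81         # gravitational acceleration, m/s^2
rho_p = 2650.0   # particle density, kg/m^3 (specific gravity 2.65)
rho_a = 1.2      # air density at sea level, kg/m^3 (assumed)
mu = 1.8e-5      # dynamic viscosity of air, Pa*s (assumed)

def stokes_velocity(d_m):
    # Terminal velocity for a sphere of diameter d_m (metres) in still air.
    return (rho_p - rho_a) * g * d_m ** 2 / (18.0 * mu)

for d_mm in (0.01, 0.05, 0.1):
    v = stokes_velocity(d_mm * 1e-3)
    print(f"d = {d_mm:.2f} mm -> v = {v:.4f} m/s")
```

The quadratic dependence on diameter is why the 0.1 mm particles settle out quickly near the source while the 0.01 mm particles (roughly a hundred times slower) travel far downwind, consistent with the size-dependent deposition pattern described in the abstract.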

  11. Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models

    NASA Astrophysics Data System (ADS)

    Xu, Shiming

    2015-04-01

    We propose new grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address the more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend toward high-resolution and multi-scale modeling. The proposed grid generation algorithm could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to grid generation for regional ocean modeling when a complex land-ocean distribution is present.

  12. Modeling target normal sheath acceleration using handoffs between multiple simulations

    NASA Astrophysics Data System (ADS)

    McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard

    2013-10-01

    We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, full size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported to a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using the SCARLET laser at The Ohio State University. This work was performed with support from AFOSR under contract # FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.

  13. An Analysis of Waves Underlying Grid Cell Firing in the Medial Entorhinal Cortex.

    PubMed

    Bonilla-Quintana, Mayte; Wedgwood, Kyle C A; O'Dea, Reuben D; Coombes, Stephen

    2017-08-25

    Layer II stellate cells in the medial entorhinal cortex (MEC) express hyperpolarisation-activated cyclic-nucleotide-gated (HCN) channels that allow for rebound spiking via an I_h current in response to hyperpolarising synaptic input. A computational modelling study by Hasselmo (Philos. Trans. R. Soc. Lond. B, Biol. Sci. 369:20120523, 2013) showed that an inhibitory network of such cells can support periodic travelling waves with a period that is controlled by the dynamics of the I_h current. Hasselmo has suggested that these waves can underlie the generation of grid cells, and that the known difference in I_h resonance frequency along the dorsal to ventral axis can explain the observed size of and spacing between grid cell firing fields. Here we develop a biophysical spiking model within a framework that allows for analytical tractability. We combine the simplicity of integrate-and-fire neurons with a piecewise linear caricature of the gating dynamics for HCN channels to develop a spiking neural field model of MEC. Using techniques primarily drawn from the field of nonsmooth dynamical systems, we show how to construct periodic travelling waves, and in particular the dispersion curve that determines how wave speed varies as a function of period. This exhibits a wide range of long wavelength solutions, reinforcing the idea that rebound spiking is a candidate mechanism for generating grid cell firing patterns. Importantly, we develop a wave stability analysis to show how the maximum allowed period is controlled by the dynamical properties of the I_h current. Our theoretical work is validated by numerical simulations of the spiking model in both one and two dimensions.
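The model ingredients named above (integrate-and-fire dynamics plus a piecewise-linear caricature of HCN gating) can be combined into a minimal single-cell rebound demo. All parameter values are our own toy choices, not the paper's: an inhibitory conductance pulse hyperpolarises the cell and charges the gating variable s, and on release the rebound current drives the cell over threshold.

```python
# Euler simulation of a leaky integrate-and-fire cell with a
# piecewise-linear "I_h" gating variable that activates under
# hyperpolarisation (toy parameters, for illustration only).
dt, T = 0.1, 600.0                 # time step and duration, ms
tau_v, tau_h = 10.0, 100.0         # membrane and gating time constants, ms
v_rest, v_th, v_reset = -65.0, -50.0, -65.0   # mV
g_h, E_h = 3.0, 0.0                # rebound-current conductance and reversal
g_inh, E_inh = 5.0, -80.0          # inhibitory conductance during the pulse

def s_inf(v):
    # Piecewise-linear steady-state activation: closed at rest, opening
    # linearly below -67 mV, fully open 5 mV further down.
    return min(1.0, max(0.0, (-67.0 - v) / 5.0))

v, s, spikes = v_rest, 0.0, []
for step in range(int(T / dt)):
    t = step * dt
    g_syn = g_inh if 100.0 <= t < 300.0 else 0.0   # hyperpolarising pulse
    dv = (-(v - v_rest) + g_h * s * (E_h - v) + g_syn * (E_inh - v)) / tau_v
    v += dt * dv
    s += dt * (s_inf(v) - s) / tau_h
    if v >= v_th:                  # threshold crossing: spike and reset
        spikes.append(t)
        v = v_reset
```

The cell is silent at rest and during the inhibitory pulse, then fires a rebound burst just after the pulse ends; in the paper this rebound mechanism, embedded in an inhibitory network, is what carries the travelling waves.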

  14. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for the distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain classes of problems or provide high quality initial grids that enhance the performance of many adaptation methods.
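An exponential growth function for grid spacing of the sort described can be sketched in one dimension (the growth form, ratio, and cap are our own illustrative choices, not VGRID's actual function): each successive spacing is the previous one multiplied by a fixed ratio until a far-field spacing is reached.

```python
def stretched_points(s0, ratio, s_max, length):
    # 1-D point distribution marching away from a source at x = 0:
    # the first spacing is s0, each subsequent spacing grows by `ratio`
    # (exponential growth) and is capped at the far-field spacing s_max.
    pts, x, s = [0.0], 0.0, s0
    while x + s < length:
        x += s
        pts.append(x)
        s = min(s * ratio, s_max)
    return pts

pts = stretched_points(s0=0.01, ratio=1.2, s_max=0.5, length=10.0)
spacings = [b - a for a, b in zip(pts, pts[1:])]
```

Clustering points near the source (a surface with high curvature, say) while growing smoothly into the far field is exactly the economy the abstract attributes to the exponential growth function; a smaller ratio gives a smoother but more expensive grid.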

  15. Review of Strategies and Technologies for Demand-Side Management on Isolated Mini-Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harper, Meg

    This review provides an overview of strategies and currently available technologies used for demand-side management (DSM) on mini-grids throughout the world. For the purposes of this review, mini-grids are defined as village-scale electricity distribution systems powered by small local generation sources and not connected to a main grid. Mini-grids range in size from less than 1 kW to several hundred kW of installed generation capacity and may utilize different generation technologies, such as micro-hydro, biomass gasification, solar, wind, diesel generators, or a hybrid combination of any of these. This review will primarily refer to AC mini-grids, though much of the discussion could apply to DC grids as well. Many mini-grids include energy storage, though some rely solely on real-time generation.

  16. Clinical study using novel endoscopic system for measuring size of gastrointestinal lesion

    PubMed Central

    Oka, Kiyoshi; Seki, Takeshi; Akatsu, Tomohiro; Wakabayashi, Takao; Inui, Kazuo; Yoshino, Junji

    2014-01-01

    AIM: To verify the performance of a lesion size measurement system through a clinical study. METHODS: Our proposed system, which consists of a conventional endoscope, an optical device, an optical probe, and a personal computer, generates a grid scale to measure the lesion size from an endoscopic image. The width of the grid scale is constantly adjusted according to the distance between the tip of the endoscope and the lesion, because the lesion size on an endoscopic image changes with the distance. The shape of the grid scale was corrected to match the distortion of the endoscopic image. The distance was calculated using the amount of laser light reflected from the lesion through an optical probe inserted into the instrument channel of the endoscope. The endoscopist can thus measure the lesion size without contact by comparing the lesion with the size of the grid scale on the endoscopic image. (1) A basic test was performed to verify the relationship between the measurement error e_M and the tilt angle α of the endoscope; and (2) the sizes of three colon polyps were measured using our system during endoscopy, and these sizes were measured by scale immediately after their removal. RESULTS: There was no error at α = 0°. In addition, the values of e_M (mean ± SD) were 0.24 ± 0.11 mm (α = 10°), 0.90 ± 0.58 mm (α = 20°) and 2.31 ± 1.41 mm (α = 30°). According to these results, our system has been confirmed to measure accurately when the tilt angle is less than 20°. The measurement error was approximately 1 mm in the clinical study. Therefore, it was concluded that our proposed measurement system is also effective in clinical examinations. CONCLUSION: By combining simple optical equipment with a conventional endoscope, a quick and accurate system for measuring lesion size was established. PMID:24744595
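The distance-dependent rescaling of the grid overlay follows simple pinhole geometry: the on-image width of a fixed physical cell is inversely proportional to the tip-to-lesion distance. The focal length in pixels below is a made-up calibration value, not the instrument's, and lens distortion (which the real system also corrects) is ignored.

```python
def grid_pixels(cell_mm, distance_mm, focal_px=500.0):
    # On-image width (pixels) of a physical grid cell of width cell_mm
    # viewed at distance_mm, for an ideal pinhole camera with the given
    # focal length in pixels (hypothetical calibration value).
    return focal_px * cell_mm / distance_mm

# Doubling the distance halves the apparent grid width, so the overlay
# must be regenerated from the laser-measured distance:
w10 = grid_pixels(1.0, 10.0)
w20 = grid_pixels(1.0, 20.0)

def lesion_size_mm(lesion_px, distance_mm, cell_mm=1.0, focal_px=500.0):
    # A lesion spanning lesion_px pixels covers lesion_px / grid_pixels(...)
    # grid cells, i.e. that many multiples of cell_mm in physical size.
    return lesion_px / grid_pixels(cell_mm, distance_mm, focal_px) * cell_mm
```

Comparing the lesion against the rescaled grid is what lets the endoscopist read off a physical size without touching the lesion, and the tilt-angle error reported in the abstract arises because this scaling assumes the lesion plane is roughly perpendicular to the viewing axis.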

  17. Estimating the dust production rate of carbon stars in the Small Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Nanni, Ambra; Marigo, Paola; Girardi, Léo; Rubele, Stefano; Bressan, Alessandro; Groenewegen, Martin A. T.; Pastorelli, Giada; Aringer, Bernhard

    2018-02-01

    We employ newly computed grids of spectra reprocessed by dust to estimate the total dust production rate (DPR) of carbon stars in the Small Magellanic Cloud (SMC). For the first time, the grids of spectra are computed as a function of the main stellar parameters, i.e. mass-loss rate, luminosity, effective temperature, current stellar mass and element abundances at the photosphere, following a consistent, physically grounded scheme of dust growth coupled with a stationary wind outflow. The model accounts for the growth of various dust species formed in the circumstellar envelopes of carbon stars, such as carbon dust, silicon carbide and metallic iron. In particular, we employ some selected combinations of optical constants and grain sizes for carbon dust that have been shown to reproduce simultaneously the most relevant colour-colour diagrams in the SMC. By employing our grids of models, we fit the spectral energy distributions of ≈3100 carbon stars in the SMC, consistently deriving some important dust and stellar properties, i.e. luminosities, mass-loss rates, gas-to-dust ratios, expansion velocities and dust chemistry. We discuss these properties and compare some of them with observations in the Galaxy and the Large Magellanic Cloud. We compute the DPR of carbon stars in the SMC, finding that the estimates provided by our method can differ significantly, by a factor of ≈2-5, from those available in the literature. Our grids of models, including the spectra and other relevant dust and stellar quantities, are publicly available at http://starkey.astro.unipd.it/web/guest/dustymodels.

  18. Nuclear Weapon Environment Model. Volume II. Computer Code User’s Guide.

    DTIC Science & Technology

    1979-02-01


  19. Tariff Considerations for Micro-Grids in Sub-Saharan Africa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reber, Timothy J.; Booth, Samuel S.; Cutler, Dylan S.

    This report examines some of the key drivers and considerations policymakers and decision makers face when deciding if and how to regulate electricity tariffs for micro-grids. Presenting a range of tariff options, from mandating some variety of national (uniform) tariff to allowing micro-grid developers and operators to set fully cost-reflective tariffs, it examines the benefits and drawbacks of each. In addition, the report explores various types of cross-subsidies and other transitional forms of regulation that may offer a regulatory middle ground, helping to balance the often competing goals of controlling the price of electricity service in the name of social good while still allowing investors returns high enough to attract the necessary capital financing to the market. Using the REopt tool developed by the U.S. Department of Energy's National Renewable Energy Laboratory, the authors modeled a few representative micro-grid systems and the resultant levelized cost of electricity, lending context and scale to these tariff questions. This simple analysis provides an estimate of the gap between current tariff regimes and the tariffs that would be necessary for developers to recover costs and attract investment, offering further insight into the potential scale of subsidies or other grants that may be required to enable micro-grid development under current regulatory structures. It explores potential options for addressing this gap while trying to balance stakeholder needs, from subsidized national tariffs to lightly regulated cost-reflective tariffs to compromise approaches, such as different standards of regulation based on the size of a micro-grid.
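    For a sense of scale, the levelized-cost calculation underlying such comparisons can be sketched in a few lines (a simplified textbook formula: discounted lifetime costs over discounted lifetime energy, with no degradation, taxes, or financing structure; the parameter values are illustrative, not drawn from the report):

```python
def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, years):
    """Levelized cost of electricity in $/kWh: discounted lifetime costs
    divided by discounted lifetime energy (simplified sketch)."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_energy_kwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Illustrative micro-grid: $1000 capex, $100/yr O&M, 1000 kWh/yr, 10 years.
print(lcoe(1000.0, 100.0, 1000.0, 0.0, 10))   # zero discount rate: 0.2 $/kWh
print(lcoe(1000.0, 100.0, 1000.0, 0.08, 10))  # discounting raises the LCOE
```

    With a zero discount rate the formula reduces to total cost over total energy, which makes the example easy to check by hand; a positive discount rate weights near-term costs (the capex) more heavily and raises the LCOE.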

  20. Methodology for the Assessment of 3D Conduction Effects in an Aerothermal Wind Tunnel Test

    NASA Technical Reports Server (NTRS)

    Oliver, Anthony Brandon

    2010-01-01

    This slide presentation reviews a method for the assessment of three-dimensional conduction effects during a test in an aerothermal wind tunnel. The test objectives were to duplicate and extend tests performed during the 1960s on thermal conduction in protuberances on a flat plate. Slides review the 1D versus 3D conduction data reduction error, the analysis process, CFD-based analysis, a loose coupling method that simulates a wind tunnel test run, verification of the CFD solution, grid convergence, Mach number trends, size trends, and a summary of the CFD conduction analysis. Other slides show comparisons to pretest CFD at Mach 1.5 and 2.16 and the geometries of the models and grids.

  1. Voidage correction algorithm for unresolved Euler-Lagrange simulations

    NASA Astrophysics Data System (ADS)

    Askarishahi, Maryam; Salehi, Mohammad-Sadegh; Radl, Stefan

    2018-04-01

    The effect of grid coarsening on the predicted total drag force and heat exchange rate in dense gas-particle flows is investigated using the Euler-Lagrange (EL) approach. We demonstrate that grid coarsening may reduce the predicted total drag force and exchange rate. Surprisingly, exchange coefficients predicted by the EL approach deviate more significantly from the exact value than results of Euler-Euler (EE)-based calculations. The voidage gradient is identified as the root cause of this peculiar behavior. Consequently, we propose a correction algorithm based on a sigmoidal function to predict the voidage experienced by individual particles. Our correction algorithm can significantly improve the prediction of exchange coefficients in EL models, which is tested for simulations involving Euler grid cell sizes between 2d_p and 12d_p. It is most relevant in simulations of dense polydisperse particle suspensions featuring steep voidage profiles. For these suspensions, classical approaches may result in an error in the total exchange rate of up to 30%.
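    A sigmoidal correction of this kind can be sketched as a blend between the particle's host-cell voidage and a smoothed neighborhood estimate, switched on by the local voidage gradient. The functional form, threshold, and steepness below are illustrative assumptions, not the authors' calibrated algorithm:

```python
import numpy as np

def corrected_voidage(eps_cell, eps_smooth, grad_eps, d_p, k=10.0, g0=0.5):
    """Voidage experienced by a particle: a sigmoid of the dimensionless
    voidage gradient |grad_eps|*d_p blends the host-cell value eps_cell
    with a smoothed neighborhood estimate eps_smooth.
    k (steepness) and g0 (threshold) are hypothetical tuning constants."""
    s = 1.0 / (1.0 + np.exp(-k * (np.abs(grad_eps) * d_p - g0)))
    return (1.0 - s) * eps_cell + s * eps_smooth

# In a flat region the cell value dominates; near a steep voidage front
# the smoothed estimate takes over.
print(corrected_voidage(0.4, 0.6, grad_eps=0.0, d_p=1.0))  # close to 0.4
print(corrected_voidage(0.4, 0.6, grad_eps=2.0, d_p=1.0))  # close to 0.6
```

    The design point is that the correction activates only where the voidage profile is steep, which is exactly where the coarse-grid cell average misrepresents the voidage an individual particle sees.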

  2. Model fitting for small skin permeability data sets: hyperparameter optimisation in Gaussian Process Regression.

    PubMed

    Ashrafi, Parivash; Sun, Yi; Davey, Neil; Adams, Roderick G; Wilkinson, Simon C; Moss, Gary Patrick

    2018-03-01

    The aim of this study was to investigate how to improve predictions from Gaussian Process models by optimising the model hyperparameters. Optimisation methods, including Grid Search, Conjugate Gradient, Random Search, Evolutionary Algorithm and Hyper-prior, were evaluated and applied to previously published data. Data sets were also altered in a structured manner to reduce their size while retaining the range, or 'chemical space', of the key descriptors, to assess the effect of the data range on model quality. The Hyper-prior Smoothbox kernel resulted in the best models for the majority of data sets, and these exhibited significantly better performance than benchmark quantitative structure-permeability relationship (QSPR) models. When the data sets were systematically reduced in size, the different optimisation methods generally retained their statistical quality, whereas benchmark QSPR models performed poorly. The design of the data set, and possibly also the approach to validation of the model, is critical in the development of improved models. The size of the data set, if carefully controlled, was not generally a significant factor for these models, and models of excellent statistical quality could be produced from substantially smaller data sets. © 2018 Royal Pharmaceutical Society.
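    For intuition, the simplest of the methods compared, grid search, can be written as maximising the Gaussian Process log marginal likelihood over a candidate set of hyperparameters. This is a minimal NumPy sketch with an RBF kernel and a fixed noise level on synthetic data; it is not the paper's Hyper-prior method, and all names and values are illustrative:

```python
import numpy as np

def rbf_kernel(X, length_scale):
    """Squared-exponential kernel on 1-D inputs."""
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def log_marginal_likelihood(X, y, length_scale, noise=1e-2):
    """Standard GP log marginal likelihood via a Cholesky factorization."""
    K = rbf_kernel(X, length_scale) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2.0 * np.pi))

def grid_search(X, y, grid):
    """Pick the candidate length scale with the highest marginal likelihood."""
    return max(grid, key=lambda l: log_marginal_likelihood(X, y, l))

X = np.linspace(0.0, 1.0, 20)
y = np.sin(2.0 * np.pi * X)          # smooth synthetic "response"
best = grid_search(X, y, [0.01, 0.2, 5.0])
print(best)                          # the moderate length scale wins
```

    Too small a length scale treats the data as noise, too large a one cannot bend to fit it; the marginal likelihood penalises both, which is why maximising it is a sensible model-selection objective for any of the search strategies the study compares.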

  3. An objective decision model of power grid environmental protection based on environmental influence index and energy-saving and emission-reducing index

    NASA Astrophysics Data System (ADS)

    Feng, Jun-shu; Jin, Yan-ming; Hao, Wei-hua

    2017-01-01

    Based on modelling the environmental influence index of power transmission and transformation projects and the energy-saving and emission-reducing index of the source-grid-load power system, this paper establishes an objective decision model for power grid environmental protection, with the constraints that the environmental protection objectives be legal and economical, and considering both positive and negative influences of the grid on the environment over the whole grid life cycle. This model can be used to guide the planning of power grid environmental protection. A numerical simulation of the objective decision model for Jiangsu province's power grid shows that, as investment increases, the maximum energy-saving and emission-reducing benefit is reached first, followed by the minimum environmental-influence goal.

  4. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing into the momentum and energy equations a side force that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low-profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low-profile vortex generators. The source term model allowed a grid reduction of about seventy percent compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate, without adversely affecting the development and capture of the vortex created. The source term model predicted the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data, as it did the peak vorticity and its location. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model also predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different placements of an individual vortex generator or a row of them, and to conduct a preliminary investigation with minimal grid generation and computational time.

  5. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  6. A NASTRAN model of a large flexible swing-wing bomber. Volume 5: NASTRAN model development-fairing structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the fairing structure was expanded in detail to generate the NASTRAN model of this substructure. The grid point coordinates, element definitions, material properties, and sizing data for each element were specified. The fairing model was thoroughly checked out for continuity, connectivity, and constraints. The substructure was processed for structural influence coefficients (SIC) point loadings to determine the deflection characteristics of the fairing model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  7. Evaluation of grid generation technologies from an applied perspective

    NASA Technical Reports Server (NTRS)

    Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.

    1995-01-01

    An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation and use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turn around time for specific grid generation and CFD projects. The conclusion was made that a single grid generation methodology is not universally suited for all CFD applications due to both limitations in grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process to the various grid generation methodologies including structured, unstructured, and hybrid procedures. The full integration of the geometric modeling and grid generation allows implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.

  8. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
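    The idea of an exponential growth function for point distribution can be illustrated with a one-dimensional sketch in which successive spacings grow by a constant ratio. This is a generic stretching law for intuition only, not the specific function introduced in the paper:

```python
def stretched_points(n, ratio):
    """Distribute n+1 points on [0, 1] with each spacing `ratio` times the
    previous one, so points cluster near 0 when ratio > 1 (e.g. toward a
    surface source) and spread out with distance."""
    spacings = [ratio ** i for i in range(n)]     # geometric growth
    total = sum(spacings)
    pts = [0.0]
    for s in spacings:
        pts.append(pts[-1] + s / total)           # normalize to span [0, 1]
    return pts

print(stretched_points(4, 2.0))
# spacings 1:2:4:8 -> points at 0, 1/15, 3/15, 7/15, 1
```

    Controlling the growth ratio is what trades grid economy against smoothness: a ratio near 1 gives nearly uniform spacing, while larger ratios concentrate points where sources demand resolution.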

  9. Simulating North American mesoscale convective systems with a convection-permitting climate model

    NASA Astrophysics Data System (ADS)

    Prein, Andreas F.; Liu, Changhai; Ikeda, Kyoko; Bullock, Randy; Rasmussen, Roy M.; Holland, Greg J.; Clark, Martyn

    2017-10-01

    Deep convection is a key process in the climate system and the main source of precipitation in the tropics, subtropics, and mid-latitudes during summer. Furthermore, it is related to high impact weather causing floods, hail, tornadoes, landslides, and other hazards. State-of-the-art climate models have to parameterize deep convection due to their coarse grid spacing. These parameterizations are a major source of uncertainty and long-standing model biases. We present a North American scale convection-permitting climate simulation that is able to explicitly simulate deep convection due to its 4-km grid spacing. We apply a feature-tracking algorithm to detect hourly precipitation from Mesoscale Convective Systems (MCSs) in the model and compare it with radar-based precipitation estimates east of the US Continental Divide. The simulation is able to capture the main characteristics of the observed MCSs such as their size, precipitation rate, propagation speed, and lifetime within observational uncertainties. In particular, the model is able to produce realistically propagating MCSs, which was a long-standing challenge in climate modeling. However, the MCS frequency is significantly underestimated in the central US during late summer. We discuss the origin of this frequency bias and suggest strategies for model improvements.
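    The detection step of such a feature-tracking algorithm can be sketched as connected-component labeling of grid cells whose hourly precipitation exceeds a threshold. This is a minimal stand-in (4-connectivity, pure Python); the actual algorithm also applies size criteria and matches features across time steps:

```python
def detect_features(precip, threshold):
    """Label contiguous cells with precip >= threshold (4-connectivity).
    Returns (feature_count, label_grid); 0 in label_grid means background."""
    rows, cols = len(precip), len(precip[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if precip[i][j] >= threshold and labels[i][j] == 0:
                count += 1                      # new feature found
                stack = [(i, j)]
                labels[i][j] = count
                while stack:                    # flood-fill its extent
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and precip[nr][nc] >= threshold
                                and labels[nr][nc] == 0):
                            labels[nr][nc] = count
                            stack.append((nr, nc))
    return count, labels

grid = [[0, 5, 5, 0, 0],
        [0, 5, 0, 0, 6],
        [0, 0, 0, 0, 6]]
n, _ = detect_features(grid, 5)
print(n)  # two separate precipitation features
```

    Applied per hourly field, this yields the feature masks whose size, mean rate, and displacement between hours give the MCS statistics compared against radar.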

  10. A kinetic Monte Carlo model with improved charge injection model for the photocurrent characteristics of organic solar cells

    NASA Astrophysics Data System (ADS)

    Kipp, Dylan; Ganesan, Venkat

    2013-06-01

    We develop a kinetic Monte Carlo model for photocurrent generation in organic solar cells that demonstrates improved agreement with experimental illuminated and dark current-voltage curves. In our model, we introduce a charge injection rate prefactor to correct for the electrode grid-size and electrode charge density biases apparent in the coarse-grained approximation of the electrode as a grid of single occupancy, charge-injecting reservoirs. We use the charge injection rate prefactor to control the portion of dark current attributed to each of four kinds of charge injection. By shifting the dark current between electrode-polymer pairs, we align the injection timescales and expand the applicability of the method to accommodate ohmic energy barriers. We consider the device characteristics of the ITO/PEDOT/PSS:PPDI:PBTT:Al system and demonstrate the manner in which our model captures the device charge densities unique to systems with small injection energy barriers. To elucidate the defining characteristics of our model, we first demonstrate the manner in which charge accumulation and band bending affect the shape and placement of the various current-voltage regimes. We then discuss the influence of various model parameters upon the current-voltage characteristics.

  11. Fine-scale application of WRF-CAM5 during a dust storm episode over East Asia: Sensitivity to grid resolutions and aerosol activation parameterizations

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin

    2018-03-01

    An advanced online-coupled meteorology and chemistry model WRF-CAM5 has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36-, 12-, and 4-km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance and investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations and to examine aerosol-cloud interactions and their impacts on the air quality. A comprehensive model evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model can well predict major meteorological variables such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation across different resolutions with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP) due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolutions, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations with an alternative (i.e., the Fountoukis and Nenes (FN) series scheme) and the default (i.e., AG scheme) aerosol activation scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation. 
Sensitivity simulations using the FN series scheme also have large impacts on radiation, T2, precipitation, and air quality (e.g., decreasing O3) through complex aerosol-radiation-cloud-chemistry feedbacks. The inclusion of adsorptive activation of dust particles in the FN series scheme has similar impacts on meteorology and air quality, but to a lesser extent than the differences between the FN series and AG schemes. Even so, adsorptive activation of dust particles can contribute significantly to the increase of total CDNC (∼45%) during dust storm events, indicating its importance in modulating regional climate over East Asia.

  12. Analysis of the design and economics of molten carbonate fuel cell tri-generation systems providing heat and power for commercial buildings and H2 for FC vehicles

    NASA Astrophysics Data System (ADS)

    Li, Xuping; Ogden, Joan; Yang, Christopher

    2013-11-01

    This study models the operation of molten carbonate fuel cell (MCFC) tri-generation systems for “big box” store businesses that combine grocery and retail business, and sometimes gasoline retail. Efficiency accounting methods and parameters for MCFC tri-generation systems have been developed. Interdisciplinary analysis and an engineering/economic model were applied for evaluating the technical, economic, and environmental performance of distributed MCFC tri-generation systems, and for exploring the optimal system design. Model results show that tri-generation is economically competitive with the conventional system, in which the stores purchase grid electricity and NG for heat, and sell gasoline fuel. The results are robust, based on a sensitivity analysis considering the uncertainty in energy prices and capital cost. Varying system sizes with base case engineering inputs, energy prices, and cost assumptions, it is found that there is a clear tradeoff between the portion of electricity demand covered and the capital cost increase of a bigger system. MCFC tri-generation technology provides lower-emission electricity, heat, and H2 fuel. With NG as feedstock, CO2 emissions can be reduced by 10%-43.6%, depending on how the grid electricity is generated. With renewable methane as feedstock, CO2 emissions can be further reduced to near zero.

  13. WE-EF-207-03: Design and Optimization of a CBCT Head Scanner for Detection of Acute Intracranial Hemorrhage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J; Sisniega, A; Zbijewski, W

    Purpose: To design a dedicated x-ray cone-beam CT (CBCT) system suitable for deployment at the point of care and offering reliable detection of acute intracranial hemorrhage (ICH), traumatic brain injury (TBI), stroke, and other head and neck injuries. Methods: A comprehensive task-based image quality model was developed to guide system design and optimization of a prototype head scanner suitable for imaging of acute TBI and ICH. Previously reported models were expanded to include the effects of the x-ray scatter correction necessary for detection of low-contrast ICH and the contribution of bit depth (digitization noise) to imaging performance. The task-based detectability index provided the objective function for optimization of system geometry, x-ray source, detector type, anti-scatter grid, and technique at 10-25 mGy dose. Optimal characteristics were experimentally validated using a custom head phantom with 50 HU contrast ICH inserts imaged on a CBCT imaging bench allowing variation of system geometry, focal spot size, detector, grid selection, and x-ray technique. Results: The model guided selection of a system geometry with a nominal source-detector distance of 1100 mm and an optimal magnification of 1.50. A focal spot size of ∼0.6 mm was sufficient for the spatial resolution requirements of ICH detection. Imaging at 90 kVp yielded the best tradeoff between noise and contrast. The model provided quantitation of tradeoffs between flat-panel and CMOS detectors with respect to electronic noise, field of view, and readout speed required for imaging of ICH. An anti-scatter grid was shown to provide modest benefit in conjunction with post-acquisition scatter correction. Images of the head phantom demonstrate visualization of millimeter-scale simulated ICH. Conclusions: Performance consistent with acute TBI and ICH detection is feasible with model-based system design and robust artifact correction in a dedicated head CBCT system. Further improvements can be achieved with incorporation of model-based iterative reconstruction techniques, also within the scope of the task-based optimization framework. David Foos and Xiaohui Wang are employees of Carestream Health.

  14. Energy efficiency design strategies for buildings with grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Yimprayoon, Chanikarn

    The building sector in the United States represents more than 40% of the nation's energy consumption. Energy efficiency design strategies and renewable energy are keys to reduce building energy demand. Grid-connected photovoltaic (PV) systems installed on buildings have been the fastest growing market in the PV industry. This growth poses challenges for buildings qualified to serve in this market sector. Electricity produced from solar energy is intermittent. Matching building electricity demand with PV output can increase PV system efficiency. Through experimental methods and case studies, computer simulations were used to investigate the priorities of energy efficiency design strategies that decreased electricity demand while producing load profiles matching with unique output profiles from PV. Three building types (residential, commercial, and industrial) of varying sizes and use patterns located in 16 climate zones were modeled according to ASHRAE 90.1 requirements. Buildings were analyzed individually and as a group. Complying with ASHRAE energy standards can reduce annual electricity consumption at least 13%. With energy efficiency design strategies, the reduction could reach up to 65%, making it possible for PV systems to meet reduced demands in residential and industrial buildings. The peak electricity demand reduction could be up to 71% with integration of strategies and PV. Reducing lighting power density was the best single strategy with high overall performances. Combined strategies such as zero energy building are also recommended. Electricity consumption reductions are the sum of the reductions from strategies and PV output. However, peak electricity reductions were less than their sum because they reduced peak at different times. The potential of grid stress reduction is significant. Investment incentives from government and utilities are necessary. 
The PV system sizes allowed under net metering interconnection should not be limited by the legislation existing in some states. Data from this study provides insight into the impacts of applying energy efficiency design strategies in buildings with grid-connected PV systems. With the current transition from traditional electric grids to future smart grids, this information, together with a large database of various building conditions, enables the investigations that governments or utilities need when implementing measures and policies in large-scale communities.

  15. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires the emergence of a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult, so understanding scaling is a key issue in advancing the science. This work focuses on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Compared with field experimentation, numerical simulations have the advantage of handling a wide range of boundary and initial conditions. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale and grid-cell-scale parameters were established by inverse simulations based on the mass balance equation and averaging of the flow at the point scale. Results have shown numerical stability issues under particular conditions, have revealed the complex nature of the non-linear relationships between the models' parameters at the two scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
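    The point-scale component can be made concrete with the standard Green-Ampt relation, in which cumulative infiltration F satisfies K*t = F - psi*dtheta*ln(1 + F/(psi*dtheta)) and the infiltration rate is f = K*(1 + psi*dtheta/F). The sketch below solves the implicit relation by fixed-point iteration; parameter values in any real use would be site-specific:

```python
import math

def green_ampt_F(t, K, psi, dtheta, tol=1e-10):
    """Cumulative infiltration F(t) [same length units as psi] from the
    implicit Green-Ampt relation K*t = F - pd*ln(1 + F/pd), pd = psi*dtheta.
    The map F -> K*t + pd*ln(1 + F/pd) is a contraction for F > 0."""
    pd = psi * dtheta
    F = max(K * t, tol)
    for _ in range(200):
        F_new = K * t + pd * math.log(1.0 + F / pd)
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

def green_ampt_rate(F, K, psi, dtheta):
    """Infiltration rate f = K * (1 + psi*dtheta / F); decays toward K."""
    return K * (1.0 + psi * dtheta / F)

# Illustrative values: K=1 cm/h, psi=10 cm, dtheta=0.3, t=2 h.
F = green_ampt_F(2.0, 1.0, 10.0, 0.3)
print(F, green_ampt_rate(F, 1.0, 10.0, 0.3))
```

    Averaging many such point-scale realizations, each drawn from the assumed lognormal or beta parameter distributions, is exactly the inverse-simulation step that links the point-scale parameters to the grid-cell storage model.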

  16. Scalar excursions in large-eddy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matheou, Georgios; Dimotakis, Paul E.

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution: the maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.

  17. Scalar excursions in large-eddy simulations

    DOE PAGES

    Matheou, Georgios; Dimotakis, Paul E.

    2016-08-31

    Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the central hypothesis of the study: unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. Three parameters are varied in the LES runs: the discretization of the convection terms, the SGS model, and the grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate diminishes with increasing order of accuracy. Two SGS models are examined: the stretched-vortex model and a constant-coefficient Smagorinsky model. Scalar excursions depend strongly on the SGS model; they are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution: the maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which grows with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.
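
    The two resolution-dependent diagnostics this abstract describes (maximum unphysical excursion and volume fraction of out-of-bounds cells) can be computed directly from a scalar field. A minimal NumPy sketch, with a synthetic field standing in for LES output; the bounds [0, 1] and all test values are assumptions, not values from the study:

```python
import numpy as np

def excursion_stats(scalar, lo=0.0, hi=1.0):
    """Diagnostics for unphysical scalar excursions in an LES field.

    Returns the maximum excursion beyond the physical bounds [lo, hi]
    and the volume fraction of cells lying outside those bounds
    (uniform grid assumed, so a cell count doubles as a volume fraction).
    """
    below = np.maximum(lo - scalar, 0.0)   # excursion magnitude under lo
    above = np.maximum(scalar - hi, 0.0)   # excursion magnitude over hi
    max_excursion = max(below.max(), above.max())
    volume_fraction = np.mean((scalar < lo) | (scalar > hi))
    return max_excursion, volume_fraction

# Synthetic field: mostly in-bounds, with a few dispersive over/undershoots.
rng = np.random.default_rng(0)
field = rng.uniform(0.0, 1.0, size=(32, 32, 32))
field[0, 0, 0] = 1.05   # overshoot
field[0, 0, 1] = -0.02  # undershoot
m, f = excursion_stats(field)
```

As the abstract notes, the maximum excursion is a sample-size-sensitive extreme-value statistic, while the volume fraction is an average; on real LES data the two can therefore trend in opposite directions as the grid is refined.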

  18. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-02-01

    In this article we propose two conformal-mapping-based grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues, such as a smoothed scaling factor and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where a complex land-ocean distribution is present.

  19. Efficient radiative transfer methods for continuum and line transfer in large three-dimensional models

    NASA Astrophysics Data System (ADS)

    Juvela, Mika J.

    The relationship between the physical conditions of an interstellar cloud and the observed radiation is defined by the radiative transfer problem. Radiative transfer calculations are needed if, e.g., one wants to disentangle abundance variations from excitation effects or to model variations of dust properties inside an interstellar cloud. New observational facilities (e.g., ALMA and Herschel) will bring improved accuracy both in terms of intensity and spatial resolution. This will enable detailed studies of the densest sub-structures of interstellar clouds and star-forming regions. Such observations must be interpreted with accurate radiative transfer methods and realistic source models. In many cases this will mean modelling in three dimensions. High optical depths and the observed wide range of linear scales are, however, challenging for radiative transfer modelling. A large range of linear scales can be accessed only with hierarchical models. Figure 1 shows an example of the use of a hierarchical grid for radiative transfer calculations, where the original model cloud (L = 10 pc, density 500 cm⁻³) was based on an MHD simulation carried out on a regular grid (Juvela & Padoan, 2005). For computed line intensities an accuracy of 10% was still reached when the number of individual cells (and the run time) was reduced by a factor of ten. This illustrates how, as long as the cloud is not extremely optically thick, most of the emission comes from a small sub-volume. It is also worth noting that while errors are ~10% at any given point, they are much smaller when compared with intensity variations. In particular, calculations on the hierarchical grid recovered the spatial power spectrum of line emission with very good accuracy. Monte Carlo codes are used widely in both continuum and line transfer calculations. Like all lambda-iteration schemes, these suffer from slow convergence when models are optically thick. In line transfer, Accelerated Monte Carlo (AMC) methods present a partial solution to this problem (Juvela & Padoan, 2000; Hogerheijde & van der Tak, 2000). AMC methods can be used similarly in continuum calculations to speed up the computation of dust temperatures (Juvela, 2005). The sampling problems associated with high optical depths can be solved with weighted sampling, and the handling of models with τV ~ 1000 is perfectly feasible. Transiently heated small dust grains pose another problem because the calculation of their temperature distribution is very time consuming. However, a 3D model will contain thousands of cells at very similar conditions. If dust temperature distributions are calculated only once for each such set, an approximate solution can be found in a much shorter time (Juvela & Padoan, 2003; see Figure 2a). MHD simulations with Adaptive Mesh Refinement (AMR) techniques present an exciting development for the modelling of interstellar clouds. Cloud models consist of a hierarchy of grids with different grid steps, and the ratio between the cloud size and the smallest resolution elements can be 10⁶ or even larger. We are currently working on radiative transfer codes (line and continuum) that could be used efficiently on such grids (see Figure 2b). The radiative transfer problem can be solved relatively independently on each of the sub-grids. This means that the use of convergence acceleration methods can be limited to those sub-grids where they are needed and, on the other hand, parallelization of the code is straightforward.

  20. Application of a multi-level grid method to transonic flow calculations

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr.; Brandt, A.

    1976-01-01

    A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse to fine. The coarser grids are used to diminish the magnitude of the smooth part of the residuals. The method was applied to the solution of the transonic small disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic arc airfoil is studied with meshes of both constant and variable step size.
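
    The coarse-grid idea this abstract describes, using coarser grids to diminish the smooth part of the residuals, is the core of what is now called multigrid. A minimal two-grid sketch for the 1D Poisson problem (not the transonic small-disturbance equation of the paper; the smoother, sweep counts, and model problem are illustrative assumptions):

```python
import numpy as np

def residual(u, f, h):
    """Residual of the 3-point discretization of -u'' = f."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted Jacobi smoothing: damps the high-frequency error."""
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1] - 2 * u[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: smooth, restrict the (now smooth) residual to
    the coarse grid, solve the coarse error equation exactly, prolongate."""
    u = jacobi(u, f, h)                      # pre-smooth
    r = residual(u, f, h)
    nc = (u.size + 1) // 2
    rc = np.zeros(nc)                        # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    hc = 2 * h
    A = (np.diag(2 * np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / hc**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # exact coarse-grid solve
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)
    return jacobi(u, f, h)                   # post-smooth

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)   # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))  # left with O(h^2) discretization error
```

The coarse grid removes exactly the smooth error components that relaxation alone reduces very slowly, which is why the hierarchy accelerates convergence.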

  1. Meteorological modeling of arrival and deposition of fallout at intermediate distances downwind of the Nevada Test Site.

    PubMed

    Cederwall, R T; Peterson, K R

    1990-11-01

    A three-dimensional atmospheric transport and diffusion model is used to calculate the arrival and deposition of fallout from 13 selected nuclear tests at the Nevada Test Site (NTS) in the 1950s. Results are used to extend NTS fallout patterns to intermediate downwind distances (300 to 1200 km). The radioactive cloud is represented in the model by a population of Lagrangian marker particles, with concentrations calculated on an Eulerian grid. Use of marker particles, with fall velocities dependent on particle size, provides a realistic simulation of fallout as the debris cloud travels downwind. The three-dimensional wind field is derived from observed data, adjusted for mass consistency. Terrain is represented in the grid, which extends up to 1200 km downwind of NTS and has 32-km horizontal resolution and 1-km vertical resolution. Ground deposition is calculated by a deposition-velocity approach. Source terms and relationships between deposition and exposure rate are based on work by Hicks. Uncertainty in particle size and vertical distributions within the debris cloud (and stem) allow for some model "tuning" to better match measured ground-deposition values. Particle trajectories representing different sizes and starting heights above ground zero are used to guide source specification. An hourly time history of the modeled fallout pattern as the debris cloud moves downwind provides estimates of fallout arrival times. Results for event HARRY illustrate the methodology. The composite deposition pattern for all 13 tests is characterized by two lobes extending out to the north-northeast and east-northeast, respectively, at intermediate distances from NTS. Arrival estimates, along with modeled deposition values, augment measured deposition data in the development of data bases at the county level; these data bases are used for estimating radiation exposure at intermediate distances downwind of NTS. Results from a study of event TRINITY are also presented.
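
    The marker-particle scheme this abstract describes can be illustrated with a toy calculation: Lagrangian particles settle at a size-dependent fall velocity while advecting with the wind, and their mass is accumulated on an Eulerian grid with 32-km columns. All numerical values (wind speed, settling-law constant, release height) are illustrative assumptions, not parameters of the study:

```python
import numpy as np

# Illustrative parameters, not values from the study.
U = 10.0           # uniform horizontal wind (m/s)
DX = 32_000.0      # 32-km horizontal grid resolution, as in the model
NCELLS = 40        # Eulerian grid reaching ~1280 km downwind

def fall_velocity(d):
    """Stokes-like settling speed, proportional to diameter squared
    (prefactor is an illustrative assumption; d in metres)."""
    return 1.2e7 * d**2

def deposit(particles):
    """March Lagrangian marker particles to the ground in a uniform wind
    and accumulate their mass on an Eulerian grid of NCELLS columns."""
    grid = np.zeros(NCELLS)
    for x0, z0, mass, d in particles:
        t = z0 / fall_velocity(d)    # time to settle from release height z0
        x = x0 + U * t               # horizontal travel while settling
        cell = int(x // DX)
        if 0 <= cell < NCELLS:
            grid[cell] += mass
    return grid

# Coarse debris (100 um) lands near the source; finer debris (60 um)
# travels to intermediate distances downwind.
particles = [(0.0, 5000.0, 1.0, 100e-6), (0.0, 5000.0, 1.0, 60e-6)]
pattern = deposit(particles)
```

Even this toy version reproduces the qualitative behavior the abstract relies on: fall velocities that depend on particle size spread a single release into a size-sorted deposition pattern downwind.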

  2. The R package 'icosa' for coarse resolution global triangular and penta-hexagonal gridding

    NASA Astrophysics Data System (ADS)

    Kocsis, Adam T.

    2017-04-01

    With the development of the internet and the computational power of personal computers, open-source programming environments have become indispensable for science in the past decade. This includes the growing GIS capacity of the free R environment, which was originally developed for statistical analyses. The flexibility of R has made it a preferred programming tool in a multitude of disciplines across the biological and geological sciences. Many of these subdisciplines operate with incidence (occurrence) data that in many cases must be coarse-grained before further analyses can be conducted. This graining is mostly executed by gridding the data to the cells of a Gaussian grid of some resolution, to increase the density of data in a single unit of analysis. Despite the ease of its application, this method has obvious shortcomings: well-known systematic biases in cell sizes and shapes are induced that can interfere with the results of statistical procedures, especially if the number of incidence points influences the metrics in question. The 'icosa' package employs a common method to overcome this obstacle by implementing grids with roughly equal cell sizes and shapes that are based on tessellated icosahedra. These grid objects are essentially polyhedra with xyz Cartesian vertex data linked to tables of faces and edges. At its current developmental stage, the package uses a single method of tessellation, which balances grid cell size and shape distortions, but its structure allows the implementation of various other tessellation algorithms. The resolution of the grids is set by the number of breakpoints inserted into a segment forming an edge of the original icosahedron. Both the triangular grids and their inverted penta-hexagonal counterparts can be created with the package. The package also incorporates functions to look up coordinates in the grid very efficiently, and data containers to link data to the grid structure. The classes defined in the package communicate with classes of the 'sp' and 'raster' packages, and functions are supplied that allow resolution changes and type conversions. Three-dimensional rendering is made available with the 'rgl' package, and two-dimensional projections can be calculated using 'sp' and 'rgdal'. The package was developed as part of a project funded by the Deutsche Forschungsgemeinschaft (KO - 5382/1-1).
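
    The resolution rule this abstract describes, setting grid resolution by the number of breakpoints inserted into each icosahedron edge, fixes the cell counts of both the triangular grid and its penta-hexagonal dual. A sketch in Python rather than R (the 'icosa' package itself is R); the frequency-n counts below are the standard ones for a subdivided icosahedron and are assumed, not confirmed, to match the package's tessellation scheme:

```python
def icosa_grid_counts(breakpoints):
    """Grid-size bookkeeping for a tessellated icosahedron.

    Resolution is set by the number of breakpoints inserted into each
    edge of the original icosahedron, so with b breakpoints an edge is
    split into n = b + 1 segments (subdivision frequency n).
    """
    n = breakpoints + 1
    faces = 20 * n**2             # triangular grid cells
    edges = 30 * n**2
    vertices = 10 * n**2 + 2      # satisfies Euler's formula V - E + F = 2
    # The inverted penta-hexagonal grid has one cell per vertex:
    # always exactly 12 pentagons, the remainder hexagons.
    dual_cells = vertices
    return faces, edges, vertices, dual_cells

counts = icosa_grid_counts(0)   # bare icosahedron: 20 faces, 30 edges, 12 vertices
```

The quadratic growth in cell count with breakpoints is why such grids are described as coarse-resolution: a few breakpoints already produce hundreds of roughly equal-area cells.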

  3. Grid2: A Program for Rapid Estimation of the Jovian Radiation Environment

    NASA Technical Reports Server (NTRS)

    Evans, R. W.; Brinza, D. E.

    2014-01-01

    Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) Jovian radiation model to compute fluences and doses for Jupiter missions. (Note: the successive versions of the model have been GIRE and GIRE2, and likewise Grid and Grid2 for the program.) While GIRE2 is an important improvement over the original GIRE radiation model, it can take a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up the execution of the model: minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian radii (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.

  4. Small-Scale Smart Grid Construction and Analysis

    NASA Astrophysics Data System (ADS)

    Surface, Nicholas James

    The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental, and engineering research by applying statistical knowledge and established theories to develop simulations without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show that data acquisition was three times more expensive than the grid itself, mainly because about 70% of data-acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investment in pure-sine-wave power in future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations, and pumped hydroelectric storage can also be researched on future iterations of the SSSG.

  5. Multidimensional radiative transfer with multilevel atoms. II. The non-linear multigrid method.

    NASA Astrophysics Data System (ADS)

    Fabiani Bendicho, P.; Trujillo Bueno, J.; Auer, L.

    1997-08-01

    A new iterative method for solving non-LTE multilevel radiative transfer (RT) problems in 1D, 2D, or 3D geometries is presented. The scheme obtains the self-consistent solution of the kinetic and RT equations at the cost of only a few (<10) formal solutions of the RT equation. It combines, for the first time, non-linear multigrid iteration (Brandt, 1977, Math. Comp. 31, 333; Hackbusch, 1985, Multi-Grid Methods and Applications, Springer-Verlag, Berlin), an efficient multilevel RT scheme based on Gauss-Seidel iterations (cf. Trujillo Bueno & Fabiani Bendicho, 1995ApJ...455..646T), and accurate short-characteristics formal solution techniques. By combining a valid stopping criterion with a nested-grid strategy, a converged solution with the desired true error is automatically guaranteed. Contrary to current operator-splitting methods, the very high convergence speed of the new RT method does not deteriorate when the grid spatial resolution is increased. With this non-linear multigrid method, non-LTE problems discretized on N grid points are solved in O(N) operations. The nested multigrid RT method presented here is thus particularly attractive in complicated multilevel transfer problems where small grid sizes are required. The properties of the method are analyzed both analytically and with illustrative multilevel calculations for Ca II in 1D and 2D schematic model atmospheres.

  6. Ostracod Body Size Change Across Space and Time

    NASA Astrophysics Data System (ADS)

    Nolen, L.; Llarena, L. A.; Saux, J.; Heim, N. A.; Payne, J.

    2014-12-01

    Many factors drive evolution, although it is not always clear which factors are more influential. Miller et al. (2009) found a change in the geographic disparity of diversity in marine biotas over time. We tested whether there was also geographic disparity in body size during different epochs. We used marine ostracods, which are tiny crustaceans, as a study group for this analysis. We also studied which factor is more influential in body size change: distance or time. We compared the mean body size from different geologic time intervals as well as the mean body size from different locations for each epoch. We grouped ostracod occurrences from the Paleobiology Database into 10° x 10° grid cells on a paleogeographic map. Then we calculated the difference in mean size and the distance between the grid cells containing specimens. Our size data came from the Ellis & Messina "Catalogue of Ostracoda" as well as the "Treatise on Invertebrate Paleontology". Sizes were calculated by applying the formula for the volume of an ellipsoid to three linear dimensions of the ostracod carapace (anteroposterior, dorsoventral, and right-left lengths). This analysis shows a trend in ostracods toward smaller size over time, and therefore also a trend through time of decreasing difference in size between occurrences in different grid cells. However, if time is not taken into account, there is no correlation between size and geographic distance. This may be attributed to the fact that one might not expect a big size difference between locations that are far apart but still at a similar latitude (for example, at the equator). This analysis suggests that distance alone is not the main factor driving changes in ostracod size over time.
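
    The size metric used in this abstract, the volume of an ellipsoid fitted to three linear carapace dimensions, is a one-line formula. A minimal sketch; the sample dimensions are hypothetical, not measurements from the study:

```python
import math

def ellipsoid_volume(ap, dv, rl):
    """Carapace volume approximated as an ellipsoid from three linear
    dimensions (anteroposterior, dorsoventral, right-left lengths);
    the semi-axes are half of each length: V = (4/3) * pi * a * b * c."""
    return (4.0 / 3.0) * math.pi * (ap / 2) * (dv / 2) * (rl / 2)

# Hypothetical 1.0 x 0.6 x 0.5 mm carapace:
v = ellipsoid_volume(1.0, 0.6, 0.5)   # volume in mm^3
```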

  7. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements when solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low- and high-frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation.
For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
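
    The Picard (lagged-coefficient) iteration this abstract introduces for nonlinear problems can be sketched on a simpler nonlinear diffusion boundary-value problem rather than Richards' or Burgers' equation; the model problem, grid size, and tolerance below are illustrative assumptions:

```python
import numpy as np

def picard_nonlinear_diffusion(n=41, tol=1e-10, max_iter=100):
    """Picard (lagged-coefficient) iteration for the nonlinear BVP
        -(D(u) u')' = 0,  u(0) = 0, u(1) = 1,  with D(u) = 1 + u.
    Each iteration freezes D at the previous iterate, leaving a linear
    tridiagonal system; the exact solution is u = sqrt(1 + 3x) - 1."""
    x = np.linspace(0.0, 1.0, n)
    u = x.copy()                           # initial guess: linear profile
    for _ in range(max_iter):
        Dm = 1.0 + 0.5 * (u[:-1] + u[1:])  # D at cell midpoints, lagged
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0          # Dirichlet boundary rows
        b[-1] = 1.0
        for i in range(1, n - 1):
            A[i, i - 1] = Dm[i - 1]
            A[i, i] = -(Dm[i - 1] + Dm[i])
            A[i, i + 1] = Dm[i]
        u_new = np.linalg.solve(A, b)      # dense solve for brevity
        if np.max(np.abs(u_new - u)) < tol:
            return x, u_new
        u = u_new
    return x, u

x, u = picard_nonlinear_diffusion()
exact = np.sqrt(1.0 + 3.0 * x) - 1.0
```

A production code would exploit the tridiagonal structure (as the tridiagonal method in the paper does) instead of a dense solve; the Picard linearization itself is unchanged.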

  8. Effect of Orthene on an unconfined population of the meadow vole (Microtus pennsylvanicus)

    USGS Publications Warehouse

    Jett, David A.; Nichols, James D.; Hines, James E.

    1986-01-01

    The possible impact on Microtus pennsylvanicus of ground applications of Orthene® insecticide was investigated in old-field habitats in northern Maryland during 1982 and 1983. The treatment grids in 1982 and 1983 were sprayed at 0.62 and 0.82 kg active ingredient/ha, respectively. A capture–recapture design robust to unequal capture probabilities was utilized to estimate population size, survival, and recruitment. Data on reproductive activity and relative weight change were also collected to investigate the effect of the insecticide treatment. There were no significant differences in population size or recruitment between control and treatment grids which could be directly related to the treatment. Survival rate was significantly lower on the treatment grid than on the control grid after spraying in 1983; however, survival rate was higher on the treatment grid after spraying in 1982. Significantly fewer pregnant adults were found on the treatment grid after spraying in 1982, whereas the proportions of voles lactating or with perforate vaginas or open pubic symphyses were slightly higher or remained unchanged during this period. Relative weight change was not affected by the treatment. Results do not indicate any pattern of inhibitory effects from the insecticide treatment. Field application of Orthene® did not have an adverse effect on this Microtus population.

  9. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    NASA Astrophysics Data System (ADS)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes a fluid point through the body motion. The addition of a mass source/sink together with momentum forcing, proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150], reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid through the body motion. The magnitude of the velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the time step size.

  10. Simulating flame lift-off characteristics of diesel and biodiesel fuels using detailed chemical-kinetic mechanisms and LES turbulence model.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Som, S; Longman, D. E.; Luo, Z

    2012-01-01

    Combustion in direct-injection diesel engines occurs in a lifted, turbulent diffusion flame mode. Numerous studies indicate that the combustion and emissions in such engines are strongly influenced by the lifted flame characteristics, which are in turn determined by fuel and air mixing in the upstream region of the lifted flame, and consequently by the liquid breakup and spray development processes. From a numerical standpoint, these spray combustion processes depend heavily on the choice of underlying spray, combustion, and turbulence models. The present numerical study investigates the influence of different chemical kinetic mechanisms for diesel and biodiesel fuels, as well as Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) turbulence models, on predicting flame lift-off lengths (LOLs) and ignition delays. Specifically, two chemical kinetic mechanisms for n-heptane (NHPT) and three for biodiesel surrogates are investigated. In addition, the RNG k-ε (RANS) model is compared to the Smagorinsky-based LES turbulence model. Using adaptive grid resolution, minimum grid sizes of 250 µm and 125 µm were obtained for the RANS and LES cases, respectively. Validations of these models were performed against experimental data from Sandia National Laboratories in a constant-volume combustion chamber. Ignition delay and flame lift-off validations were performed at different ambient temperature conditions. The LES model predicts lower ignition delays and qualitatively better flame structures compared to the RNG k-ε model. The use of realistic chemistry and a ternary surrogate mixture, which consists of methyl decanoate, methyl 9-decenoate, and NHPT, results in better predicted LOLs and ignition delays. For diesel fuel, though, only marginal improvements are observed when using larger mechanisms. However, these improved predictions come at a significant increase in computational cost.

  11. Point Cloud Oriented Shoulder Line Extraction in Loess Hilly Area

    NASA Astrophysics Data System (ADS)

    Min, Li; Xin, Yang; Liyang, Xiong

    2016-06-01

    The shoulder line is a significant terrain line in the hilly areas of the Loess Plateau in China, dividing the surface into positive and negative terrain (P-N terrains). Because the point cloud vegetation-removal methods for P-N terrains differ, shoulder line extraction is an imperative preprocessing step. In this paper, we propose an automatic shoulder line extraction method based on point clouds. The workflow is as follows: (i) ground points are selected using a grid filter in order to remove most noisy points. (ii) Based on a DEM interpolated from those ground points, slope is mapped and classified into two classes (P-N terrains) using the Natural Break classification method. (iii) The common boundary between the two slope classes is extracted as the shoulder line candidate. (iv) The filter grid size is adjusted and steps i-iii repeated until the shoulder line candidate matches its real location. (v) The shoulder line of the whole area is generated. The test area is located in Madigou, Jingbian County, Shaanxi Province, China. A total of 600 million points were acquired in the 0.23 km² test area using a Riegl VZ400 3D laser scanner in August 2014. Due to limited computing performance, the test area was divided into 60 blocks, and 13 of them around the shoulder line were selected for optimizing the filter grid size. The experimental results show that the optimal filter grid size varies across sample areas, and that a power-function relation exists between filter grid size and point density. The optimal grid size was determined from this relation, and the shoulder lines of all 60 blocks were then extracted. Compared with manual interpretation results, the overall accuracy reaches 85%. This method can be applied to shoulder line extraction in hilly areas, which is crucial for point cloud denoising and high-accuracy DEM generation.
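
    Step (i) of the workflow, selecting ground points with a grid filter, is commonly implemented as keeping the lowest point in each horizontal grid cell. A minimal sketch under that assumption (the paper does not specify the filter's exact rule):

```python
import numpy as np

def grid_filter_ground(points, cell):
    """Select candidate ground points: the lowest point in every
    cell x cell column of a horizontal grid. `points` is an (N, 3)
    array of xyz coordinates."""
    ij = np.floor(points[:, :2] / cell).astype(int)  # horizontal cell index
    ground = {}
    for (i, j), p in zip(map(tuple, ij), points):
        key = (i, j)
        if key not in ground or p[2] < ground[key][2]:
            ground[key] = p                          # keep the lowest z per cell
    return np.array(list(ground.values()))

# A vegetation point (z = 5) above a ground point (z = 0) in the same cell,
# plus one point in a neighbouring cell.
pts = np.array([[0.5, 0.5, 5.0], [0.7, 0.3, 0.0], [2.5, 0.5, 1.0]])
g = grid_filter_ground(pts, cell=1.0)
```

The choice of `cell` is exactly the filter grid size the abstract optimizes: too small and vegetation survives as "lowest" points, too large and real terrain detail is lost.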

  12. Elliptic generation of composite three-dimensional grids about realistic aircraft

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.

    1986-01-01

    An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.
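
    With the inhomogeneous control terms dropped, the Poisson grid-generation equation of this abstract reduces to Laplace's equation, and interior nodes can be relaxed by repeatedly averaging their neighbours while boundary nodes stay fixed. A minimal 2D sketch of that simplified case (a Jacobi-style relaxation on a unit square; the control terms that steer spacing and skewness in the actual method are omitted):

```python
import numpy as np

def laplace_smooth_grid(X, Y, iters=200):
    """Relax interior node coordinates by neighbour averaging (a
    Jacobi-style sweep). This is elliptic grid generation with the
    inhomogeneous control terms dropped, so Poisson reduces to Laplace;
    boundary nodes stay fixed."""
    for _ in range(iters):
        X[1:-1, 1:-1] = 0.25 * (X[:-2, 1:-1] + X[2:, 1:-1]
                                + X[1:-1, :-2] + X[1:-1, 2:])
        Y[1:-1, 1:-1] = 0.25 * (Y[:-2, 1:-1] + Y[2:, 1:-1]
                                + Y[1:-1, :-2] + Y[1:-1, 2:])
    return X, Y

# Unit-square boundary with a deliberately skewed interior.
n = 9
u = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(u, u, indexing="ij")
X[1:-1, 1:-1] += 0.3 * np.random.default_rng(1).random((n - 2, n - 2))
X, Y = laplace_smooth_grid(X, Y)
# The interior relaxes back toward the uniform grid, removing the skewness.
```

In the full method the inhomogeneous terms play the role this averaging cannot: they pull grid lines toward the body surface to control cell size and nearly eliminate skewness there.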

  13. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  14. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on large eddy simulation (LES) of wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx+ and Δz+, with strongly anisotropic cells and a first Δy+ ~ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with a quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and to varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and choices of SGS model.
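
    The frictional scaling used in this abstract converts grid spacings to wall units via Δx⁺ = Δx·u_τ/ν, with Re_τ = u_τ·δ/ν tying the parameters together. A small sketch; the numerical values are illustrative, not the paper's:

```python
def wall_units(delta, nu, u_tau, dx, dz):
    """Convert streamwise/spanwise grid spacings to wall (viscous) units,
    dx+ = dx * u_tau / nu, and report Re_tau = u_tau * delta / nu."""
    return dx * u_tau / nu, dz * u_tau / nu, u_tau * delta / nu

# Illustrative channel: half-height 1, Re_tau = 300 as in the main case.
dxp, dzp, re_tau = wall_units(delta=1.0, nu=1.0 / 300.0, u_tau=1.0,
                              dx=0.1, dz=0.05)
```

This is the conversion behind statements like "Δz⁺ controls the friction-velocity error": the same physical spacing corresponds to coarser wall units as Re_τ grows.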

  15. Mesh Dependence of Shear-Driven Boundary Layers in Stable Stratification Generated by Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Berg, Jacob; Patton, Edward G.; Sullivan, Peter S.

    2017-11-01

    The effect of mesh resolution and size on shear-driven atmospheric boundary layers in a stably stratified environment is investigated with the NCAR pseudo-spectral LES model (J. Atmos. Sci. v68, p2395, 2011 and J. Atmos. Sci. v73, p1815, 2016). The model applies FFTs in the two horizontal directions and finite differencing in the vertical direction. With vanishing heat flux at the surface and a capping inversion entraining potential temperature into the boundary layer, the situation is often called the conditionally neutral atmospheric boundary layer (ABL). Due to its relevance in high-wind applications such as wind power meteorology, we emphasize second-order statistics important for wind turbines, including spectral information. The simulations range in mesh size from 64³ to 1024³ grid points. Due to the non-stationarity of the problem, different simulations are compared at equal eddy-turnover times. Whereas grid convergence is mostly achieved in the middle portion of the ABL, close to the surface, where the presence of the ground limits the growth of the energy-containing eddies, second-order statistics are not converged on the studied meshes. Higher-order structure functions also reveal non-Gaussian statistics highly dependent on the resolution.

  16. Grid2: A Program for Rapid Estimation of the Jovian Radiation Environment: A Numeric Implementation of the GIRE2 Jovian Radiation Model for Estimating Trapped Radiation for Mission Concept Studies

    NASA Technical Reports Server (NTRS)

    Evans, R. W.; Brinza, D. E.

    2014-01-01

    Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) Jovian radiation model to compute fluences and doses for Jupiter missions. (Note: the successive versions of these two codes have been GIRE and GIRE2, and likewise Grid and Grid2.) While GIRE2 is an important improvement over the original GIRE radiation model, the GIRE2 model can take a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up the execution of the model: minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian radii (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.

  17. Particle model of full-size ITER-relevant negative ion source.

    PubMed

    Taccogna, F; Minelli, P; Ippolito, N

    2016-02-01

    This work represents the first attempt to model the full-size ITER-relevant negative ion source, including the expansion, extraction, and part of the acceleration regions, while keeping the mesh size fine enough to resolve every single aperture. The model consists of a 2.5D particle-in-cell Monte Carlo collision representation of the plane perpendicular to the filter field lines. The magnetic filter and electron deflection field have been included, and a negative ion current density of j(H−) = 660 A/m² from the plasma grid (PG) is used as the parameter for the neutral conversion. The driver is not yet included, and a fixed ambipolar flux is emitted from the driver exit plane. Results show a strong asymmetry along the PG driven by the electron Hall (E × B and diamagnetic) drift perpendicular to the filter field. This asymmetry creates an important inhomogeneity in the electron current extracted from the different apertures. A steady state is not yet reached after 15 μs.

  18. Grid Transmission Expansion Planning Model Based on Grid Vulnerability

    NASA Astrophysics Data System (ADS)

    Tang, Quan; Wang, Xi; Li, Ting; Zhang, Quanming; Zhang, Hongli; Li, Huaqiang

    2018-03-01

    Based on grid vulnerability and uniformity theory, a global network structure and state vulnerability factor model is proposed to measure different grid models. A multi-objective power grid planning model is then established that considers global power network vulnerability, economy, and grid security constraints, and an improved chaos crossover and mutation genetic algorithm is used to search for the optimal plan. Because the objectives of the multi-objective optimization have non-uniform dimensions and weights that are not easily assigned, a principal component analysis (PCA) method is used for a comprehensive assessment of the population at every generation, making the assessment results more objective and credible. The feasibility and effectiveness of the proposed model are validated by simulation results for the Garver 6-bus and Garver 18-bus systems.
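The PCA-based assessment step can be sketched as follows: objective values with different dimensions are standardized, and principal components weighted by their explained variance yield one composite score per candidate plan. The objective matrix and its column meanings below are invented for illustration, not taken from the paper.

```python
# Hedged sketch of PCA-based comprehensive assessment of a GA population:
# standardize the objective matrix, extract principal components via SVD,
# and combine the projections weighted by explained variance.
import numpy as np

def pca_scores(objectives):
    """objectives: (n_plans, n_objectives) matrix -> (n_plans,) composite scores."""
    x = (objectives - objectives.mean(axis=0)) / objectives.std(axis=0)
    _, s, vt = np.linalg.svd(x, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # variance share of each PC
    # Fix the arbitrary sign of each PC so the scores are deterministic.
    signs = np.sign(vt.sum(axis=1))
    signs[signs == 0] = 1
    vt = vt * signs[:, None]
    # Composite score: projection onto each PC, weighted by its variance share.
    return (x @ vt.T) @ explained

# Made-up objective values: [cost, vulnerability index, security margin deficit]
plans = np.array([
    [1.2e6, 0.30, 0.10],
    [1.0e6, 0.45, 0.12],
    [1.5e6, 0.20, 0.05],
    [1.1e6, 0.35, 0.09],
])
scores = pca_scores(plans)
```

Because the objectives are standardized before projection, the composite scores are centered around zero and no manual weighting of dimensionally different objectives is needed.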

  19. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information

    PubMed Central

    2010-01-01

    Background Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used not only for pathway drawing but also for circuit placement and www visualization and so on because of the harmonized appearance of its results. For pathway drawing, location information is essential for its comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. Results We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, the mapping of nodes to grid points that minimizes a cost function is searched. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. Conclusions The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways; endothelial cell model, Fas-induced apoptosis model, and C. elegans cell fate simulation model. From the positional constraints, all the results of our algorithm satisfy location information, and hence, more comprehensible layouts are obtained as compared to the spring embedder algorithm. 
From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to the positional constraints. For a fair comparison, we also apply our proposed method without positional constraints; these results contain fewer crossings than those of the spring embedder algorithm. We also compared layouts of the proposed algorithm with and without compartment update and verified that the latter can reach better local optima. PMID:20565884
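A minimal sketch of the kind of cost function such a grid-layout algorithm minimizes: nodes sit on integer grid points, a spring-embedder term pulls connected nodes toward an ideal edge length, and a penalty enforces positional (compartment) constraints. The weights and the tiny example are assumptions for illustration, not the paper's implementation.

```python
# Grid-layout cost sketch: spring-embedder term + positional-constraint penalty.
import math

def layout_cost(positions, edges, allowed, ideal=1.0, w_spring=1.0, w_region=10.0):
    """positions: {node: (ix, iy)} on grid points
    edges:     iterable of (u, v) node pairs
    allowed:   {node: set of permitted grid points} encoding location info
    """
    cost = 0.0
    for u, v in edges:
        (x1, y1), (x2, y2) = positions[u], positions[v]
        d = math.hypot(x1 - x2, y1 - y2)
        cost += w_spring * (d - ideal) ** 2   # spring-embedder component
    for node, p in positions.items():
        if node in allowed and p not in allowed[node]:
            cost += w_region                   # compartment-constraint violation
    return cost

# Two connected nodes at unit distance, both inside their compartments:
pos = {"a": (0, 0), "b": (1, 0)}
regions = {"a": {(0, 0)}, "b": {(1, 0)}}
print(layout_cost(pos, [("a", "b")], regions))  # 0.0
```

A search procedure (e.g. swapping node-to-grid-point assignments) would then look for the mapping that minimizes this cost; hard positional constraints are what let complex shapes such as membranes be respected.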

  20. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package that is designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models, (2) a decoupled parallel dynamic simulation algorithm with optimized computation architecture to better leverage HPC resources and technologies, (3) options for HPC-based linear and iterative solvers, (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers, and (5) easy integration of new dynamic models and related algorithms into the software package.

  1. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems require appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology problem is treated as a capacitated vehicle routing problem (CVRP) combined with a local search metaheuristic. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we include power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
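The CVRP view of grid synthesis can be sketched with a greedy construction pass: loads are assigned to feeder "routes" starting from a substation, each limited by a capacity analogous to line loading (DINGO additionally applies local-search improvement, omitted here). Coordinates, demands, and the capacity below are made up, and every single demand is assumed to fit within the capacity.

```python
# Greedy nearest-neighbour construction for a capacitated routing sketch
# of MV feeder synthesis. Illustrative only; not DINGO's implementation.
import math

def build_feeders(substation, loads, capacity):
    """loads: {name: ((x, y), demand)} -> list of feeder routes (name lists).
    Assumes each individual demand is <= capacity."""
    remaining = dict(loads)
    routes = []
    while remaining:
        route, used, pos = [], 0.0, substation
        while True:
            # Nearest unserved load that still fits on this feeder.
            candidates = [(math.dist(pos, p), n)
                          for n, (p, d) in remaining.items() if used + d <= capacity]
            if not candidates:
                break
            _, n = min(candidates)
            p, d = remaining.pop(n)
            route.append(n)
            used += d
            pos = p
        routes.append(route)
    return routes

loads = {"L1": ((1, 0), 40), "L2": ((2, 0), 40), "L3": ((0, 2), 50)}
routes = build_feeders((0, 0), loads, capacity=80)
```

A local-search step (e.g. moving or swapping loads between feeders) would then reduce total cable length while keeping every feeder within its capacity.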

  2. Small-Scale Dissipation in Binary-Species Transitional Mixing Layers

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Okong'o, Nora

    2011-01-01

    Motivated by large eddy simulation (LES) modeling of supercritical turbulent flows, transitional states of databases obtained from direct numerical simulations (DNS) of binary-species supercritical temporal mixing layers were examined to understand the subgrid-scale dissipation and its variation with filter size. Examination of the DNS-scale domain-averaged dissipation confirms previous findings that, of the three modes of viscous, temperature, and species-mass dissipation, the species-mass dissipation is the main contributor to the total dissipation. The results revealed that the ratio of species-mass to total dissipation is nearly invariant across species systems and initial conditions. This dominance of the species-mass dissipation is due to high-density-gradient-magnitude (HDGM) regions populating the flow under the supercritical conditions of the simulations; such regions have also been observed in fully turbulent supercritical flows. Because the domain average reflects both the local values and the spatial extent of the HDGM regions, the response to filtering was expected to vary with these flow characteristics. All filtering here is performed in the dissipation range of the Kolmogorov spectrum, at filter sizes from 4 to 16 times the DNS grid spacing. The small-scale (subgrid-scale, SGS) dissipation was found by subtracting the filtered-field dissipation from the DNS-field dissipation. In contrast to the DNS dissipation, the SGS dissipation is not necessarily positive; negative values indicate backscatter. Backscatter was shown to be spatially widespread in all modes of dissipation and in the total dissipation (25 to 60 percent of the domain).
The maximum magnitude of the negative subgrid-scale dissipation was as much as 17 percent of the maximum positive subgrid-scale dissipation, indicating that backscatter is not only spatially widespread in these flows but also considerable in magnitude, and cannot be ignored for the purposes of LES modeling. The Smagorinsky model, for example, is unsuited for modeling SGS fluxes in LES because it cannot render backscatter. With increased filter size, there is only a modest decrease in the spatial extent of backscatter. The implication is that even at large LES grid spacing, the issue of backscatter and the related SGS-flux modeling decisions are unavoidable. As a fraction of the total dissipation, the small-scale dissipation is between 10 and 30 percent for a filter size that is four times the DNS grid spacing, with all OH cases bunched at 10 percent and the HN cases spanning 24-30 percent. A scale similarity was found in that the domain-average proportion of each small-scale dissipation mode, with respect to the total small-scale dissipation, is very similar to the equivalent results at the DNS scale. With increasing filter size, the proportion of the small-scale dissipation in the total dissipation increases substantially, although not quite proportionally: when the filter size increases four-fold, 52 percent of the dissipation for all OH runs, and 70 percent for HN runs, is contained in the subgrid-scale portion, with virtually no dependence on the initial conditions of the DNS. The indications from the dissipation analysis are that modeling efforts in LES of thermodynamically supercritical flows should be focused primarily on mass-flux effects, with temperature and viscous effects being secondary. The analysis also reveals a physical justification for scale-similarity-type models, although their suitability will need to be confirmed in a posteriori studies.

  3. Status of the seamless coupled modelling system ICON-ART

    NASA Astrophysics Data System (ADS)

    Vogel, Bernhard; Rieger, Daniel; Schroeter, Jenniffer; Bischoff-Gauss, Inge; Deetz, Konrad; Eckstein, Johannes; Foerstner, Jochen; Gasch, Philipp; Ruhnke, Roland; Vogel, Heike; Walter, Carolin; Weimer, Michael

    2016-04-01

    The integrated modelling framework ICON-ART [1] (ICOsahedral Nonhydrostatic - Aerosols and Reactive Trace gases) extends the numerical weather prediction modelling system ICON by modules for gas phase chemistry, aerosol dynamics, and related feedback processes. The nonhydrostatic global modelling system ICON [2] is a joint development of the German Weather Service (DWD) and the Max Planck Institute for Meteorology (MPI-M), with local grid refinement down to grid sizes of a few kilometers. It will be used for numerical weather prediction, climate projections, and research purposes. Since January 2016, ICON has run operationally at DWD for global weather forecasting with a grid size of 13 km. Analogous to its predecessor COSMO-ART [3], ICON-ART is designed to account for feedback processes between meteorological variables and atmospheric trace substances. Up to now, ICON-ART treats the dispersion of volcanic ash, radioactive tracers, and sea salt aerosol, as well as ozone-depleting stratospheric trace substances [1]. Recently, we have extended ICON-ART by a mineral dust emission scheme with global applicability and by nucleation parameterizations that allow the cloud microphysics to explicitly account for prognostic aerosol distributions. Very recently, an emission scheme for volatile organic compounds was also included. We present first results on the impact of natural aerosol (i.e., sea salt aerosol and mineral dust) on cloud properties and precipitation, as well as on the interaction of primary emitted particles with radiation. Ongoing developments are the coupling with a radiation scheme to calculate the photolysis frequencies, a coupling with the RADMKA (1) chemistry, and first steps to include isotopologues of water. Examples showing the capabilities of the model system will be presented, including a simulation of the transport of ozone-depleting short-lived trace gases from the surface into the stratosphere, as well as of long-lived tracers. [1] Rieger, D., et al.
(2015), ICON-ART - A new online-coupled model system from the global to regional scale, Geosci. Model Dev., doi:10.5194/gmd-8-1659-2015. [2] Zängl, G., et al. (2014), The ICON (ICOsahedral Non-hydrostatic) modelling framework of DWD and MPI-M: Description of the non-hydrostatic dynamical core, Q. J. R. Meteorol. Soc., doi:10.1002/qj.2378. [3] Vogel, B., et al. (2009), The comprehensive model system COSMO-ART - Radiative impact of aerosol on the state of the atmosphere on the regional scale, Atmos. Chem. Phys., 9, 8661-8680.

  4. Alighting of Tabanidae and muscids on natural and simulated hosts in the Sudan.

    PubMed

    Mohamed-Ahmed, M M; Mihok, S

    2009-12-01

    Alighting of horse flies (Diptera: Tabanidae) and non-biting muscids (Diptera: Muscidae) was studied at Khartoum, Sudan, using black cylindrical models mimicking a goat, calf and cow. Flies were intercepted by attaching electrocution grids or clear adhesive film to models. Alighting sites and defensive behaviour were also documented on hosts through observation. Female Tabanus sufis (Jennicke), T. taeniola (Palisot) and Atylotus agrestis (Wiedemann) were the main tabanids captured. Muscids landed in equal numbers on all sizes of models. They had a strong preference for the upper portions of both models and hosts. Landings of T. taeniola and A. agrestis increased with model size, but not so for T. sufis. T. taeniola and A. agrestis scarcely alighted on the legs of models whereas 60-78% of T. sufis did so. Landings of T. sufis on artificial legs did not vary with model size. Landings of all tabanids on the lower and upper portions of a model increased with model size. For both hosts and models, most tabanids (88-98%) alighted on the lower half and legs. Most muscids (63-89%) alighted on the upper half. Landing of tabanids on the cow was 34.9 and 69.3 times greater than that on the calf and goat, respectively. These results are discussed in relation to strategies for the control of blood-sucking flies associated with farm animals using either insecticide-treated live baits or their mimics.

  5. Scaling range sizes to threats for robust predictions of risks to biodiversity.

    PubMed

    Keith, David A; Akçakaya, H Resit; Murray, Nicholas J

    2018-04-01

    Assessments of risk to biodiversity often rely on spatial distributions of species and ecosystems. Range-size metrics used extensively in these assessments, such as area of occupancy (AOO), are sensitive to measurement scale, prompting proposals to measure them at finer scales or at different scales based on the shape of the distribution or ecological characteristics of the biota. Despite its dominant role in red-list assessments for decades, appropriate spatial scales of AOO for predicting risks of species' extinction or ecosystem collapse remain untested and contentious. There are no quantitative evaluations of the scale-sensitivity of AOO as a predictor of risks, the relationship between optimal AOO scale and threat scale, or the effect of grid uncertainty. We used stochastic simulation models to explore risks to ecosystems and species with clustered, dispersed, and linear distribution patterns subject to regimes of threat events with different frequency and spatial extent. Area of occupancy was an accurate predictor of risk (0.81<|r|<0.98) and performed optimally when measured with grid cells 0.1-1.0 times the largest plausible area threatened by an event. Contrary to previous assertions, estimates of AOO at these relatively coarse scales were better predictors of risk than finer-scale estimates of AOO (e.g., when measurement cells are <1% of the area of the largest threat). The optimal scale depended on the spatial scales of threats more than the shape or size of biotic distributions. Although we found appreciable potential for grid-measurement errors, current IUCN guidelines for estimating AOO neutralize geometric uncertainty and incorporate effective scaling procedures for assessing risks posed by landscape-scale threats to species and ecosystems. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
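The scale dependence of AOO that this study evaluates can be sketched in a few lines: occurrences are binned into square grid cells of a chosen side length, and AOO is the count of occupied cells (times the cell area, if an area is wanted). The point records below are invented for illustration.

```python
# AOO at a given measurement scale: count occupied grid cells.
def aoo(points, cell):
    """Number of occupied square grid cells of side `cell` (same units as points)."""
    occupied = {(int(x // cell), int(y // cell)) for x, y in points}
    return len(occupied)

# Two nearby clusters of made-up occurrence records:
pts = [(0.5, 0.5), (1.5, 0.5), (9.2, 9.8), (9.4, 9.9)]
print(aoo(pts, cell=1.0))   # fine grid: more, smaller occupied cells
print(aoo(pts, cell=10.0))  # coarse grid: the clusters merge into one cell
```

Repeating this count across cell sizes is what exposes the scale sensitivity the authors quantify: the same distribution yields very different AOO values at fine versus coarse grids.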

  6. Semi-automated landform classification for hazard mapping of soil liquefaction by earthquake

    NASA Astrophysics Data System (ADS)

    Nakano, Takayuki

    2018-05-01

    Soil liquefaction damage has been caused by large earthquakes in Japan, and similar damage is a concern for future large earthquakes. At the same time, the preparation of soil liquefaction risk maps (soil liquefaction hazard maps) is impeded by the difficulty of evaluating soil liquefaction risk. In general, relative soil liquefaction risk can be evaluated from landform classification data using empirical rules based on the relationship between the extent of soil liquefaction damage in past earthquakes and landform classification items. Therefore, I rearranged the relationship between landform classification items and soil liquefaction risk in an intelligible form, in order to enable the evaluation of soil liquefaction risk from landform classification data appropriately and efficiently. I also developed a new method of generating landform classification data with a 50-m grid size from existing landform classification data with a 250-m grid size, using digital elevation model (DEM) data and multi-band satellite image data, in order to evaluate soil liquefaction risk in greater spatial detail. It is expected that the products of this study will contribute to the efficient production of soil liquefaction hazard maps by local governments.

  7. Impact of cloud horizontal inhomogeneity and directional sampling on the retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-11-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce valid retrievals and introduce small biases to the CDR (~ 1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method to the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.

  8. Polarizing Grids, their Assemblies and Beams of Radiation

    NASA Technical Reports Server (NTRS)

    Houde, Martin; Akeson, Rachel L.; Carlstrom, John E.; Lamb, James W.; Schleuning, David A.; Woody, David P.

    2001-01-01

    This article gives an analysis of the behavior of polarizing grids and reflecting polarizers by solving Maxwell's equations, for arbitrary angles of incidence and grid rotation, for cases where the excitation is provided by an incident plane wave or a beam of radiation. The scattering and impedance matrix representations are derived and used to solve more complicated configurations of grid assemblies. The results are also compared with data obtained in the calibration of reflecting polarizers at the Owens Valley Radio Observatory (OVRO). From these analyses, we propose a method for choosing the optimum grid parameters (wire radius and spacing). We also provide a study of the effects of two types of errors (in wire separation and in wire radius) that can be introduced in the fabrication of a grid.

  9. Andreas Acrivos Dissertation Award Talk: Modeling drag forces and velocity fluctuations in wall-bounded flows at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Yang, Xiang

    2017-11-01

    The sizes of fluid motions in wall-bounded flows scale approximately as their distances from the wall. At high Reynolds numbers, resolving the near-wall, small-scale, yet momentum-transferring eddies is computationally intensive, and to alleviate the strict near-wall grid resolution requirement, a wall model is usually used. The wall model of interest here is the integral wall model. This model parameterizes the near-wall sub-grid velocity profile as comprising a linear inner layer and a logarithmic meso-layer, with one additional term that accounts for the effects of flow acceleration, pressure gradients, etc. We use the integral wall model for wall-modeled large-eddy simulations (WMLES) of turbulent boundary layers over rough walls. The effects of rough-wall topology on drag forces are investigated. A rough-wall model is then developed based on considerations of such effects, which are now known as mutual sheltering among roughness elements. Last, we discuss briefly a new interpretation of the Townsend attached eddy hypothesis: the hierarchical random additive process (HRAP) model. The analogy between the energy cascade and the momentum cascade is mathematically formal, as HRAP follows the multifractal formalism that was extensively used for the energy cascade.
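The two-layer near-wall parameterization mentioned here can be sketched as a linear viscous sublayer matched to a logarithmic layer. The constants and matching height below are standard textbook values, not the paper's, and the integral wall model's additional acceleration/pressure-gradient term is omitted.

```python
# Two-layer near-wall velocity sketch in wall units:
# u+ = y+ in the linear inner layer, u+ = (1/kappa) ln(y+) + B in the log layer.
import math

KAPPA, B = 0.41, 5.2   # standard log-law constants (illustrative choice)

def u_plus(y_plus, y_match=11.0):
    """Non-dimensional velocity u+ as a function of wall distance y+.
    The match point ~11 makes the two branches approximately continuous."""
    if y_plus <= y_match:
        return y_plus                        # linear inner layer
    return math.log(y_plus) / KAPPA + B      # logarithmic layer

# e.g. u_plus(5.0) lies on the linear branch; u_plus(100.0) on the log branch.
```

In a wall-modeled LES, a profile of this kind (plus the extra integral-wall-model term) supplies the wall shear stress to the outer-layer grid instead of resolving the sublayer directly.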

  10. Mathematical Aspects of Finite Element Methods for Incompressible Viscous Flows.

    DTIC Science & Technology

    1986-09-01

    respectively. Here h is a parameter which is usually related to the size of the grid associated with the finite element partitioning of Q. Then one... grid and of not at least performing serious mesh refinement studies. It also points out the usefulness of rigorous results concerning the stability... overconstrained the approximate velocity field. However, by employing different grids for the pressure and velocity fields, the linear-constant

  11. Evaluation of the UnTRIM model for 3-D tidal circulation

    USGS Publications Warehouse

    Cheng, R.T.; Casulli, V.; ,

    2001-01-01

    A family of numerical models, known as the TRIM models, shares the same modeling philosophy for solving the shallow water equations. A characteristic analysis of the shallow water equations points out that the numerical instability is controlled by the gravity wave terms in the momentum equations and by the transport terms in the continuity equation. A semi-implicit finite-difference scheme has been formulated so that these terms and the vertical diffusion terms are treated implicitly and the remaining terms explicitly to control the numerical stability, and the computations are carried out over a uniform finite-difference computational mesh without invoking horizontal or vertical coordinate transformations. An unstructured grid version of the TRIM model is introduced, UnTRIM (pronounced "you trim"), which preserves these basic numerical properties and the modeling philosophy; only the computations are carried out over an unstructured orthogonal grid. The unstructured grid offers flexibility in representing complex study areas, so that fine grid resolution can be placed in regions of interest and coarse grids are used to cover the remaining domain. Thus, the computational effort is concentrated in areas of importance, and an overall computational saving can be achieved because the total number of grid points is dramatically reduced. To use this modeling approach, an unstructured grid mesh must be generated to properly reflect the properties of the domain of the investigation. The new modeling flexibility in grid structure is accompanied by new challenges associated with grid generation. To take full advantage of this new model flexibility, the model grid generation should be guided by insights into the physics of the problem, and the insights needed may require a higher degree of modeling skill.

  12. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems

    PubMed Central

    Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000

  13. PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems.

    PubMed

    Mohamed, Mohamed A; Eltamaly, Ali M; Alolah, Abdulrahman I

    2016-01-01

    This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers.
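The sizing loop described in these two records can be sketched with a compact particle swarm optimization over a three-component decision vector (PV, wind, battery). The cost model, its coefficients, the demand figure, and the bounds are toy assumptions for illustration, not the paper's system data.

```python
# Compact PSO sketch for hybrid-system component sizing. The decision
# vector is (PV units, wind-turbine units, battery units); a made-up
# cost function heavily penalizes unmet demand. Illustrative only.
import random

random.seed(1)  # deterministic demonstration

def cost(x):
    pv, wt, batt = x
    supply = 4.0 * pv + 6.0 * wt + 0.5 * batt   # toy daily energy contributions
    unmet = max(0.0, 100.0 - supply)            # assumed demand of 100 units
    return 1.0 * pv + 3.0 * wt + 0.2 * batt + 50.0 * unmet

def pso(dim=3, n=20, iters=200, lo=0.0, hi=30.0):
    pts = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pts]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i, p in enumerate(pts):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - p[d])     # cognitive
                             + 1.5 * r2 * (gbest[d] - p[d]))       # social
                p[d] = min(hi, max(lo, p[d] + vel[i][d]))          # clamp to bounds
            if cost(p) < cost(pbest[i]):
                pbest[i] = p[:]
                if cost(p) < cost(gbest):
                    gbest = p[:]
    return gbest

best = pso()
```

The abstract's comparison against an iterative technique would amount to sweeping the same decision space on a grid and checking that the swarm reaches an equal or lower cost with far fewer evaluations.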

  14. Study on the glaze ice accretion of wind turbine with various chord lengths

    NASA Astrophysics Data System (ADS)

    Liang, Jian; Liu, Maolian; Wang, Ruiqi; Wang, Yuhang

    2018-02-01

    Wind turbine icing often occurs in winter; it changes the aerodynamic characteristics of the blades and reduces the work efficiency of the wind turbine. In this paper, a glaze ice model is established for a horizontal-axis wind turbine in 3-D. The model comprises grid generation, two-phase flow simulation, and heat and mass transfer. Results show that smaller wind turbines suffer from more serious icing, reflected in a larger ice thickness: both the collision efficiency and the heat transfer coefficient increase as the turbine size decreases.

  15. A single-cell spiking model for the origin of grid-cell patterns

    PubMed Central

    Kempter, Richard

    2017-01-01

    Spatial cognition in mammals is thought to rely on the activity of grid cells in the entorhinal cortex, yet the fundamental principles underlying the origin of grid-cell firing are still debated. Grid-like patterns could emerge via Hebbian learning and neuronal adaptation, but current computational models have remained too abstract to allow direct confrontation with experimental data. Here, we propose a single-cell spiking model that generates grid firing fields via spike-rate adaptation and spike-timing-dependent plasticity. Through rigorous mathematical analysis applicable in the linear limit, we quantitatively predict the requirements for grid-pattern formation, and we establish a direct link to classical pattern-forming systems of the Turing type. Our study lays the groundwork for biophysically realistic models of grid-cell activity. PMID:28968386

  16. Evapotranspiration from nonuniform surfaces - A first approach for short-term numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Wetzel, Peter J.; Chang, Jy-Tai

    1988-01-01

    Observations of surface heterogeneity of soil moisture from scales of meters to hundreds of kilometers are discussed, and a relationship between grid element size and soil moisture variability is presented. An evapotranspiration model is presented which accounts for the variability of soil moisture, standing surface water, and vegetation internal and stomatal resistance to moisture flow from the soil. The mean values and standard deviations of these parameters are required as input to the model. Tests of this model against field observations are reported, and extensive sensitivity tests are presented which explore the importance of including subgrid-scale variability in an evapotranspiration model.

  17. [Analysis on difference of richness of traditional Chinese medicine resources in Chongqing based on grid technology].

    PubMed

    Zhang, Xiao-Bo; Qu, Xian-You; Li, Meng; Wang, Hui; Jing, Zhi-Xian; Liu, Xiang; Zhang, Zhi-Wei; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    After the national and local traditional Chinese medicine resources census work is completed, a large amount of data on Chinese medicine resources and their distribution will be summarized. Species richness between regions is a valid indicator for objectively reflecting inter-regional Chinese medicine resources. Because counties differ greatly in area, assessing the richness of traditional Chinese medicine resources with the county as the statistical unit biases the regional richness statistics. Statistical methods based on a regular grid can reduce the differences in richness that are caused by statistical units of different sizes. Taking Chongqing as an example and based on the existing survey data, the differences in the richness of traditional Chinese medicine resources at different grid scales were compared and analyzed. The results showed that a 30 km grid can be selected, at which the richness of Chinese medicine resources in Chongqing better reflects the objective situation of resource richness in traditional Chinese medicine. Copyright© by the Chinese Pharmaceutical Association.
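    The grid statistic the abstract describes amounts to counting distinct species per equal-size cell, so that units of equal area become comparable. A minimal sketch with invented occurrence records:

```python
# Sketch (invented data): binning species occurrence records onto a regular
# grid and counting distinct species per cell -- a richness statistic over
# equal-size units, as described in the abstract.
from collections import defaultdict

# (species, x_km, y_km) occurrence records -- purely illustrative
records = [
    ("A", 12.0, 3.0), ("B", 14.5, 8.2), ("A", 41.0, 33.0),
    ("C", 44.9, 36.5), ("B", 43.2, 31.1), ("C", 12.5, 4.4),
]

def richness_per_cell(records, cell_km):
    """Map each record to its grid cell, then count distinct species per cell."""
    cells = defaultdict(set)
    for species, x, y in records:
        key = (int(x // cell_km), int(y // cell_km))
        cells[key].add(species)
    return {key: len(spp) for key, spp in cells.items()}

print(richness_per_cell(records, 30.0))
```

    Re-running the same count at several cell sizes (e.g. 10, 30, 50 km) is how one would compare richness maps across grid scales, as the study does for Chongqing.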

  18. New ghost-node method for linking different models with varied grid refinement

    USGS Publications Warehouse

    James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.

    2006-01-01

    A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node method for linking a refined "child" model contained within a larger and coarser "parent" model, based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that of coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to the matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means of coupling distinct models, because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model were used to simulate flow in the child model's domain.
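    The core ghost-node idea, boundary values for the child grid interpolated from surrounding parent-grid nodes, can be illustrated in 1D. This is only a sketch of the concept, not the MODFLOW/FEHM coupling itself; positions and heads below are invented:

```python
# Minimal 1D sketch of the ghost-node concept (not the actual MODFLOW/FEHM
# coupling): the head at a 'ghost' point on the child-grid boundary is
# interpolated linearly from the two bracketing parent-grid nodes.
def ghost_node_head(parent_x, parent_h, xg):
    """Linear interpolation of parent heads at ghost-node position xg."""
    for (x0, h0), (x1, h1) in zip(zip(parent_x, parent_h),
                                  zip(parent_x[1:], parent_h[1:])):
        if x0 <= xg <= x1:
            t = (xg - x0) / (x1 - x0)
            return (1 - t) * h0 + t * h1
    raise ValueError("ghost node outside parent grid")

parent_x = [0.0, 100.0, 200.0]   # parent node positions (m), invented
parent_h = [10.0, 9.0, 8.5]      # simulated heads (m), invented
print(ghost_node_head(parent_x, parent_h, 150.0))
```

    In the iterative scheme the abstract cites, such interpolated values feed the child model's boundary, and child-model fluxes are passed back to the parent until both solutions are consistent.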

  19. On wave breaking for Boussinesq-type models

    NASA Astrophysics Data System (ADS)

    Kazolea, M.; Ricchiuto, M.

    2018-03-01

    We consider the issue of wave breaking closure for Boussinesq-type models, and attempt to provide some more understanding of the sensitivity of some closure approaches to the numerical set-up, in particular to the mesh size. For relatively classical choices of weakly dispersive propagation models, we compare two closure strategies. The first is the hybrid method, which consists of suppressing the dispersive terms in breaking regions, as initially suggested by Tonelli and Petti in 2009. The second is an eddy viscosity approach based on the solution of a turbulent kinetic energy equation. The formulation follows early work by O. Nwogu in the 1990s and some more recent developments by Zhang and co-workers (Ocean Mod. 2014), adapted to be consistent with the wave breaking detection used here. We perform a study of the behaviour of the two closures for different mesh sizes, with attention to the possibility of obtaining grid-independent results. Based on classical shallow water theory, we also suggest some monitors to quantify the different contributions to the dissipation mechanism, differentiating those associated with the scheme from those of the partial differential equation. These quantities are used to analyze the dynamics of dissipation in some classical benchmarks, and its dependence on the mesh size. Our main results show that numerical dissipation contributes very little to the results obtained with the eddy viscosity method. This closure shows little sensitivity to the grid, and may lend itself to the development and use of non-dissipative/energy-conserving numerical methods. The opposite is observed for the hybrid approach, for which numerical dissipation plays a key role and, unfortunately, is sensitive to the size of the mesh. When working, the two approaches provide results in the same ballpark, which agree with what is usually reported in the literature.
    With the hybrid method, however, the inception of instabilities is observed at mesh sizes which vary from case to case and depend on the propagation model. These results are confirmed by numerical computations on a large number of classical benchmarks. The objectives of this work are: to perform a systematic study of the behaviour of the two closures for different mesh sizes, with attention to the possibility of obtaining grid-independent results; to gain insight into the mechanism actually responsible for wave breaking by providing a quantitative description of the different contributions to the dissipation mechanism, differentiating those associated with the numerical scheme from those introduced at the PDE level; to provide some understanding of the sensitivity of this dissipation to the mesh size; and to demonstrate the equivalent capabilities of the approaches studied in reproducing simple as well as complex wave transformations, while showing the substantial difference in the underlying dissipation mechanisms. The paper is organised as follows. Section 2 presents the two Boussinesq approximations used in this work. Section 3 discusses the numerical approximation of the models, as well as of the wave breaking closure. The comparison of the two approaches on a wide selection of benchmarks is discussed in Section 4. The paper ends with a summary and a sketch of the future and ongoing developments of this work.
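    The hybrid closure described above amounts to locally dropping the dispersive terms wherever a breaking detector fires, so that the equations revert to the shock-capturing shallow water system in those cells. A toy cell-wise sketch (the flags and values are illustrative, not the paper's formulation):

```python
# Toy sketch of the hybrid wave-breaking closure: in cells flagged as
# breaking, the dispersive contribution to the right-hand side is dropped,
# locally reverting to the shallow water equations. Values are invented.
def hybrid_rhs(swe_terms, dispersive_terms, breaking_flags):
    """Per-cell right-hand side with dispersion suppressed in breaking cells."""
    return [s + (0.0 if breaking else d)
            for s, d, breaking in zip(swe_terms, dispersive_terms, breaking_flags)]

# two cells: the second is flagged as breaking, so only the SWE term remains
out = hybrid_rhs([1.0, 1.0], [0.2, 0.2], [False, True])
print(out)
```

    The mesh sensitivity the paper reports stems from this switch: the dissipation in flagged cells is supplied by the numerical scheme's shock capturing, which depends on the cell size.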

  20. Numerical computation of complex multi-body Navier-Stokes flows with applications for the integrated Space Shuttle launch vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1993-01-01

    An enhanced grid system for the Space Shuttle Orbiter was built by integrating CAD definitions from several sources and then generating the surface and volume grids. The new grid system contains geometric components not modeled previously, plus significant enhancements to geometry that had been modeled in the old grid system. The new orbiter grids were then integrated with new grids for the rest of the launch vehicle. Enhancements were made to the hyperbolic grid generator HYPGEN, and new tools were developed for grid projection, manipulation, and modification; Cartesian box grid and far-field grid generation; and post-processing of flow solver data.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kodavasal, Janardhan; Kolodziej, Christopher P.; Ciatti, Stephen A.

    Gasoline compression ignition (GCI) is a low temperature combustion (LTC) concept that has been gaining increasing interest over recent years owing to its potential to achieve diesel-like thermal efficiencies with significantly reduced engine-out nitrogen oxides (NOx) and soot emissions compared to diesel engines. In this work, closed-cycle computational fluid dynamics (CFD) simulations of this combustion mode are performed using a sector mesh in an effort to understand the effects of model settings on simulation results. One goal of this work is to provide recommendations for grid resolution, combustion model, chemical kinetic mechanism, and turbulence model to accurately capture experimental combustion characteristics. Grid resolutions ranging from 0.7 mm to 0.1 mm minimum cell sizes were evaluated in conjunction with both Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) based turbulence models. Solution of chemical kinetics using the multi-zone approach is evaluated against the detailed approach of solving chemistry in every cell. The relatively small primary reference fuel (PRF) mechanism (48 species) used in this study is also evaluated against a larger 312-species gasoline mechanism. Based on these studies, the following model settings are chosen keeping in mind both accuracy and computational cost: 0.175 mm minimum cell size grid, RANS turbulence model, 48-species PRF mechanism, and multi-zone chemistry solution with bin limits of 5 K in temperature and 0.05 in equivalence ratio. With these settings, the performance of the CFD model is evaluated against experimental results corresponding to a low load start of injection (SOI) timing sweep. The model is then exercised to investigate the effect of SOI on combustion phasing with constant intake valve closing (IVC) conditions and fueling over a range of SOI timings to isolate the impact of SOI on charge preparation and ignition.
    Simulation results indicate that there is an optimum SOI timing, in this case -30° aTDC (after top dead center), which results in the most stable combustion. Advancing injection with respect to this point leads to significant fuel mass burning in the colder squish region, leading to retarded phasing and ultimately misfire for SOI timings earlier than -42° aTDC. On the other hand, retarding injection beyond this optimum timing results in reduced residence time available for gasoline ignition kinetics, and also leads to retarded phasing, with misfire at SOI timings later than -15° aTDC.
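    The multi-zone chemistry binning in the chosen settings (bin limits of 5 K in temperature and 0.05 in equivalence ratio) can be sketched as simple floor-division binning of cells, with chemistry then solved once per zone rather than once per cell. The cell values below are invented:

```python
# Sketch (not the actual CFD code): grouping computational cells into
# multi-zone chemistry bins of 5 K in temperature and 0.05 in equivalence
# ratio, the bin limits quoted in the abstract. Cell values are invented.
from collections import defaultdict

cells = [  # (temperature K, equivalence ratio) -- illustrative
    (901.0, 0.31), (903.5, 0.33), (906.0, 0.33), (950.2, 0.62),
]

def bin_cells(cells, dT=5.0, dphi=0.05):
    """Assign each cell index to a (temperature bin, phi bin) zone."""
    zones = defaultdict(list)
    for i, (T, phi) in enumerate(cells):
        zones[(int(T // dT), int(phi // dphi))].append(i)
    return dict(zones)

zones = bin_cells(cells)
print(zones)
```

    Cells 0 and 1 fall within the same 5 K and 0.05 bins and so would share one kinetics solve; the cost saving grows with the number of cells per zone.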

  2. A NASTRAN model of a large flexible swing-wing bomber. Volume 3: NASTRAN model development-wing structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the wing structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The wing substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  3. A NASTRAN model of a large flexible swing-wing bomber. Volume 2: NASTRAN model development-horizontal stabilizer, vertical stabilizer and nacelle structures

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.; Tisher, E. D.

    1982-01-01

    The NASTRAN model plans for the horizontal stabilizer, vertical stabilizer, and nacelle structure were expanded in detail to generate the NASTRAN model for each of these substructures. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. Each substructure model was thoroughly checked out for continuity, connectivity, and constraints. These substructures were processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail models. Finally, a demonstration and validation processing of these substructures was accomplished using the NASTRAN finite element program installed at NASA/DFRC facility.

  4. A NASTRAN model of a large flexible swing-wing bomber. Volume 4: NASTRAN model development-fuselage structure

    NASA Technical Reports Server (NTRS)

    Mock, W. D.; Latham, R. A.

    1982-01-01

    The NASTRAN model plan for the fuselage structure was expanded in detail to generate the NASTRAN model for this substructure. The grid point coordinates were coded for each element. The material properties and sizing data for each element were specified. The fuselage substructure model was thoroughly checked out for continuity, connectivity, and constraints. This substructure was processed for structural influence coefficients (SIC) point loadings and the deflections were compared to those computed for the aircraft detail model. Finally, a demonstration and validation processing of this substructure was accomplished using the NASTRAN finite element program. The bulk data deck, stiffness matrices, and SIC output data were delivered.

  5. Parameterized Finite Element Modeling and Buckling Analysis of Six Typical Composite Grid Cylindrical Shells

    NASA Astrophysics Data System (ADS)

    Lai, Changliang; Wang, Junbiao; Liu, Chuang

    2014-10-01

    Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion, and transverse bending using finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original natural twisted geometry and orients the cross-sections of beam elements exactly. The approach is parameterized and coded in the Patran Command Language (PCL). The FE modeling demonstrations indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared, providing an understanding of composite grid cylindrical shells that is useful in the preliminary design of such structures.

  6. Phasor Domain Steady-State Modeling and Design of the DC–DC Modular Multilevel Converter

    DOE PAGES

    Yang, Heng; Qin, Jiangchao; Debnath, Suman; ...

    2016-01-06

    The DC-DC Modular Multilevel Converter (MMC), which originated from the AC-DC MMC, is an attractive converter topology for the interconnection of medium-/high-voltage DC grids. This paper presents design considerations for the DC-DC MMC to achieve high efficiency and reduced component sizes. A steady-state mathematical model of the DC-DC MMC in the phasor domain is developed. Based on the developed model, a design approach is proposed to size the components and to select the operating frequency of the converter so as to satisfy a set of design constraints while achieving high efficiency. The design approach includes sizing of the arm inductor, Sub-Module (SM) capacitor, and phase filtering inductor, along with the selection of the AC operating frequency of the converter. The accuracy of the developed model and the effectiveness of the design approach are validated through simulation studies in the PSCAD/EMTDC software environment. The analysis and developments of this paper can be used as a guideline for the design of the DC-DC MMC.

  7. Sensitivities of Summertime Mesoscale Circulations in the Coastal Carolinas to Modifications of the Kain-Fritsch Cumulus Parameterization.

    PubMed

    Sims, Aaron P; Alapaty, Kiran; Raman, Sethu

    2017-01-01

    Two mesoscale circulations, the Sandhills circulation and the sea breeze, influence the initiation of deep convection over the Sandhills and the coast in the Carolinas during the summer months. The interaction of these two circulations causes additional convection in this coastal region. Accurate representation of mesoscale convection is difficult, as numerical models have difficulty predicting the timing, amount, and location of precipitation. To address this issue, the authors have incorporated modifications into the Kain-Fritsch (KF) convective parameterization scheme and evaluated these mesoscale interactions using a high-resolution numerical model. The modifications include changes to the subgrid-scale cloud formulation, the convective turnover time scale, and the formulation of the updraft entrainment rates. A grid-scaling adjustment parameter modulates the impact of the KF scheme as a function of the horizontal grid spacing used in a simulation. Results indicate that the impact of this modified cumulus parameterization scheme is more effective on domains with coarser grid sizes. Other results include a decrease in surface and near-surface temperatures in areas of deep convection (due to the inclusion of the effects of subgrid-scale clouds on the radiation), improvement in the timing of convection, and an increase in the strength of deep convection.
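    As a hedged illustration of what a grid-scaling adjustment parameter can look like, the sketch below ramps a scheme's influence down as the horizontal grid spacing approaches convection-resolving scales. The functional form and the 25 km reference scale are assumptions for illustration, not the published formulation:

```python
# Illustrative sketch only (not the published Kain-Fritsch modification):
# a grid-scaling factor that is near 1 at coarse spacing and tends to 0 as
# the grid spacing dx becomes much finer than an assumed 25 km reference.
import math

def grid_scale_factor(dx_km, dx_ref_km=25.0):
    """Weight applied to the cumulus scheme's tendencies; assumed form."""
    return 1.0 - math.exp(-dx_km / dx_ref_km)

for dx in (1.0, 9.0, 27.0):
    print(dx, round(grid_scale_factor(dx), 3))
```

    Any such factor simply multiplies the parameterized convective tendencies, letting resolved convection take over on fine grids, which is consistent with the abstract's finding that the scheme matters most on coarse domains.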

  8. Three-dimensional wideband electromagnetic modeling on massively parallel computers

    NASA Astrophysics Data System (ADS)

    Alumbaugh, David L.; Newman, Gregory A.; Prevost, Lydie; Shadid, John N.

    1996-01-01

    A method is presented for modeling the wideband, frequency-domain electromagnetic (EM) response of a three-dimensional (3-D) earth to dipole sources operating at frequencies where EM diffusion dominates the response (less than 100 kHz) up into the range where propagation dominates (greater than 10 MHz). The scheme employs the modified form of the vector Helmholtz equation for the scattered electric fields to model variations in electrical conductivity, dielectric permittivity, and magnetic permeability. The use of the modified form of the Helmholtz equation allows perfectly matched layer (PML) absorbing boundary conditions to be employed through the use of complex grid stretching. Applying the finite difference operator to the modified Helmholtz equation produces a linear system of equations whose matrix is sparse and complex symmetric. The solution is obtained using either the biconjugate gradient (BICG) or quasi-minimum residual (QMR) method with preconditioning; in general we employ the QMR method with Jacobi scaling preconditioning for stability. In order to simulate larger, more realistic models than was previously possible, the scheme has been modified to run on massively parallel (MP) computer architectures. Execution on the 1840-processor Intel Paragon has indicated a maximum model size of 280 × 260 × 200 cells with a maximum flop rate of 14.7 Gflops. Three different geologic models are simulated to demonstrate the use of the code for frequencies ranging from 100 Hz to 30 MHz and for different source types and polarizations. The simulations show that the scheme correctly models the air-earth interface and the jump in the electric and magnetic fields normal to discontinuities. For frequencies greater than 10 MHz, complex grid stretching must be employed to incorporate absorbing boundaries, while below this, normal (real) grid stretching can be employed.

  9. Scale Dependence of Statistics of Spatially Averaged Rain Rate Seen in TOGA COARE Comparison with Predictions from a Stochastic Model

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, T. L.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    A characteristic feature of rainfall statistics is that they in general depend on the space and time scales over which rain data are averaged. As part of an earlier effort to determine the sampling error of satellite rain averages, a space-time model of rainfall statistics was developed to describe the statistics of gridded rain observed in GATE. The model allows one to compute the second-moment statistics of space- and time-averaged rain rate, which can be fitted to satellite or rain gauge data to determine the four model parameters appearing in the precipitation spectrum: an overall strength parameter; a characteristic length separating the long and short wavelength regimes; a characteristic relaxation time for the decay of the autocorrelation of the instantaneous local rain rate; and a 'fractal' power-law exponent. For area-averaged instantaneous rain rate, this exponent governs the power-law dependence of these statistics on the averaging length scale L predicted by the model in the limit of small L. In particular, the variance of rain rate averaged over an L × L area exhibits a power-law singularity as L → 0. In the present work, the model is used to investigate how the statistics of area-averaged rain rate over the tropical Western Pacific, measured with shipborne radar during TOGA COARE (Tropical Ocean Global Atmosphere Coupled Ocean-Atmosphere Response Experiment) and gridded on a 2 km grid, depend on the size of the spatial averaging scale. Good agreement is found between the data and predictions from the model over a wide range of averaging length scales.
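    The scale dependence described, the variance of area-averaged rain rate falling off with the averaging length L, can be checked numerically by block-averaging a gridded field at several L. For the uncorrelated synthetic field below the variance decays as 1/L²; a spatially correlated rain field would show a different ('fractal') exponent:

```python
# Sketch: variance of L x L block averages of a gridded field, illustrating
# the scale dependence discussed in the abstract. The field is synthetic
# white noise, for which the variance of block means falls off as 1/L**2.
import random

random.seed(1)
N = 64
field = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]

def block_variance(field, L):
    """Variance of the field averaged over non-overlapping L x L blocks."""
    n = len(field)
    means = []
    for bi in range(0, n, L):
        for bj in range(0, n, L):
            s = sum(field[i][j]
                    for i in range(bi, bi + L) for j in range(bj, bj + L))
            means.append(s / (L * L))
    mu = sum(means) / len(means)
    return sum((m - mu) ** 2 for m in means) / len(means)

for L in (1, 2, 4, 8):
    print(L, block_variance(field, L))
```

    Fitting the slope of log-variance against log-L is the kind of diagnostic used to extract the power-law exponent from gridded radar data.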

  10. Solution adaptive grids applied to low Reynolds number flow

    NASA Astrophysics Data System (ADS)

    de With, G.; Holdø, A. E.; Huld, T. A.

    2003-08-01

    A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. Secondly, the uniform asymmetric flow structures are ideal for validating the mesh structures produced by mesh refinement and, consequently, the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two parameters, Cm and Cstr, which determine the order of each term. By altering the order of either one of these terms, the refinement behaviour can be modified.
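    The refinement criterion described, a product of the rate of strain and the cell size with exponents controlling the order of each term, can be sketched as follows (the names, values, and threshold are illustrative, not the paper's):

```python
# Sketch of the described refinement indicator: (strain rate)**Cstr times
# (cell size)**Cm, with a threshold selecting cells to refine. Exponents,
# threshold, and cell values are illustrative assumptions.
def refinement_indicator(strain_rate, cell_size, c_str=1.0, c_m=1.0):
    """Product-form indicator; raising c_str or c_m changes each term's order."""
    return (strain_rate ** c_str) * (cell_size ** c_m)

def cells_to_refine(cells, threshold, **exps):
    """Indices of cells whose indicator exceeds the threshold."""
    return [i for i, (s, h) in enumerate(cells)
            if refinement_indicator(s, h, **exps) > threshold]

cells = [(0.5, 0.1), (5.0, 0.1), (5.0, 0.01)]  # (strain rate, cell size)
print(cells_to_refine(cells, threshold=0.2))
```

    Weighting by cell size means an already-refined cell (third entry) is not flagged again even at high strain rate, which is what drives the adaptation toward a converged mesh.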

  11. Method and apparatus for jetting, manufacturing and attaching uniform solder balls

    DOEpatents

    Yost, F.G.; Frear, D.R.; Schmale, D.T.

    1999-01-05

    An apparatus and process are disclosed for jetting molten solder in the form of balls directly onto all the metallized interconnect lands of a ball grid array package in one step, with no solder paste required. Molten solder is jetted out of a grid of holes using a piston attached to a piezoelectric crystal. When voltage is applied to the crystal, it expands, forcing the piston to extrude a desired volume of solder through holes in the aperture plate. When the voltage is decreased, the piston reverses motion, creating an instability in the molten solder at the aperture plate surface and thereby forming spherical solder balls that fall onto a metallized substrate. The molten solder balls land on the substrate and form a metallurgical bond with the metallized lands. The size of the solder balls is determined by a combination of the size of the holes in the aperture plate, the duration of the piston pulse, and the displacement of the piston. The layout of the balls is dictated by the location of the holes in the grid. Changes in ball size and layout can be easily accomplished by changing the grid plate. This invention also allows simple preparation of uniform balls for subsequent supply to BGA users. 7 figs.

  12. Method and apparatus for jetting, manufacturing and attaching uniform solder balls

    DOEpatents

    Yost, Frederick G.; Frear, Darrel R.; Schmale, David T.

    1999-01-01

    An apparatus and process for jetting molten solder in the form of balls directly onto all the metallized interconnect lands of a ball grid array package in one step, with no solder paste required. Molten solder is jetted out of a grid of holes using a piston attached to a piezoelectric crystal. When voltage is applied to the crystal, it expands, forcing the piston to extrude a desired volume of solder through holes in the aperture plate. When the voltage is decreased, the piston reverses motion, creating an instability in the molten solder at the aperture plate surface and thereby forming spherical solder balls that fall onto a metallized substrate. The molten solder balls land on the substrate and form a metallurgical bond with the metallized lands. The size of the solder balls is determined by a combination of the size of the holes in the aperture plate, the duration of the piston pulse, and the displacement of the piston. The layout of the balls is dictated by the location of the holes in the grid. Changes in ball size and layout can be easily accomplished by changing the grid plate. This invention also allows simple preparation of uniform balls for subsequent supply to BGA users.

  13. Orientation domains: A mobile grid clustering algorithm with spherical corrections

    NASA Astrophysics Data System (ADS)

    Mencos, Joana; Gratacós, Oscar; Farré, Mercè; Escalante, Joan; Arbués, Pau; Muñoz, Josep Anton

    2012-12-01

    An algorithm has been designed and tested that was devised as a tool to assist the analysis of geological structures solely from orientation data. More specifically, the algorithm is intended for the analysis of geological structures that can be approached as planar and piecewise features, like many folded strata. Input orientation data are expressed as pairs of angles (azimuth and dip). The algorithm starts by considering the data in Cartesian coordinates. This is followed by a search for an initial clustering solution, which is achieved by comparing the results output from the systematic shift of a regular rigid grid over the data. This initial solution is optimal (achieves minimum square error) once the grid size and the shift increment are fixed. Finally, the algorithm corrects for the variable spread that is generally expected from this data type using a reshaped, non-rigid grid. The algorithm is size-oriented, which implies the application of conditions on cluster size throughout the process, in contrast to the density-oriented algorithms also widely used when dealing with spatial data. Results are derived in a few seconds and, when tested on synthetic examples, were found to be consistent and reliable. This makes the algorithm a valuable alternative to the time-consuming traditional approaches available to geologists.
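    The first step the algorithm performs, expressing (azimuth, dip) pairs as Cartesian vectors, can be sketched as below. The sign and axis conventions are assumptions for illustration, since conventions vary between tools:

```python
# Sketch: converting an (azimuth, dip) orientation pair to a Cartesian unit
# vector, the first step before grid-based clustering. Convention assumed
# here: azimuth clockwise from north, dip down from horizontal, z positive up.
import math

def orientation_to_cartesian(azimuth_deg, dip_deg):
    az, dip = math.radians(azimuth_deg), math.radians(dip_deg)
    x = math.cos(dip) * math.sin(az)   # east component
    y = math.cos(dip) * math.cos(az)   # north component
    z = -math.sin(dip)                 # downward dip gives negative z
    return (x, y, z)

v = orientation_to_cartesian(90.0, 0.0)   # horizontal, pointing due east
w = orientation_to_cartesian(0.0, 90.0)   # vertical, pointing straight down
print(v, w)
```

    Once on the unit sphere, the rigid-grid pass amounts to binning these vectors into equal cells and scoring each grid shift by the within-cluster square error, as the abstract describes.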

  14. Moving Computational Domain Method and Its Application to Flow Around a High-Speed Car Passing Through a Hairpin Curve

    NASA Astrophysics Data System (ADS)

    Watanabe, Koji; Matsuno, Kenichi

    This paper presents a new method for simulating flows driven by a body traveling with no restriction on its motion and no limit on the region size. In the present method, named the 'Moving Computational Domain Method', the whole computational domain, including the bodies inside it, moves in physical space without any limit on region size. Since the whole grid of the computational domain moves according to the movement of the body, the flow solver must be constructed on a moving grid system, and it is important for the flow solver to satisfy the physical and geometric conservation laws simultaneously on the moving grid. For this purpose, the Moving-Grid Finite-Volume Method is employed as the flow solver. The present Moving Computational Domain Method makes it possible to simulate flow driven by any kind of body motion in a region of any size while satisfying the physical and geometric conservation laws simultaneously. In this paper, the method is applied to the flow around a high-speed car passing through a hairpin curve. The distinctive flow field generated by the car at the hairpin curve is demonstrated in detail. The results show the promising features of the method.

  15. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model, with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to a sufficiently refined regular grid, leading to (far) more efficient optimization or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual and expert-based, the other automated and image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.

  16. Numerical Study of a Convective Turbulence Encounter

    NASA Technical Reports Server (NTRS)

    Proctor, Fred H.; Hamilton, David W.; Bowles, Roland L.

    2002-01-01

    A numerical simulation of a convective turbulence event is investigated and compared with observational data. The specific case was encountered during one of NASA's flight tests and was characterized by severe turbulence. The event was associated with overshooting convective turrets that contained low to moderate radar reflectivity. Model comparisons with observations are quite favorable. Turbulence hazard metrics are proposed and applied to the numerical data set. Issues such as adequate grid size are examined.

  17. The three-dimensional Multi-Block Advanced Grid Generation System (3DMAGGS)

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, Kenneth J.

    1993-01-01

    As the size and complexity of three-dimensional volume grids increase, there is a growing need for fast and efficient 3D volumetric elliptic grid solvers. Present-day solvers are limited by computational speed and do not combine capabilities such as interior volume grid clustering control, viscous grid clustering at the wall of a configuration, truncation error limiters, and convergence optimization in one code. A new volume grid generator, 3DMAGGS (Three-Dimensional Multi-Block Advanced Grid Generation System), based on the 3DGRAPE code, has evolved to meet these needs. This is a manual for the usage of 3DMAGGS and contains five sections, covering the motivations and usage, a GRIDGEN interface, a grid quality analysis tool, a sample case for verifying correct operation of the code, and a comparison to both 3DGRAPE and GRIDGEN3D. Since 3DMAGGS was derived from 3DGRAPE, this technical memorandum should be used in conjunction with the 3DGRAPE manual (NASA TM-102224).

  18. Shearing-induced asymmetry in entorhinal grid cells.

    PubMed

    Stensola, Tor; Stensola, Hanne; Moser, May-Britt; Moser, Edvard I

    2015-02-12

    Grid cells are neurons with periodic spatial receptive fields (grids) that tile two-dimensional space in a hexagonal pattern. To provide useful information about location, grids must be stably anchored to an external reference frame. The mechanisms underlying this anchoring process have remained elusive. Here we show in differently sized familiar square enclosures that the axes of the grids are offset from the walls by an angle that minimizes symmetry with the borders of the environment. This rotational offset is invariably accompanied by an elliptic distortion of the grid pattern. Reversing the ellipticity analytically by a shearing transformation removes the angular offset. This, together with the near-absence of rotation in novel environments, suggests that the rotation emerges through non-coaxial strain as a function of experience. The systematic relationship between rotation and distortion of the grid pattern points to shear forces arising from anchoring to specific geometric reference points as key elements of the mechanism for alignment of grid patterns to the external world.

  19. Verification of the Icarus Material Response Tool

    NASA Technical Reports Server (NTRS)

    Schroeder, Olivia; Palmer, Grant; Stern, Eric; Schulz, Joseph; Muppidi, Suman; Martin, Alexandre

    2017-01-01

    Due to the complex physics encountered during reentry, material response solvers are used for two main purposes: to improve the understanding of the physical phenomena, and to design and size thermal protection systems (TPS). Icarus is a three-dimensional, unstructured material response tool that is intended to be used for design while maintaining the flexibility to easily implement physical models as needed. Because TPS selection and sizing is critical, it is of the utmost importance that the design tools be extensively verified and validated before their use. Verification tests aim to ensure that the numerical schemes and equations are implemented correctly, by comparison to analytical solutions and grid convergence tests.

  20. Cloud cover estimation: Use of GOES imagery in development of cloud cover data base for insolation assessment

    NASA Technical Reports Server (NTRS)

    Huning, J. R.; Logan, T. L.; Smith, J. H.

    1982-01-01

    The potential of using digital satellite data to establish a cloud cover data base for the United States, one that would provide detailed information on the temporal and spatial variability of cloud development, is studied. Key elements include: (1) interfacing GOES data from the University of Wisconsin Meteorological Data Facility with the Jet Propulsion Laboratory's VICAR image processing system and IBIS geographic information system; (2) creation of a registered multitemporal GOES data base; (3) development of a simple normalization model to compensate for sun angle; (4) creation of a variable-size georeference grid that provides detailed cloud information in selected areas and summarized information in other areas; and (5) development of a cloud/shadow model which details the percentage of each grid cell that is cloud and shadow covered, and the percentage of cloud or shadow opacity. In addition, comparison of model calculations of insolation with measured values at selected test sites was accomplished, as well as development of preliminary requirements for a large-scale data base of cloud cover statistics.
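
    The grid-cell cloud statistics in element (5) amount to a block reduction over a binary cloud mask. A minimal Python sketch, assuming fixed-size cells and a 0/1 mask (the report's georeference grid uses variable cell sizes, and the function name is hypothetical):

```python
def cell_cloud_fraction(mask, cell_rows, cell_cols):
    """Percentage of cloud-flagged pixels in each cell of a binary cloud
    mask (a list of rows of 0/1). Cells tile the image in fixed blocks; a
    simplified stand-in for a variable-size georeference grid."""
    n_rows, n_cols = len(mask), len(mask[0])
    out = []
    for r0 in range(0, n_rows, cell_rows):
        row = []
        for c0 in range(0, n_cols, cell_cols):
            pixels = [mask[r][c]
                      for r in range(r0, min(r0 + cell_rows, n_rows))
                      for c in range(c0, min(c0 + cell_cols, n_cols))]
            row.append(100.0 * sum(pixels) / len(pixels))
        out.append(row)
    return out

# 4x4 mask reduced to a 2x2 grid of cloud-cover percentages
mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 1]]
fractions = cell_cloud_fraction(mask, 2, 2)  # [[100.0, 0.0], [0.0, 75.0]]
```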

  1. High Fidelity BWR Fuel Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Su Jong

    This report describes the Consortium for Advanced Simulation of Light Water Reactors (CASL) work conducted for completion of the Thermal Hydraulics Methods (THM) Level 3 milestone THM.CFD.P13.03: High Fidelity BWR Fuel Simulation. High-fidelity computational fluid dynamics (CFD) simulation of a Boiling Water Reactor (BWR) was conducted to investigate the applicability and robustness of BWR closures. As a preliminary study, a CFD model with simplified Ferrule spacer grid geometry of the NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) benchmark was implemented, and the performance of a multiphase segregated solver with baseline boiling closures was evaluated. Although the mean values of void fraction and exit quality of the CFD result for BFBT case 4101-61 agreed with experimental data, the local void distribution was not predicted accurately. Mesh quality was one of the critical factors in obtaining a converged result. The stability and robustness of the simulation were mainly affected by the mesh quality and the combination of BWR closure models. In addition, CFD modeling of the fully detailed spacer grid geometry with mixing vanes is necessary for improving the accuracy of the CFD simulation.

  2. Overload cascading failure on complex networks with heterogeneous load redistribution

    NASA Astrophysics Data System (ADS)

    Hou, Yueyi; Xing, Xiaoyun; Li, Menghui; Zeng, An; Wang, Yougui

    2017-09-01

    Many real systems, including the Internet, power grids and financial networks, experience rare but large overload cascading failures triggered by small initial shocks. Many models on complex networks have been developed to investigate this phenomenon. Most of these models are based on the load redistribution process and assume that the load on a failed node shifts to nearby nodes in the network either evenly or according to the load distribution rule before the cascade. Inspired by the fact that real power grids tend to place the excess load on the nodes with high remaining capacities, we study a heterogeneous load redistribution mechanism in a simplified sandpile model in this paper. We find that weak heterogeneity in load redistribution can effectively mitigate the cascade, while strong heterogeneity may even enlarge the size of the final failure. With a parameter θ to control the degree of redistribution heterogeneity, we identify a rather robust optimal θ∗ = 1. Finally, we find that θ∗ tends to shift to a larger value if the initial sand distribution is homogeneous.
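
    The heterogeneous redistribution rule can be sketched as weighted sharing of a failed node's load, with the exponent θ controlling how strongly neighbours with more remaining capacity are favoured (θ = 0 is uniform sharing). A minimal Python sketch with assumed names and a toy topology, not the paper's exact model:

```python
def redistribute(load, capacity, neighbors, failed, theta):
    """Share the load of a failing node among its surviving neighbours,
    weighted by (remaining capacity)**theta."""
    live = [n for n in neighbors[failed] if load[n] is not None]
    if not live:
        return
    weights = [max(capacity[n] - load[n], 0.0) ** theta for n in live]
    total = sum(weights)
    if total == 0.0:  # no headroom anywhere: fall back to uniform sharing
        weights, total = [1.0] * len(live), float(len(live))
    for n, w in zip(live, weights):
        load[n] += load[failed] * w / total

def cascade(load, capacity, neighbors, seed, theta):
    """Fail `seed`, propagate overloads, return the number of failed nodes."""
    queue, n_failed = [seed], 0
    while queue:
        node = queue.pop()
        if load[node] is None:                      # already failed
            continue
        if n_failed > 0 and load[node] <= capacity[node]:
            continue                                # node survives for now
        redistribute(load, capacity, neighbors, node, theta)
        load[node] = None                           # mark as failed
        n_failed += 1
        queue.extend(neighbors[node])
    return n_failed

# Toy 3-node chain: failing node 0 overloads node 1, which overloads node 2
neighbors = {0: [1], 1: [0, 2], 2: [1]}
load, capacity = [1.0, 0.9, 0.5], [1.0, 1.0, 1.0]
n_failed = cascade(load, capacity, neighbors, seed=0, theta=1.0)  # 3
```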

  3. The GEON Integrated Data Viewer (IDV) for Exploration of Geoscience Data With Visualizations

    NASA Astrophysics Data System (ADS)

    Wier, S.; Meertens, C.

    2008-12-01

    The GEON Integrated Data Viewer (GEON IDV) is a fully interactive, research-level, true 3D and 4D (latitude, longitude, depth or altitude, and time) tool to display and explore almost any data located on the Earth, inside the Earth, or above the Earth's surface. Although the GEON IDV makes impressive 3D displays, it is primarily designed for data exploration and analysis. The GEON IDV is designed to meet the challenge of investigating complex, multi-variate, time-varying, three-dimensional geoscience questions anywhere on Earth. The GEON IDV supports simultaneous displays of data sets of differing sources and data type or character, with complete control over map projection and area, time animation, vertical scale, and color schemes. The GEON IDV displays gridded and point data, images, GIS shape files, and other types of data, from files, HTTP servers, OPeNDAP catalogs, RSS feeds, and web map servers. GEON IDV displays include images and geology maps on 3D topographic relief surfaces, vertical geologic cross sections in their correct depth extent, tectonic plate boundaries and plate motion vectors including time animation, GPS velocity vectors and error ellipses, GPS time series at a station, earthquake locations in depth optionally colored and sized by magnitude, earthquake focal mechanisms 'beachballs,' 2D grids of gravity or magnetic anomalies, 2D grids of crustal strain imagery, seismic raypaths, seismic tomography model 3D grids as vertical and horizontal cross sections and isosurfaces, 3D grids of crust and mantle structure for any property, and time animation of 3D grids of mantle convection models as cross sections and isosurfaces. The IDV can also show tracks of aircraft, ships, drifting buoys and marine animals, colored observed values, borehole soundings, and vertical probes of 3D grids. The GEON IDV can drive a GeoWall or other 3D stereo system. IDV output files include imagery, movies, and KML files for Google Earth. The IDV has built-in analysis capabilities with user-created Python language routines, and with automatic conversion of data sources with differing units and grid structures. The IDV can be scripted to create display images on user request or automatically on data arrival, offering the use of the IDV as a back end to support image generation in a data portal. Examples of GEON IDV use in seismology, geodesy, geodynamics and other fields will be shown.

  4. A numerical analysis of phase-change problems including natural convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Faghri, A.

    1990-08-01

    Fixed-grid solutions for phase-change problems remove the need to satisfy conditions at the phase-change front and can be easily extended to multidimensional problems. The two most important and widely used methods are enthalpy methods and temperature-based equivalent heat capacity methods. Both methods have advantages and disadvantages. Enthalpy methods (Shamsundar and Sparrow, 1975; Voller and Prakash, 1987; Cao et al., 1989) are flexible and can handle phase-change problems occurring both at a single temperature and over a temperature range. The drawback of this approach is that although the predicted temperature distributions and melting fronts are reasonable, the predicted time history of the temperature at a typical grid point may have some oscillations. The temperature-based fixed-grid methods (Morgan, 1981; Hsiao and Chung, 1984) have no such time-history problems and are more convenient for conjugate problems involving an adjacent wall, but they have to deal with the severe nonlinearity of the governing equations when the phase-change temperature range is small. In this paper, a new temperature-based fixed-grid formulation is proposed, and the reason that the original equivalent heat capacity model is subject to such restrictions on the time step, mesh size, and phase-change temperature range is also discussed.

  5. Thermal and chemical convection in planetary mantles

    NASA Technical Reports Server (NTRS)

    Dupeyrat, L.; Sotin, C.; Parmentier, E. M.

    1995-01-01

    Melting of the upper mantle and extraction of melt result in the formation of a less dense depleted mantle. This paper describes a series of two-dimensional models that investigate the effects of chemical buoyancy induced by these density variations. A tracer-particle method has been set up to follow as closely as possible the chemical state of the mantle and to model the chemical buoyancy force at each grid point. Each series of models provides the evolution with time of magma production, crustal thickness, surface heat flux, and the thermal and chemical state of the mantle. First, models that do not take into account the displacement of plates at the surface of Earth demonstrate that chemical buoyancy has an important effect on the geometry of convection. Models then include horizontal motion of plates 5000 km wide, with recycling of crust taken into account. For a sufficiently high plate velocity, which depends on the thermal Rayleigh number, the cell size is strongly coupled with the plate size. Plate motion forces chemically buoyant material to sink into the mantle. The positive chemical buoyancy then yields upwelling as depleted mantle reaches the interface between the upper and the lower mantle. This process is very efficient in mixing the depleted and undepleted mantle at the scale of the grid spacing, since these zones of upwelling disrupt the large convective flow. At low spreading rates, zones of upwelling develop quickly, melting occurs, and the model predicts intraplate volcanism by melting of subducted crust. At fast spreading rates, depleted mantle also favors the formation of these zones of upwelling, but they are not strong enough to yield partial melting. Their rapid displacement toward the ridge contributes to faster large-scale homogenization.

  6. Construction of a 3-arcsecond digital elevation model for the Gulf of Maine

    USGS Publications Warehouse

    Twomey, Erin R.; Signell, Richard P.

    2013-01-01

    A system-wide description of the seafloor topography is a basic requirement for most coastal oceanographic studies. The necessary detail of the topography obviously varies with application, but for many uses, a nominal resolution of roughly 100 m is sufficient. Creating a digital bathymetric grid with this level of resolution can be a complex procedure due to a multiplicity of data sources, data coverages, datums and interpolation procedures. This report documents the procedures used to construct a 3-arcsecond (approximately 90-meter grid cell size) digital elevation model for the Gulf of Maine (71°30' to 63° W, 39°30' to 46° N). We obtained elevation and bathymetric data from a variety of American and Canadian sources, converted all data to the North American Datum of 1983 for horizontal coordinates and the North American Vertical Datum of 1988 for vertical coordinates, used a combination of automatic and manual techniques for quality control, and interpolated gaps using a surface-fitting routine.
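
    The quoted equivalence of 3 arcseconds to roughly 90 meters follows from simple spherical geometry; the east-west extent of a cell additionally shrinks with the cosine of latitude. A small Python sketch under a spherical-Earth approximation (the function name and mean radius are illustrative, not from the report):

```python
import math

def arcsec_cell_size(arcsec, lat_deg, radius_m=6_371_000.0):
    """Approximate ground extent of a geographic grid cell of `arcsec`
    arcseconds on a spherical Earth, returned as (north-south, east-west)
    in metres. The east-west extent shrinks with cos(latitude)."""
    rad = math.radians(arcsec / 3600.0)
    return radius_m * rad, radius_m * rad * math.cos(math.radians(lat_deg))

ns, ew = arcsec_cell_size(3.0, 43.0)  # mid-Gulf of Maine latitude
# ns is roughly 93 m north-south; ew is roughly 68 m east-west
```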

  7. A finite difference method for a coupled model of wave propagation in poroelastic materials.

    PubMed

    Zhang, Yang; Song, Limin; Deffenbaugh, Max; Toksöz, M Nafi

    2010-05-01

    A computational method for time-domain multi-physics simulation of wave propagation in a poroelastic medium is presented. The medium is composed of an elastic matrix saturated with a Newtonian fluid, and the method operates on a digital representation of the medium where a distinct material phase and properties are specified at each volume cell. The dynamic response to an acoustic excitation is modeled mathematically with a coupled system of equations: the elastic wave equation in the solid matrix and the linearized Navier-Stokes equation in the fluid. Implementation of the solution is simplified by introducing a common numerical form for both solid and fluid cells and using a rotated staggered grid, which allows stable solutions without explicitly handling the fluid-solid boundary conditions. A stability analysis is presented which can be used to select gridding and time step size as a function of material properties. The numerical results are shown to agree with the analytical solution for an idealized porous medium of periodically alternating solid and fluid layers.
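
    Stability and dispersion constraints of the kind analyzed here are commonly expressed as bounds linking grid spacing, time step, and wave speeds. The Python sketch below shows the generic textbook forms, a CFL-style bound and a points-per-wavelength rule, not the specific criteria derived in the paper; all names and numbers are illustrative:

```python
import math

def max_stable_dt(dx, vp_max, ndim=3, courant=1.0):
    """CFL-style bound for an explicit staggered-grid scheme:
    dt <= courant * dx / (vp_max * sqrt(ndim)). The admissible Courant
    number depends on the stencil; 1.0 is the classic second-order value."""
    return courant * dx / (vp_max * math.sqrt(ndim))

def grid_spacing_for_dispersion(vs_min, f_max, points_per_wavelength=10):
    """Choose dx so the shortest wavelength (slowest wave speed, highest
    frequency) is sampled by `points_per_wavelength` grid points."""
    return vs_min / (f_max * points_per_wavelength)

dx = grid_spacing_for_dispersion(vs_min=1000.0, f_max=50.0)  # 2.0 m
dt = max_stable_dt(dx=dx, vp_max=4000.0)                     # ~2.9e-4 s
```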

  8. Reprint of “Performance analysis of a model-sized superconducting DC transmission system based VSC-HVDC transmission technologies using RTDS”

    NASA Astrophysics Data System (ADS)

    Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun

    2013-01-01

    The combination of a high-temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) creates a new option for transmitting power with multiple collection and distribution points for long-distance and bulk power transmission. It offers greater advantages compared with HVAC or conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources in existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on VSC-HVDC transmission technologies is planned to be set up in the Jeju power system, Korea. Before applying this system to a real power system on Jeju Island, system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was verified on the proposed test platform, and the results are discussed in detail.

  9. Performance analysis of a model-sized superconducting DC transmission system based VSC-HVDC transmission technologies using RTDS

    NASA Astrophysics Data System (ADS)

    Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun

    2012-08-01

    The combination of a high-temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) creates a new option for transmitting power with multiple collection and distribution points for long-distance and bulk power transmission. It offers greater advantages compared with HVAC or conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources in existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on VSC-HVDC transmission technologies is planned to be set up in the Jeju power system, Korea. Before applying this system to a real power system on Jeju Island, system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was verified on the proposed test platform, and the results are discussed in detail.

  10. Panoramic Night Vision Goggle Testing For Diagnosis and Repair

    DTIC Science & Technology

    2000-01-01

    Visual Acuity Visual Acuity [ Marasco & Task, 1999] measures how well a human observer can see high contrast targets at specified light levels through...grid through the PNVG in-board and out-board channels simultaneously and comparing the defects to the size of grid features ( Marasco & Task, 1999). The

  11. Domain modeling and grid generation for multi-block structured grids with application to aerodynamic and hydrodynamic configurations

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Boerstoel, J. W.; Vitagliano, P. L.; Kuyvenhoven, J. L.

    1992-01-01

    About five years ago, a joint development was started of a flow simulation system for engine-airframe integration studies on both propeller and jet aircraft. The initial system was based on the Euler equations and made operational for industrial aerodynamic design work. The system consists of three major components: a domain modeller, for the graphical interactive subdivision of flow domains into an unstructured collection of blocks; a grid generator, for the graphical interactive computation of structured grids in blocks; and a flow solver, for the computation of flows on multi-block grids. The industrial partners of the collaboration and NLR have demonstrated that the domain modeller, grid generator and flow solver can be applied to simulate Euler flows around complete aircraft, including propulsion system simulation. Extension to Navier-Stokes flows is in progress. Delft Hydraulics has shown that both the domain modeller and grid generator can also be applied successfully to hydrodynamic configurations. An overview is given of the main aspects of both domain modelling and grid generation.

  12. A method of selecting grid size to account for Hertz deformation in finite element analysis of spur gears

    NASA Technical Reports Server (NTRS)

    Coy, J. J.; Chao, C. H. C.

    1981-01-01

    A method of selecting grid size for the finite element analysis of gear tooth deflection is presented. The method is based on a finite element study of two cylinders in line contact, where the criterion for establishing element size was that there be agreement with the classical Hertzian solution for deflection. The results are applied to calculate deflection for the gear specimen used in the NASA spur gear test rig. Comparisons are made between the present results and the results of two other methods of calculation. The results have application in design of gear tooth profile modifications to reduce noise and dynamic loads.
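
    The Hertzian line-contact solution used as the benchmark can be evaluated in closed form, and an element size can then be tied to the width of the contact strip. The Python sketch below uses the standard half-width formula for two parallel cylinders; the eight-elements-per-half-width rule and the steel example are illustrative assumptions, not the criterion derived in the paper:

```python
import math

def hertz_half_width(load_per_length, r1, r2, e1, e2, nu1, nu2):
    """Half-width b of the Hertzian line-contact strip between two
    parallel cylinders: b = sqrt(4 * w * R_eff / (pi * E_star)), with
    1/R_eff = 1/r1 + 1/r2 and 1/E_star the combined plane-strain modulus."""
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)
    e_star = 1.0 / ((1.0 - nu1 ** 2) / e1 + (1.0 - nu2 ** 2) / e2)
    return math.sqrt(4.0 * load_per_length * r_eff / (math.pi * e_star))

def suggested_element_size(b, elements_across_half_width=8):
    """Hypothetical rule of thumb: resolve the contact half-width with a
    fixed number of elements."""
    return b / elements_across_half_width

# Steel rollers, 20 mm and 30 mm radii, 100 kN/m line load
b = hertz_half_width(1.0e5, 0.020, 0.030, 210e9, 210e9, 0.3, 0.3)
h = suggested_element_size(b)  # element size well below b
```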

  13. Optimization of a centrifugal compressor impeller using CFD: the choice of simulation model parameters

    NASA Astrophysics Data System (ADS)

    Neverov, V. V.; Kozhukhov, Y. V.; Yablokov, A. M.; Lebedev, A. A.

    2017-08-01

    Nowadays, optimization using computational fluid dynamics (CFD) plays an important role in the design process of turbomachines. However, for successful and productive optimization it is necessary to define the simulation model correctly and rationally. The article deals with the choice of grid and computational domain parameters for optimization of centrifugal compressor impellers using computational fluid dynamics. Searching for and applying optimal parameters of the grid model, the computational domain and the solver settings allows engineers to carry out high-accuracy modelling and to use computational capability effectively. The presented research was conducted using the Numeca Fine/Turbo package with the Spalart-Allmaras and Shear Stress Transport turbulence models. Two radial impellers were investigated: a high-pressure impeller at ψT=0.71 and a low-pressure impeller at ψT=0.43. The following parameters of the computational model were considered: the location of the inlet and outlet boundaries, the type of mesh topology, the mesh size, and the mesh parameter y+. Results of the investigation demonstrate that the choice of optimal parameters leads to a significant reduction of the computational time. Optimal parameters, in comparison with non-optimal but visually similar parameters, can reduce the calculation time by up to a factor of 4. Besides, it is established that some parameters have a major impact on the result of modelling.
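
    The mesh parameter y+ mentioned above is usually converted into a wall-adjacent cell height before meshing. A common pre-meshing estimate based on a flat-plate skin-friction correlation is sketched below; the correlation, fluid properties and function name are generic assumptions, not taken from the article:

```python
def first_cell_height(y_plus, u_inf, rho, mu, ref_length):
    """Estimate the wall-adjacent cell height for a target y+ using the
    flat-plate skin-friction correlation Cf = 0.026 * Re**(-1/7):
    tau_w = Cf * 0.5 * rho * U^2, u_tau = sqrt(tau_w / rho),
    y = y+ * mu / (rho * u_tau)."""
    re = rho * u_inf * ref_length / mu
    cf = 0.026 * re ** (-1.0 / 7.0)
    tau_w = 0.5 * cf * rho * u_inf ** 2
    u_tau = (tau_w / rho) ** 0.5
    return y_plus * mu / (rho * u_tau)

# Air at roughly sea-level conditions, 100 m/s, 0.1 m reference length
h = first_cell_height(y_plus=1.0, u_inf=100.0, rho=1.2, mu=1.8e-5,
                      ref_length=0.1)  # a few micrometres
```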

  14. FUN3D and CFL3D Computations for the First High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.

    2011-01-01

    Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.

  15. Simplified galaxy formation with mesh-less hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Volonteri, Marta; Silk, Joseph

    2017-09-01

    Numerical simulations have become a necessary tool to describe the complex interactions among the different processes involved in galaxy formation and evolution, unfeasible via an analytic approach. The last decade has seen a great effort by the scientific community in improving the sub-grid physics modelling and the numerical techniques used to make numerical simulations more predictive. Although the recently publicly available code gizmo has proven successful in reproducing galaxy properties when coupled with the model of the MUFASA simulations and the more sophisticated prescriptions of the Feedback In Realistic Environment (FIRE) set-up, it has not yet been tested using delayed-cooling supernova feedback, which still represents a reasonable approach for large cosmological simulations, for which detailed sub-grid models are prohibitive. In order to limit the computational cost and to be able to resolve the disc structure in the galaxies, we perform a suite of zoom-in cosmological simulations at rather low resolution, centred on a sub-L* galaxy with a halo mass of 3 × 10^11 M⊙ at z = 0, to investigate the ability of this simple model, coupled with the new hydrodynamic method of gizmo, to reproduce observed galaxy scaling relations (stellar to halo mass, stellar and baryonic Tully-Fisher, stellar mass-metallicity and mass-size). We find that the results are in good agreement with the main scaling relations, except for the total stellar mass, larger than that predicted by the abundance matching technique, and the effective sizes for the most massive galaxies in the sample, which are too small.

  16. Effects of downscaled high-resolution meteorological data on the PSCF identification of emission sources

    DOE PAGES

    Cheng, Meng -Dawn; Kabela, Erik D.

    2016-04-30

    The Potential Source Contribution Function (PSCF) model has been successfully used for identifying regions of emission sources at a long distance. The PSCF model relies on backward trajectories calculated by the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) model. In this study, we investigated the impacts of grid resolution and Planetary Boundary Layer (PBL) parameterization (e.g., turbulent transport of pollutants) on the PSCF analysis. The Mellor-Yamada-Janjic (MYJ) and Yonsei University (YSU) parameterization schemes were selected to model the turbulent transport in the PBL within the Weather Research and Forecasting (WRF version 3.6) model. Two separate domain grid sizes (83 and 27 km) were chosen in the WRF downscaling to generate the wind data driving the HYSPLIT calculation. The effects of grid size and PBL parameterization are important in incorporating the influence of regional and local meteorological processes, such as jet streaks, blocking patterns, Rossby waves, and terrain-induced convection, on the transport of pollutants along a wind trajectory. We found that the high-resolution PSCF did discover and locate source areas more precisely than the PSCF with lower-resolution meteorological inputs. The lack of anticipated improvement could also be because the PBL scheme chosen to produce the WRF data was only a local parameterization and unable to faithfully duplicate the real atmosphere on a global scale. The MYJ scheme was able to replicate the PSCF source identification obtained with the Reanalysis data and to discover additional source areas that were not identified by the Reanalysis data. In conclusion, a potential benefit of using high-resolution wind data in PSCF modeling is that it could discover new source locations in addition to those identified using the Reanalysis data input.
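
    The PSCF statistic itself is simple: each grid cell's value is m_ij / n_ij, where n_ij counts all trajectory endpoints falling in cell (i, j) and m_ij counts only endpoints from trajectories arriving with concentrations above a threshold. A minimal Python sketch of this textbook form (cell indexing and names are illustrative; the study's actual grid and criteria may differ):

```python
from collections import defaultdict

def pscf(trajectories, concentrations, threshold, cell_deg=1.0):
    """PSCF(i, j) = m_ij / n_ij on a lat/lon grid: n_ij counts all
    trajectory endpoints in cell (i, j); m_ij counts only endpoints of
    trajectories whose receptor concentration exceeded `threshold`."""
    n, m = defaultdict(int), defaultdict(int)
    for endpoints, conc in zip(trajectories, concentrations):
        polluted = conc > threshold
        for lat, lon in endpoints:
            cell = (int(lat // cell_deg), int(lon // cell_deg))
            n[cell] += 1
            if polluted:
                m[cell] += 1
    return {cell: m[cell] / n[cell] for cell in n}

ratios = pscf(
    trajectories=[[(35.2, -84.3), (36.1, -84.9)], [(35.4, -84.1)]],
    concentrations=[10.0, 1.0],  # only the first trajectory is "polluted"
    threshold=5.0,
)
# cell shared by both trajectories scores 0.5; cell visited only by the
# polluted trajectory scores 1.0
```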

  17. Thermal History and Mantle Dynamics of Venus

    NASA Technical Reports Server (NTRS)

    Hsui, Albert T.

    1997-01-01

    One objective of this research proposal is to develop a 3-D thermal history model for Venus. The basis of our study is a finite-element computer model to simulate thermal convection of fluids with highly temperature- and pressure-dependent viscosities in a three-dimensional spherical shell. A three-dimensional model for thermal history studies is necessary for the following reasons. To study planetary thermal evolution, one needs to consider global heat budgets of a planet throughout its evolution history; hence, three-dimensional models are necessary. This is in contrast to studies of some local phenomena or local structures, where models of lower dimensions may be sufficient. There are different approaches to treating three-dimensional thermal convection problems. Each approach has its own advantages and disadvantages, so the choice among the various approaches is subjective and dependent on the problem addressed. In our case, we are interested in the effects of viscosities that are highly temperature dependent and whose magnitudes within the computing domain can vary over many orders of magnitude. In order to resolve the rapid change of viscosities, small grid spacings are often necessary. To optimize the amount of computing, variable grids become desirable. Thus, the finite-element numerical approach is chosen for its ability to place grid elements of different sizes over the complete computational domain. For this research proposal, we did not start from scratch and develop the finite-element codes from the beginning. Instead, we adopted a finite-element model developed by Baumgardner, a collaborator on this research proposal, for three-dimensional thermal convection with constant viscosity. Over the duration supported by this research proposal, significant advancements have been accomplished.

  18. Preserving privacy whilst maintaining robust epidemiological predictions.

    PubMed

    Werkman, Marleen; Tildesley, Michael J; Brooks-Pollock, Ellen; Keeling, Matt J

    2016-12-01

    Mathematical models are invaluable tools for quantifying potential epidemics and devising optimal control strategies in case of an outbreak. State-of-the-art models increasingly require detailed individual farm-based and sensitive data, which may not be available due to either lack of capacity for data collection or privacy concerns. However, in many situations, aggregated data are available for use. In this study, we systematically investigate the accuracy of predictions made by mathematical models initialised with varying data aggregations, using the UK 2001 Foot-and-Mouth Disease Epidemic as a case study. We consider the scenario when the only data available are aggregated into spatial grid cells, and develop a metapopulation model where individual farms in a single subpopulation are assumed to behave uniformly and transmit randomly. We also adapt this standard metapopulation model to capture heterogeneity in farm size and composition, using farm census data. Our results show that homogeneous models based on aggregated data overestimate final epidemic size but can perform well for predicting spatial spread. Recognising heterogeneity in farm sizes improves predictions of the final epidemic size, identifying risk areas, determining the likelihood of epidemic take-off and identifying the optimal control strategy. In conclusion, in cases where individual farm-based data are not available, models can still generate meaningful predictions, although care must be taken in their interpretation and use.

  19. A fast dynamic grid adaption scheme for meteorological flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiedler, B.H.; Trapp, R.J.

    1993-10-01

    The continuous dynamic grid adaption (CDGA) technique is applied to a compressible, three-dimensional model of a rising thermal. The computational cost, per grid point per time step, of using CDGA instead of a fixed, uniform Cartesian grid is about 53% of the total cost of the model with CDGA. The use of general curvilinear coordinates contributes 11.7% to this total, calculating and moving the grid 6.1%, and continually updating the transformation relations 20.7%. Costs due to calculations that involve the gridpoint velocities (as well as some substantial unexplained costs) contribute the remaining 14.5%. A simple way to limit the cost of calculating the grid is presented. The grid is adapted by solving an elliptic equation for gridpoint coordinates on a coarse grid and then interpolating the full finite-difference grid. In this application, the additional costs per grid point of CDGA are shown to be easily offset by the savings resulting from the reduction in the required number of grid points. In the simulation of the thermal, costs are reduced by a factor of 3 as compared with those of a companion model with a fixed, uniform Cartesian grid.

  20. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    NASA Astrophysics Data System (ADS)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodical approach of the simulation model is presented and detailed descriptions of the grid model and the used grid data, which partly originates from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and implies more grid relieving effects than the evaluated grid optimisation measures.

  1. Turbulence Impact on Wind Turbines: Experimental Investigations on a Wind Turbine Model

    NASA Astrophysics Data System (ADS)

    Al-Abadi, A.; Kim, Y. J.; Ertunç, Ö.; Delgado, A.

    2016-09-01

    Experimental investigations have been conducted by exposing an efficient wind-turbine model to different turbulence levels in a wind tunnel. Nearly isotropic turbulence is generated using two static square grids: a fine and a coarse one. In addition, the distance between the wind turbine and the grid is adjusted; hence, as the turbulence decays in the flow direction, the wind turbine is exposed to turbulence with various energy and length-scale content. The development of turbulence scales in the flow direction at various Reynolds numbers and grid mesh sizes is measured. Those measurements are conducted with hot-wire anemometry in the absence of the wind turbine. Detailed measurements and analysis of the upstream and downstream velocities, turbulence intensity and spectrum distributions are performed. Performance measurements are conducted with and without turbulence grids and the results are compared, using an experimental setup that allows the torque and rotational speed to be obtained from the electrical parameters. The study shows that the higher the turbulence level, the higher the power coefficient. This is due to several effects. First is the interaction of turbulence scales with the blade surface boundary layer, which delays stall, keeping the boundary layer attached and preventing separation, and hence enhancing the aerodynamic characteristics of the blade. In addition, higher turbulence helps to damp the tip vortices and thus reduces the tip losses; adding winglets to the blade tip further reduces the tip vortex. Further investigations of the intersection between the near and far wake and the surrounding flow are performed to understand the energy exchange and the free-stream entrainment that help the wake velocity recover.

  2. Spatial heterogeneity of Cs-137 soil contamination at the landscape scale of the Bryansk Region (Russia)

    NASA Astrophysics Data System (ADS)

    Sokolov, Alexander; Sokolov, Anton; Linnik, Vitaly

    2016-04-01

    The passage of the Chernobyl plume over the Bryansk region (Russia) at the end of April 1986 led to the deposition of radionuclides on the ground by wet and dry deposition processes. The results of a Cs-137 air gamma survey (AGS, grid size: 100 m × 100 m) conducted in summer 1993 showed that lateral migration of Cs-137 had taken place: in the seven years after the Chernobyl accident, Cs-137 increased nearly fourfold at the lower slope as compared to the upper part of the slope. The variability patterns of Cs-137 could be described by a stochastic or a deterministic function of the measurement location, and the pattern variations could be associated with the nonlinear response of many interacting variables within the landscape system. In the test area, located at a distance of about 280 km from the Chernobyl Nuclear Power Plant, Cs-137 surface activity typically ranges from below 7 kBq/m2 to approximately 50-60 kBq/m2, reflecting the combination of deposition due to global fallout from the atmospheric testing of nuclear weapons and the relatively low levels of Chernobyl deposition in the area. To model the Cs-137 distribution depending on complex landscape attributes, the following information layers were used: 1) the soil map at the scale of 1:50,000; 2) SRTM elevation data acquired from the Global Land Cover Facility at a 3 arc-second resolution. Fundamental difficulties in distributed erosion modelling arise from the natural complexity of landscape systems and from Cs-137 spatial heterogeneity. The SRTM DEM of the test site has a grid size of about 90 m, which is not sufficient for distributed hydrological modelling at the landscape scale. The scaling problem arises because of the mismatch between the SRTM DEM pixel dimensions and the size of the erosion network (width about 10-50 m) that concentrates Cs-137 run-off from the overlying slopes and watershed areas.
To build a hydrologically correct local drain direction (LDD) grid at 12.5, 25 and 50 m, a downscaling procedure was applied. The procedure is based on an original data-approximation method, Simplicity versus Fitting (SvF), which seeks a compromise between the simplicity of a model and the precision with which it replicates the experimental data. Using the downscaling method in a similar way, maps of cesium distribution at mesh sizes of 12.5, 25 and 50 m were built, and the scaling relationships between cesium heterogeneity and DEM derivatives across map resolutions (pixel sizes) were studied.

  3. Short-term Time Step Convergence in a Climate Model

    DOE PAGES

    Wan, Hui; Rasch, Philip J.; Taylor, Mark; ...

    2015-02-11

    A testing procedure is designed to assess the convergence property of a global climate model with respect to time step size, based on evaluation of the root-mean-square temperature difference at the end of very short (1 h) simulations with time step sizes ranging from 1 s to 1800 s. A set of validation tests conducted without sub-grid scale parameterizations confirmed that the method was able to correctly assess the convergence rate of the dynamical core under various configurations. The testing procedure was then applied to the full model, and revealed a slow convergence of order 0.4 in contrast to the expected first-order convergence. Sensitivity experiments showed without ambiguity that the time stepping errors in the model were dominated by those from the stratiform cloud parameterizations, in particular the cloud microphysics. This provides clear guidance for future work on the design of more accurate numerical methods for time stepping and process coupling in the model.
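The observed convergence order in such a test follows from comparing errors at two step sizes: if E(dt) is the RMS difference against a reference solution, the order is log(E1/E2)/log(dt1/dt2). A minimal sketch, with synthetic placeholder errors rather than values from the paper:

```python
import math

# Minimal sketch: estimate the observed order of convergence from
# root-mean-square errors E(dt) at two time step sizes. The error values
# below are synthetic placeholders, not data from the study.
def observed_order(e_coarse, e_fine, dt_coarse, dt_fine):
    return math.log(e_coarse / e_fine) / math.log(dt_coarse / dt_fine)

# A first-order scheme halves its error when the step is halved:
p = observed_order(e_coarse=0.08, e_fine=0.04, dt_coarse=1800.0, dt_fine=900.0)
print(round(p, 2))  # → 1.0
```

An order near 0.4, as reported above, would correspond to the error shrinking by only about 2**0.4 ≈ 1.3 per halving of the step.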

  4. Microgrid Design Toolkit (MDT) Technical Documentation and Component Summaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arguello, Bryan; Gearhart, Jared Lee; Jones, Katherine A.

    2015-09-01

    The Microgrid Design Toolkit (MDT) is a decision support software tool for microgrid designers to use during the microgrid design process. The models that support the two main capabilities in MDT are described. The first capability, the Microgrid Sizing Capability (MSC), is used to determine the size and composition of a new microgrid in the early stages of the design process. MSC is a mixed-integer linear program that is focused on developing a microgrid that is economically viable when connected to the grid. The second capability is focused on refining a microgrid design for operation in islanded mode. This second capability relies on two models: the Technology Management Optimization (TMO) model and the Performance Reliability Model (PRM). TMO uses a genetic algorithm to create and refine a collection of candidate microgrid designs. It uses PRM, a simulation-based reliability model, to assess the performance of these designs. TMO produces a collection of microgrid designs that perform well with respect to one or more performance metrics.

  5. Method for experimental investigation of transient operation on Laval test stand for model size turbines

    NASA Astrophysics Data System (ADS)

    Fraser, R.; Coulaud, M.; Aeschlimann, V.; Lemay, J.; Deschenes, C.

    2016-11-01

    With the growing proportion of intermittent energy sources such as wind and solar, hydroelectricity becomes a first-class source of peak energy for regulating the grid. The resulting increase in start-stop cycles may cause premature ageing of runners, both through a higher number of stress-fluctuation cycles and through higher absolute stress levels. Aiming to sustain good-quality development on fully homologous scale-model turbines, the Hydraulic Machines Laboratory (LAMH) of Laval University has developed a methodology to operate model-size turbines on its test stand in transient regimes such as start-up, stop or load rejection. This methodology maintains a constant head while the wicket gates open or close at a model-scale speed representative of what is done on the prototype. This paper first presents the opening speed on the model based on dimensionless numbers, then the methodology itself and its application. Finally, its limitations and the first results using a bulb turbine are detailed.

  6. Membrane potential dynamics of grid cells

    PubMed Central

    Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.

    2014-01-01

    During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta-amplitude modulations of membrane potential during firing-field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in-vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible ramps of membrane potential depolarization that were tightly correlated with firing fields and constituted their characteristic signature. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984

  7. A hybrid finite-difference and analytic element groundwater model

    USGS Publications Warehouse

    Haitjema, Henk M.; Feinstein, Daniel T.; Hunt, Randall J.; Gusyev, Maksym

    2010-01-01

    Regional finite-difference models tend to have large cell sizes, often on the order of 1–2 km on a side. Although the regional flow patterns in deeper formations may be adequately represented by such a model, the intricate surface water and groundwater interactions in the shallower layers are not. Several stream reaches and nearby wells may occur in a single cell, precluding any meaningful modeling of the surface water and groundwater interactions between the individual features. We propose to replace the upper MODFLOW layer or layers, in which the surface water and groundwater interactions occur, by an analytic element model (GFLOW) that does not employ a model grid; instead, it represents wells and surface waters directly by the use of point-sinks and line-sinks. For many practical cases it suffices to provide GFLOW with the vertical leakage rates calculated in the original coarse MODFLOW model in order to obtain a good representation of surface water and groundwater interactions. However, when the combined transmissivities in the deeper (MODFLOW) layers dominate, the accuracy of the GFLOW solution diminishes. For those cases, an iterative coupling procedure, whereby the leakages between the GFLOW and MODFLOW model are updated, appreciably improves the overall solution, albeit at considerable computational cost. The coupled GFLOW–MODFLOW model is applicable to relatively large areas, in many cases to the entire model domain, thus forming an attractive alternative to local grid refinement or inset models.
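The iterative coupling procedure described above can be abstracted as a fixed-point iteration on the shared leakage field. The stand-in linear responses below are purely illustrative; they are not the GFLOW or MODFLOW solvers.

```python
# Abstract sketch of the coupling idea: two models exchange a shared leakage
# value, each responding to the other's output, until successive iterates
# agree. The response functions and numbers are invented stand-ins, not the
# GFLOW/MODFLOW implementations.
def couple(leakage0, respond_a, respond_b, tol=1e-8, max_iter=100):
    leakage = leakage0
    for _ in range(max_iter):
        heads = respond_a(leakage)        # e.g. analytic-element solve
        new_leakage = respond_b(heads)    # e.g. finite-difference solve
        if abs(new_leakage - leakage) < tol:
            return new_leakage
        leakage = new_leakage
    raise RuntimeError("coupling did not converge")

# Toy contraction with fixed point at leakage = 2.0:
result = couple(0.0, lambda q: 1.0 + 0.5 * q, lambda h: h)
print(round(result, 6))  # → 2.0
```

As the abstract notes, convergence of such an iteration carries real computational cost: each outer iteration is a full solve of both models.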

  8. Evaluation of load flow and grid expansion in a unit-commitment and expansion optimization model SciGRID International Conference on Power Grid Modelling

    NASA Astrophysics Data System (ADS)

    Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven

    2018-02-01

    Energy system models serve as a basis for long-term system planning. Joint optimization of electricity-generating technologies, storage systems and the electricity grid leads to lower total system cost compared to an approach in which grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good trade-off between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load flow model to evaluate, in terms of system reliability, the validity of using a simple but faster transport model within the system optimization model. The main findings are that a higher regional resolution of a system leads to better results compared to an approach in which regions are clustered, as more overloads can be detected. An aggregation of lines between two model regions, compared to a line-sharp representation, has little influence on grid expansion within a system optimizer. In a DC load flow model, overloads can be detected in the line-sharp case, which is therefore preferred. Overall, the regions that need to reinforce the grid are identified within the system optimizer. Finally, the paper recommends the use of a load-flow model to test the validity of the model results.

  9. Simulations of the DARPA Suboff Submarine Including Self-Propulsion with the E1619 Propeller

    DTIC Science & Technology

    2012-01-01

    and experiments are remarkable, including the maximum velocity in the wake of the 37 blades, the velocity deficit induced by the tip vortices...added to the wake matches the grid size of the fine grids used for the tips of the blades, thus providing a grid of consistent refinement for the...geometry or larger number of blades for the same advance coefficient. These two mechanisms in a marine propeller lead to larger induced wake

  10. Information Theoretically Secure, Enhanced Johnson Noise Based Key Distribution over the Smart Grid with Switched Filters

    PubMed Central

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions. PMID:23936164

  11. Information theoretically secure, enhanced Johnson noise based key distribution over the smart grid with switched filters.

    PubMed

    Gonzalez, Elias; Kish, Laszlo B; Balog, Robert S; Enjeti, Prasad

    2013-01-01

    We introduce a protocol with a reconfigurable filter system to create non-overlapping single loops in the smart power grid for the realization of the Kirchhoff-Law-Johnson-(like)-Noise secure key distribution system. The protocol is valid for one-dimensional radial networks (chain-like power line) which are typical of the electricity distribution network between the utility and the customer. The speed of the protocol (the number of steps needed) versus grid size is analyzed. When properly generalized, such a system has the potential to achieve unconditionally secure key distribution over the smart power grid of arbitrary geometrical dimensions.

  12. A Solution Adaptive Structured/Unstructured Overset Grid Flow Solver with Applications to Helicopter Rotor Flows

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.

    1995-01-01

    This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.

  13. Arithmetic Data Cube as a Data Intensive Benchmark

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabano, Leonid

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing the 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
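The 2^d views of a data cube over d attributes are simply all subsets of the attribute set, each defining one group-by. A minimal sketch, with made-up attribute names:

```python
from itertools import chain, combinations

# Minimal sketch: enumerate the 2**d group-by views of a data cube over d
# attributes. The attribute names are illustrative, not the ADC's tuple
# parameters.
def all_views(attributes):
    """Yield every subset of attributes, i.e. the 2**d views of a data cube."""
    return chain.from_iterable(
        combinations(attributes, r) for r in range(len(attributes) + 1)
    )

views = list(all_views(("a", "b", "c")))
print(len(views))  # → 8, i.e. 2**3 views, from () up to ('a', 'b', 'c')
```

A benchmark in the spirit of the ADC would materialize each such view over the generated tuples, stressing memory in proportion to the view sizes.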

  14. An Evaluation of Recently Developed RANS-Based Turbulence Models for Flow Over a Two-Dimensional Block Subjected to Different Mesh Structures and Grid Resolutions

    NASA Astrophysics Data System (ADS)

    Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando

    2016-04-01

    Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents the idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind-tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model, in order to assess grid independence. Three grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, and nx × nz = 26 × 32, 13 × 16 and 6 × 8 grid points on the block edges, were chosen and tested. It can be concluded that, among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. The SST γ-Reθ model also gives very satisfactory results in terms of grid dependency at the fine and medium grid resolutions in both regular and irregular mesh structures. On the other hand, despite a very good performance of the RNG k-ɛ model at the fine resolution and on regular structured grids, its disappointing performance at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution.
These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.

  15. SU-E-T-650: Quantification and Modeling of the Dosimetric Impact of the IBEAM Evo Treatment Couchtop EP (Elekta) in VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R; Mannheim Medical Center, Mannheim, Baden-Wurttemberg; Bai, W

    2015-06-15

    Purpose: Quantification and modelling of the dosimetric impact of the treatment couch in the Monaco Treatment Planning System. Methods: The attenuation characteristics of the couchtop EP were evaluated for two different photon acceleration potentials (6 MV and 10 MV) for a field size of (10×10) cm2. Phantom positions in the A-B direction: on the left half, in the center and on the right half of the couch. Dose measurements of couch attenuation were performed at gantry angles from 180° to 122°, using a 0.125 cc semiflex ionization chamber isocentrically placed in the center of a homogeneous cylindrical sliced RW3 phantom. Each experimental setup was first measured on the LINAC and then reproduced in the TPS. By adjusting the relative-to-water electron density (ED) values of the couch, the measured attenuation was replicated. The simulated results were evaluated by comparing the measurements and simulations. Results: Without the couch model included, the maximum difference between measured and calculated dose was 5.5% (5.1%) and 6.6% (6.1%) for 2 mm and 5 mm voxel size, when the phantom was positioned on the left (center). The couch model was included in the TPS either with a uniform ED of 0.18 or as a 2-component model with a fiber ED = 0.6 and a foam core ED = 0.1. After including the treatment couch, the mean dose deviation was reduced from 2.8% (couch not included) to (0.0, 0.8, −0.2, 0.6)%; the four values represent the 1- and 2-component models at 2 and 5 mm voxel grid size. Conclusion: For a uniform relative-to-water couch electron density of 0.18, good agreement between measured and calculated dose distributions was obtained for all energies, voxel grid spacings and gantry angles. We therefore conclude that the Monaco couch model accurately describes the dose perturbations due to the presence of the patient couch and should be used during treatment planning.
This project is supported by Technology Foundation for Selected Overseas Chinese Scholar, Ministry of Hebei Personnel of China.

  16. NREL: International Activities - Country Programs

    Science.gov Websites

    NREL supports the use of mini-grid quality assurance and design standards, advises on mini-grid business models, fosters communities of practice and technical collaboration across countries on mini-grid development, modeling, and interconnection standards and procedures, and helps strengthen mini-grids and energy access programs.

  17. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    We present an algorithm to rapidly calculate the vertical gravity and full tensor gravity (FTG) values due to a 3-D geologic model. This algorithm can be implemented on single-core, multi-core CPU and graphical processing unit (GPU) architectures. Our technique is based on the line-element approximation with a constant density within each grid cell. This type of parameterization is well suited for high-resolution elevation datasets with grid sizes typically in the range of 1 m to 30 m. For the large high-resolution data grids in our studies we employ a pre-filtered mipmap-pyramid representation of the grid data known as the geometry clipmap. The clipmap was first introduced by Microsoft Research in 2004 for fly-through terrain visualization. This method caches nested rectangular extents of down-sampled data layers in the pyramid to create a view-dependent calculation scheme. Together with the simple grid structure, this allows the gravity to be computed conveniently on the fly, or stored in a highly compressed format. Neither of these capabilities has previously been available. Our approach can perform rapid calculations on large topographies, including crustal-scale models derived from complex geologic interpretations. For example, we used a 1 km sphere model consisting of 105,000 cells at 10 m resolution with 100,000 gravity stations. The line-element approach took less than 90 seconds to compute the FTG and vertical gravity on an Intel Core i7 CPU at 3.07 GHz, utilizing just a single core. Also, unlike traditional gravity computational algorithms, the line-element approach can calculate gravity effects at locations interior or exterior to the model. The only condition is that the observation point cannot be located directly above a line element; therefore, we perform a location test and then apply the appropriate formulation to those points.
We will present and compare the computational performance of the traditional prism method versus the line-element approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired, and can be used for fast forward-model calculations of 3-D geologic interpretations for data from airborne, space and submarine gravity and FTG instrumentation.
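The core idea of grid-based forward gravity modelling, summing the vertical attraction of discrete mass elements at a station via Newton's law, can be sketched with point masses (a cruder cousin of the line-element approximation described above). The grid, masses and station below are invented for illustration.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Minimal sketch of forward gravity modelling: sum the vertical component of
# the Newtonian attraction of discrete point masses at a station. The cell
# positions and masses are illustrative, not from the algorithm described.
def vertical_gravity(station, cells, masses):
    """station: (3,) array; cells: (N, 3) element centers; masses: (N,)."""
    d = cells - station                          # vectors station -> elements
    r = np.linalg.norm(d, axis=1)                # distances
    return np.sum(G * masses * d[:, 2] / r**3)   # z-component of attraction

# Two 1e6 kg elements at 100 m depth (z positive downward):
cells = np.array([[0.0, 0.0, 100.0], [10.0, 0.0, 100.0]])
masses = np.array([1.0e6, 1.0e6])
gz = vertical_gravity(np.zeros(3), cells, masses)   # m/s^2, positive down
```

A line-element or prism formulation replaces the point-mass kernel with an analytic integral over each element, but the station loop and summation structure stay the same.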

  18. OpenMP performance for benchmark 2D shallow water equations using LBM

    NASA Astrophysics Data System (ADS)

    Sabri, Khairul; Rabbani, Hasbi; Gunawan, Putu Harry

    2018-03-01

    Shallow water equations, commonly referred to as Saint-Venant equations, are used to model fluid phenomena. These equations can be solved numerically by several methods, such as the Lattice Boltzmann method (LBM), SIMPLE-like methods, the finite difference method, Godunov-type methods, and the finite volume method. In this paper, the shallow water equations are approximated using the LBM (known as LABSWE) and simulated in parallel using OpenMP. To evaluate the performance of the 2- and 4-thread parallel algorithm, ten different grid sizes Lx and Ly are elaborated. The results show that, using the OpenMP platform, the computational time for solving LABSWE can be decreased. For instance, for grid sizes of 1000 × 500, the computational times with 2 and 4 threads are observed to be 93.54 s and 333.243 s, respectively.
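Parallel performance figures like those above are usually summarized as speedup and efficiency. A minimal sketch with placeholder timings, not the measurements from the paper:

```python
# Minimal sketch: speedup and parallel efficiency from wall-clock times.
# The timings below are assumed placeholders, not data from the study.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_threads):
    return speedup(t_serial, t_parallel) / n_threads

t1, t2 = 600.0, 333.0   # assumed serial and 2-thread times, seconds
print(round(speedup(t1, t2), 2))        # → 1.8
print(round(efficiency(t1, t2, 2), 2))  # → 0.9
```

Efficiency below 1.0 reflects synchronization and memory-bandwidth overheads, which for memory-bound stencil codes like LBM often grow with thread count.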

  19. Application Note: Power Grid Modeling With Xyce.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sholander, Peter E.

    This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce™ Parallel Electronic Simulator developed at Sandia National Labs. It provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids, with a focus on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.

  20. New battery model considering thermal transport and partial charge stationary effects in photovoltaic off-grid applications

    NASA Astrophysics Data System (ADS)

    Sanz-Gorrachategui, Iván; Bernal, Carlos; Oyarbide, Estanis; Garayalde, Erik; Aizpuru, Iosu; Canales, Jose María; Bono-Nuez, Antonio

    2018-02-01

    The optimization of the battery pack in an off-grid photovoltaic application must consider the minimum sizing that assures the availability of the system under the worst environmental conditions. Thus, it is necessary to predict the evolution of the state of charge of the battery under incomplete daily charging and discharging processes and fluctuating temperatures over day-night cycles. Much previous development work has been carried out on modelling the short-term evolution of battery variables. Many works focus on the on-line parameter estimation of available charge, using standard or advanced estimators, but not on developing a model with predictive capabilities; moreover, stable environmental conditions and standard charge-discharge patterns are normally assumed. As actual cycle patterns differ from the manufacturer's tests, batteries fail to perform as expected. This paper proposes a novel methodology to model these issues, with predictive capabilities to estimate the remaining charge in a battery after several solar cycles. A new non-linear state-space model is proposed as a basis, and the methodology to feed and train the model is introduced. The new methodology is validated using experimental data, yielding only 5% error at temperatures higher than the nominal one.

  1. a Marker-Based Eulerian-Lagrangian Method for Multiphase Flow with Supersonic Combustion Applications

    NASA Astrophysics Data System (ADS)

    Fan, Xiaofeng; Wang, Jiangfeng

    2016-06-01

    The atomization of liquid fuel is an intricate dynamic process from continuous phase to discrete phase. Fuel-spray processes in supersonic flow are modeled with an Eulerian-Lagrangian computational fluid dynamics methodology. The method combines two distinct techniques into an integrated numerical simulation of the atomization process. The traditional finite volume method on a stationary (Eulerian) Cartesian grid is used to resolve the flow field; multi-component Navier-Stokes equations are adopted in the present work, accounting for the mass exchange and heat transfer involved in the vaporization process. A marker-based moving (Lagrangian) grid is utilized to depict the behavior of atomized liquid sprays injected into a gaseous environment, and a discrete droplet model is adopted. To verify the current approach, the proposed method is applied to simulate liquid atomization in a supersonic cross flow. Three classic breakup models, the TAB model, the wave model and the K-H/R-T hybrid model, are discussed. The numerical results are compared quantitatively from multiple perspectives, including spray penetration height and droplet size distribution. In addition, the complex flow-field structures induced by the presence of the liquid spray are illustrated and discussed. It is validated that the marker-based Eulerian-Lagrangian method is effective and reliable.

  2. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions (larger Δ+) SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ = 442) with Δ+ = 4 and no-model, DSM and WALE were 1.7%, 2.6%, 3.1% with SRT, and 8.3%, 7.5%, 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
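    The wall-unit resolution figures above translate directly into grid-point counts. As a rough sketch (an illustrative helper, not from the paper): for a uniform grid across a channel of height 2δ, the spacing in wall units is Δ+ = 2·Reτ/N, so the required point count follows immediately.

```python
import math

def channel_grid_points(delta_plus_target, re_tau):
    """Uniform grid points needed across a channel of height 2*delta to
    reach a target spacing in wall units.  With dx = 2*delta/N and wall
    unit nu/u_tau, dx+ = dx*u_tau/nu = 2*Re_tau/N, so N = 2*Re_tau/dx+.
    Illustrative estimate for the resolution requirements quoted above."""
    return math.ceil(2.0 * re_tau / delta_plus_target)

n_les = channel_grid_points(4.0, 442.0)  # LES requirement, Delta+ <= 4
n_dns = channel_grid_points(2.0, 442.0)  # DNS-like requirement, Delta+ <= 2
print(n_les, n_dns)  # 221 442
```

    The factor-of-two gap between the two counts (per direction) is why the abstract concludes that near-wall grid-embedding or a wall model is needed to make LBM-LES pay off.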

  3. PECHCV, PECHFV, PEFHCV and PEFHFV: A set of atmospheric, primitive equation forecast models for the Northern Hemisphere, volume 3

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.; Pearce, M. L.

    1976-01-01

    As part of the SEASAT program of NASA, a set of four hemispheric atmospheric prediction models was developed. The models, which use a polar stereographic grid in the horizontal and a sigma coordinate in the vertical, are: (1) PECHCV - five sigma layers and a 63 x 63 horizontal grid, (2) PECHFV - ten sigma layers and a 63 x 63 horizontal grid, (3) PEFHCV - five sigma layers and a 187 x 187 horizontal grid, and (4) PEFHFV - ten sigma layers and a 187 x 187 horizontal grid. The models and associated computer programs are described.

  4. Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2015-01-01

    In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates smoking-related pulmonary emphysema in CT imaging. Simulated LAA images were generated through space sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model has been used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, have been considered and analyzed. A proper mesh element size has been determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. Such underestimations were larger for RA (≈ -44% to -26%) than for D (≈ -16% to -2%). Our FE model could be useful for generating standard test images and for designing realistic physical phantoms of LAA images to assess the accuracy of descriptors for quantifying emphysema in CT imaging.
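    The two emphysema descriptors can be sketched in a few lines. This is a generic illustration, not the paper's code: RA is the percentage of pixels below an attenuation threshold (the common -950 HU convention is assumed here), and D is estimated by a straight-line fit to the cumulative cluster-size distribution in log-log coordinates, checked on synthetic power-law data.

```python
import numpy as np

def relative_area(image_hu, threshold=-950):
    """RA: percentage of pixels below an attenuation threshold.
    The -950 HU default is a common convention, assumed here."""
    return 100.0 * np.mean(np.asarray(image_hu) < threshold)

def d_exponent(cluster_sizes):
    """Exponent D of the cumulative LAA cluster-size distribution,
    Y(x) ~ x^(-D), via a least-squares line fit in log-log coordinates."""
    sizes = np.sort(np.asarray(cluster_sizes, dtype=float))
    ccdf = 1.0 - np.arange(sizes.size) / sizes.size  # fraction of clusters >= x
    slope, _ = np.polyfit(np.log(sizes), np.log(ccdf), 1)
    return -slope

# Synthetic check: cluster sizes drawn from a Pareto law with exponent 1.5
rng = np.random.default_rng(0)
sizes = (1.0 - rng.random(20000)) ** (-1.0 / 1.5)  # inverse-CDF sampling
print(round(d_exponent(sizes), 1))
```

    Applying these estimators first to the model output and then to the down-sampled LAA images is what exposes the grid-size-dependent underestimation reported above.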

  5. SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.

    PubMed

    Yuan, Y; Duan, J; Popple, R; Brezovich, I

    2012-06-01

    To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 mm, 0.25 mm, and 0.125 mm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by percentage error for ion chamber dose and by the γ>1 failure rate in gamma analysis (3%/3mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm is closer to the ion chamber measurement than that calculated with the PB algorithm with a grid size of 2.5 mm, though all calculated ion chamber doses are within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ>1 failure rate is significantly reduced (within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ>1 failure rates are typically within 5% for both AAA and PB-based treatment planning (grid size = 2.5 mm). For both PB and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in film dosimetry for 11 patients and in ion chamber measurements for 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes. Compared with film dosimetry, a γ>1 failure rate within 5% can be achieved with AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
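    The γ>1 failure rate used above comes from the gamma index, which scores each reference dose point by the nearest evaluated point in a combined dose-difference/distance-to-agreement space. A simplified 1-D, global-normalization sketch (not the clinical software) of the 3%/3 mm criterion:

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, positions_mm, dd=0.03, dta_mm=3.0):
    """1-D gamma index (3%/3 mm by default) of an evaluated dose profile
    against a reference profile on the same grid.  A point passes QA when
    gamma <= 1.  Simplified global-normalization sketch for illustration."""
    ref = np.asarray(dose_ref, float)
    ev = np.asarray(dose_eval, float)
    x = np.asarray(positions_mm, float)
    d_norm = dd * ref.max()  # global dose criterion (3% of max dose)
    gammas = []
    for xi, di in zip(x, ref):
        # squared gamma against every evaluated point; keep the minimum
        g2 = ((x - xi) / dta_mm) ** 2 + ((ev - di) / d_norm) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

x = np.arange(0.0, 50.0, 1.0)             # 1 mm measurement grid
ref = np.exp(-((x - 25.0) / 10.0) ** 2)   # reference dose profile
ev = 1.02 * ref                           # 2% global scaling error
g = gamma_index(ref, ev, x)
fail_rate = 100.0 * np.mean(g > 1.0)
print(fail_rate)  # 0.0 -- a 2% error is within the 3% dose criterion
```

    A uniform 2% dose error passes everywhere, while a 10% error produces γ>1 points near the peak, which is the kind of failure the film-dosimetry comparison counts.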

  6. Adaptive Correlation Space Adjusted Open-Loop Tracking Approach for Vehicle Positioning with Global Navigation Satellite System in Urban Areas

    PubMed Central

    Ruan, Hang; Li, Jian; Zhang, Lei; Long, Teng

    2015-01-01

    For vehicle positioning with Global Navigation Satellite System (GNSS) in urban areas, open-loop tracking shows better performance because of its high sensitivity and superior robustness against multipath. However, no previous study has focused on the effects of the code search grid size on the code phase measurement accuracy of open-loop tracking. Traditional open-loop tracking methods are performed by batch correlators with fixed correlation space. The code search grid size, which is the correlation space, is a constant empirical value, and the code phase measurement accuracy is largely degraded by an improper grid size, especially when the signal carrier-to-noise density ratio (C/N0) varies. In this study, the Adaptive Correlation Space Adjusted Open-Loop Tracking Approach (ACSA-OLTA) is proposed to improve the code-phase-measurement-dependent pseudo range accuracy. In ACSA-OLTA, the correlation space is adjusted according to the signal C/N0. The novel Equivalent Weighted Pseudo Range Error (EWPRE) is introduced to obtain the optimal code search grid sizes for different C/N0. The code phase measurement errors of different measurement calculation methods are analyzed for the first time. The measurement calculation strategy of ACSA-OLTA is derived from this analysis to further improve accuracy while reducing correlator consumption. Performance simulations and real tests confirm that the pseudo range and positioning accuracy of ACSA-OLTA are better than those of traditional open-loop tracking methods in typical urban scenarios. PMID:26343683

  7. Stability assessment of a multi-port power electronic interface for hybrid micro-grid applications

    NASA Astrophysics Data System (ADS)

    Shamsi, Pourya

    Migration to an industrial society increases the demand for electrical energy. Meanwhile, social pressure to preserve the environment and reduce pollution calls for cleaner forms of energy. Therefore, there has been growth in distributed generation from renewable sources in the past decade. Existing regulations and power system coordination do not allow for massive integration of distributed generation throughout the grid. Moreover, the current infrastructure is not designed for interfacing distributed and deregulated generation. To remedy this problem, a hybrid micro-grid based on nano-grids is introduced. This system consists of a reliable micro-grid structure that provides a smooth transition from current distribution networks to smart micro-grid systems. Multi-port power electronic interfaces are introduced to manage local generation, storage, and consumption. Afterwards, a model for this micro-grid is derived. Using this model, the stability of the system under a variety of source- and load-induced disturbances is studied. Moreover, a pole-zero study of the micro-grid is performed under various loading conditions. An experimental setup of this micro-grid is developed, and the validity of the model in emulating the dynamic behavior of the system is verified. This study provides a theory for a novel hybrid micro-grid as well as models for stability assessment of the proposed micro-grid.

  8. Towards the Development of a More Accurate Monitoring Procedure for Invertebrate Populations, in the Presence of an Unknown Spatial Pattern of Population Distribution in the Field

    PubMed Central

    Petrovskaya, Natalia B.; Forbes, Emily; Petrovskii, Sergei V.; Walters, Keith F. A.

    2018-01-01

    Studies addressing many ecological problems require accurate evaluation of the total population size. In this paper, we revisit a sampling procedure used for the evaluation of the abundance of an invertebrate population from assessment data collected on a spatial grid of sampling locations. We first discuss how insufficient information about the spatial population density obtained on a coarse sampling grid may affect the accuracy of an evaluation of total population size. Such information deficit in field data can arise because of inadequate spatial resolution of the population distribution (spatially variable population density) when coarse grids are used, which is especially true when a strongly heterogeneous spatial population density is sampled. We then argue that the average trap count (the quantity routinely used to quantify abundance), if obtained from a sampling grid that is too coarse, is a random variable because of the uncertainty in sampling spatial data. Finally, we show that a probabilistic approach similar to bootstrapping techniques can be an efficient tool to quantify the uncertainty in the evaluation procedure in the presence of a spatial pattern reflecting a patchy distribution of invertebrates within the sampling grid. PMID:29495513
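    The probabilistic evaluation described above can be sketched by bootstrapping the grid of trap counts: resampling the cells with replacement quantifies how uncertain the population estimate is when the grid is coarse and the distribution patchy. The counts, areas, and interval choice below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def bootstrap_abundance(trap_counts, area_per_trap, total_area,
                        n_boot=2000, seed=1):
    """Bootstrap the mean trap count to quantify uncertainty in the total
    population estimate.  Resamples the grid counts with replacement and
    returns the point estimate with a 95% percentile interval.
    Illustrative sketch of a bootstrap-style evaluation."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(trap_counts, float)
    factor = total_area / area_per_trap      # mean count per trap -> population
    estimate = counts.mean() * factor
    boots = [rng.choice(counts, size=counts.size).mean() * factor
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return estimate, lo, hi

# A patchy distribution: most cells near-empty, a few dense clusters
counts = [0, 0, 1, 0, 2, 0, 0, 14, 0, 1, 0, 0, 9, 0, 0, 0]
est, lo, hi = bootstrap_abundance(counts, area_per_trap=1.0, total_area=16.0)
print(round(est), round(lo), round(hi))
```

    For a patchy field like this, the interval is wide relative to the point estimate, which is exactly the information-deficit effect the paper attributes to coarse sampling grids.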

  9. Fabrication and characterization of self-folding thermoplastic sheets using unbalanced thermal shrinkage.

    PubMed

    Danielson, Christian; Mehrnezhad, Ali; YekrangSafakar, Ashkan; Park, Kidong

    2017-06-14

    Self-folding or micro-origami technologies are actively investigated as a novel manufacturing process to fabricate three-dimensional macro/micro-structures. In this paper, we present a simple process to produce a self-folding structure with a biaxially oriented polystyrene sheet (BOPS) or Shrinky Dinks. A BOPS sheet is known to shrink to one-third of its original size in plane, when it is heated above 160 °C. A grid pattern is engraved on one side of the BOPS film with a laser engraver to decrease the thermal shrinkage of the engraved side. The thermal shrinkage of the non-engraved side remains the same and this unbalanced thermal shrinkage causes folding of the structure as the structure shrinks at high temperature. We investigated the self-folding mechanism and characterized how the grid geometry, the grid size, and the power of the laser engraver affect the bending curvature. The developed fabrication process to locally modulate thermomechanical properties of the material by engraving the grid pattern and the demonstrated design methodology to harness the unbalanced thermal shrinkage can be applied to develop complicated self-folding macro/micro structures.

  10. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes were derived based on plane wave theory, and the FD coefficients were then obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method achieves higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
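    As a sketch of the Taylor-expansion step, the coefficients of a standard staggered-grid first-derivative stencil can be obtained by matching Taylor-series terms and solving the resulting linear system (a generic textbook construction, not the paper's optimal-coefficient scheme, which optimizes the dispersion relation instead):

```python
import numpy as np
from math import factorial

def staggered_fd_coeffs(m_order):
    """Coefficients c_m of the 2M-th order staggered-grid first-derivative
    stencil f'(0) ~ (1/h) * sum_m c_m [f((m-1/2)h) - f(-(m-1/2)h)],
    obtained by matching Taylor-series terms.
    Row k of the system: sum_m c_m * 2*(m-1/2)^(2k-1)/(2k-1)! = delta_k1."""
    m = np.arange(1, m_order + 1)
    a = np.array([[2.0 * (mm - 0.5) ** (2 * kk - 1) / factorial(2 * kk - 1)
                   for mm in m] for kk in m])
    b = np.zeros(m_order)
    b[0] = 1.0
    return np.linalg.solve(a, b)

# M = 1 recovers the familiar second-order central difference: [1]
print(staggered_fd_coeffs(1))
# M = 2 gives the classic fourth-order staggered pair 9/8, -1/24
print(np.round(staggered_fd_coeffs(2), 6))
```

    Variable-grid schemes reuse this machinery with different h (and time-step) per region; the paper's optimal coefficients would replace the pure Taylor solve with a dispersion-error minimization.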

  11. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Gujarat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Gujarat is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and renewable energy (RE) generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or the must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study model, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among RE location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  12. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Tamil Nadu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Tamil Nadu is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and renewable energy (RE) generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or the must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study model, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among RE location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  13. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Rajasthan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Rajasthan is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and renewable energy (RE) generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or the must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study model, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among RE location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  14. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Andhra Pradesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Andhra Pradesh is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and renewable energy (RE) generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or the must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study model, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among RE location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  15. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Karnataka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Karnataka is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and renewable energy (RE) generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or the must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study model, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among RE location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  16. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Maharashtra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Maharashtra is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and renewable energy (RE) generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or the must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study model, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among RE location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  17. Streamline integration as a method for two-dimensional elliptic grid generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary-aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: • Construct structured, elliptic numerical grids with elementary numerical methods. • Align coordinate lines with, or make them orthogonal to, the domain boundary. • Compute grid points and metric elements up to machine precision. • Control cell distribution by adaption functions or monitor metrics.

  18. Turbulent premixed flames on fractal-grid-generated turbulence

    NASA Astrophysics Data System (ADS)

    Soulopoulos, N.; Kerl, J.; Sponfeldner, T.; Beyrau, F.; Hardalupas, Y.; Taylor, A. M. K. P.; Vassilicos, J. C.

    2013-12-01

    A space-filling, low blockage fractal grid is used as a novel turbulence generator in a premixed turbulent flame stabilized by a rod. The study compares the flame behaviour with a fractal grid to the behaviour when a standard square mesh grid with the same effective mesh size and solidity as the fractal grid is used. The isothermal gas flow turbulence characteristics, including mean flow velocity and rms of velocity fluctuations and Taylor length, were evaluated from hot-wire measurements. The behaviour of the flames was assessed with direct chemiluminescence emission from the flame and high-speed OH-laser-induced fluorescence. The characteristics of the two flames are considered in terms of turbulent flame thickness, local flame curvature and turbulent flame speed. It is found that, for the same flow rate and stoichiometry and at the same distance downstream of the location of the grid, fractal-grid-generated turbulence leads to a more turbulent flame with enhanced burning rate and increased flame surface area.

  19. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.

  20. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. This is especially true for the forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is carried to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which achieves accuracy similar to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
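    The cost argument behind a depth-coarsened stack of sub-grids can be illustrated with a simple cell count. This sketch assumes a halving of horizontal resolution at each deeper level; the grid dimensions are invented for illustration and are not from the paper.

```python
def multires_cell_count(nx_surface, nz_per_level, n_levels):
    """Cell count of a multi-resolution grid built as a vertical stack of
    sub-grids, halving the horizontal resolution at each deeper level
    (refinement only with depth).  Returns (multi-resolution cells,
    cells of the equivalent uniform fine grid).  Dimensions illustrative."""
    multi = sum((nx_surface // 2 ** level) ** 2 * nz_per_level
                for level in range(n_levels))
    uniform = nx_surface ** 2 * nz_per_level * n_levels
    return multi, uniform

# 256 x 256 horizontal cells at the surface, 10 layers per level, 3 levels
multi, uniform = multires_cell_count(256, 10, 3)
print(multi, uniform, round(uniform / multi, 2))
```

    Even with only three levels, the stacked grid carries less than half the cells of the uniform fine grid, which is where the efficiency gain of the scheme comes from; the hard part, as the abstract notes, is the operators on the sub-grid interfaces.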

  1. Effective grid-dependent dispersion coefficient for conservative and reactive transport simulations in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Cortinez, J. M.; Valocchi, A. J.; Herrera, P. A.

    2013-12-01

    Because of the finite size of numerical grids, it is very difficult to correctly account for processes that occur at different spatial scales when simulating the migration of conservative and reactive compounds dissolved in groundwater. On the one hand, transport processes in heterogeneous porous media are controlled by local-scale dispersion associated with transport processes at the pore scale. On the other hand, variations of velocity at the continuum or Darcy scale produce spreading of the contaminant plume, which is referred to as macro-dispersion. Furthermore, under some conditions both effects interact, so that spreading may enhance the action of local-scale dispersion, resulting in higher mixing, dilution, and reaction rates. Traditionally, transport processes at different spatial scales have been included in numerical simulations by using a single dispersion coefficient. This approach implicitly assumes that the separate effects of local dispersion and macro-dispersion can be added and represented by a unique effective dispersion coefficient. Moreover, the selection of the effective dispersion coefficient for numerical simulations usually does not consider the filtering effect of the grid size on small-scale flow features. We have developed a multi-scale Lagrangian numerical method that allows using two different dispersion coefficients to represent local- and macro-scale dispersion. This technique considers fluid particles that carry solute mass and whose locations evolve according to a deterministic component given by the grid-scale velocity and a stochastic component that corresponds to a block-effective macro-dispersion coefficient. Mass transfer between particles due to local-scale dispersion is approximated by a meshless method. We use our model to test under which transport conditions the combined effects of local- and macro-dispersion are additive and can be represented by a single effective dispersion coefficient.
    We also demonstrate that for situations where both processes are additive, an effective grid-dependent dispersion coefficient can be derived based on the concept of block-effective dispersion. We show that the proposed effective dispersion coefficient is able to reproduce dilution, mixing, and reaction rates for a wide range of transport conditions similar to those found in many practical applications.
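    The random-walk particle scheme described above can be sketched in a few lines. The 1-D example below is illustrative only, not the authors' code: particles take a deterministic advective step plus a stochastic macro-dispersion step, and local-scale dispersion is approximated as a second random increment rather than the paper's meshless mass-transfer scheme; all coefficient values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D random-walk particle tracking with two dispersion scales.
# All values are hypothetical; the paper's scheme treats local dispersion
# with a meshless mass-transfer step between particles instead.
n_particles = 5000
n_steps = 100
dt = 0.1          # time step
v = 1.0           # deterministic grid-scale velocity
D_macro = 0.05    # block-effective macro-dispersion coefficient
D_local = 0.01    # local-scale dispersion coefficient

x = np.zeros(n_particles)
for _ in range(n_steps):
    # deterministic advection + stochastic macro-dispersion
    x += v * dt + np.sqrt(2.0 * D_macro * dt) * rng.standard_normal(n_particles)
    # local-scale dispersion, here approximated as an extra random increment
    x += np.sqrt(2.0 * D_local * dt) * rng.standard_normal(n_particles)

t = n_steps * dt
print(x.mean(), x.var())   # mean ~ v*t = 10, variance ~ 2*(D_macro + D_local)*t = 1.2
```

    For independent Gaussian increments the two contributions are additive, so the plume variance grows as 2(D_macro + D_local)t; this is the additivity condition the study examines.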

  2. Spatial Distribution of Bed Particles in Natural Boulder-Bed Streams

    NASA Astrophysics Data System (ADS)

    Clancy, K. F.; Prestegaard, K. L.

    2001-12-01

    The Wolman pebble count is used to obtain the size distribution of bed particles in natural streams. Statistics such as the median particle size (D50) are used in resistance calculations. Additional information, such as bed particle heterogeneity, may also be obtained from the particle distribution, which is used to predict sediment transport rates (Hey, 1979; Ferguson, Prestegaard, and Ashworth, 1989). Boulder-bed streams have an extreme range of particle sizes, from sand-sized particles to particles larger than 0.5 m. A study of a natural boulder-bed reach demonstrated that the spatial distribution of the particles is a significant factor in predicting sediment transport and stream bed and bank stability. Further experiments were performed to test the limits of the spatial distribution's effect on sediment transport. Three stream reaches 40 m in length were selected with similar hydrologic characteristics and spatial distributions but varying average particle sizes. We used a 0.5 by 0.5-m grid and measured four particles within each grid cell. Digital photographs of the streambed were taken in each grid cell. The photographs were examined using image analysis software to obtain the size and position of the largest particles (D84) within the reach's particle distribution. Cross sections, topography, and stream depth were surveyed. Velocity and velocity profiles were measured and recorded. With these data and additional surveys of bankfull floods, we tested the significance of the spatial distributions as average particle size decreases. The spatial distribution of streambed particles may provide information about stream valley formation, bank stability, sediment transport, and the growth rate of riparian vegetation.
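    The percentile statistics mentioned above (D50, D84) follow directly from the counted sample. A minimal sketch, assuming a hypothetical pebble-count sample of intermediate-axis diameters in millimetres:

```python
import numpy as np

# Hypothetical pebble-count sample (intermediate-axis diameters, mm);
# a field Wolman count would typically use ~100 particles.
diameters_mm = np.array([8, 11, 16, 22, 22, 32, 45, 45, 64, 64,
                         90, 90, 128, 180, 256, 256, 360, 512, 512, 700])

d50 = np.percentile(diameters_mm, 50)   # median particle size, used in resistance formulas
d84 = np.percentile(diameters_mm, 84)   # coarse-tail size, often used as a roughness height
print(d50, d84)
```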

  3. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs, however, increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high-performance computing watershed simulator.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hull, L.C.

    The Prickett and Lonnquist two-dimensional groundwater model has been programmed for the Apple II microcomputer. Both leaky and nonleaky confined aquifers can be simulated. The model was adapted from the FORTRAN version of Prickett and Lonnquist. In the configuration presented here, the program requires 64 K bytes of memory. Because of the large number of arrays used in the program, and the memory limitations of the Apple II, the maximum grid size that can be used is 20 rows by 20 columns. Input to the program is interactive, with prompting by the computer. Output consists of predicted head values at the row-column intersections (nodes).
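    The finite-difference idea behind such a model can be illustrated with a minimal steady-state head solver on the same 20 x 20 grid. This is a sketch only: the boundary heads and well term are hypothetical, and the actual Prickett-Lonnquist program solves the transient leaky/nonleaky confined-aquifer equations with an iterative scheme.

```python
import numpy as np

# Minimal steady-state 2-D groundwater head solver on a 20 x 20 grid
# (hypothetical boundary heads and sink; not the Prickett-Lonnquist code).
n = 20
h = np.zeros((n, n))
h[0, :] = 100.0          # fixed-head boundary along one edge
q = np.zeros((n, n))
q[10, 10] = -5.0         # point sink, e.g. a pumping well (arbitrary strength)

for _ in range(500):     # Gauss-Seidel-style relaxation
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            h[i, j] = 0.25 * (h[i-1, j] + h[i+1, j] +
                              h[i, j-1] + h[i, j+1] + q[i, j])

print(h[10, 10])         # drawdown is visible at the well node
```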

  5. HPC Aspects of Variable-Resolution Global Climate Modeling using a Multi-scale Convection Parameterization

    EPA Science Inventory

    High performance computing (HPC) requirements for the new generation of variable grid resolution (VGR) global climate models differ from those of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...

  6. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.
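    The ghost-cell exchange that Schnek automates over MPI can be illustrated serially. The sketch below is not Schnek's API; it only shows the halo pattern: each subdomain keeps one extra cell per side, filled from the neighbouring subdomain's edge interior cell before a stencil update.

```python
import numpy as np

# Serial illustration of the ghost-cell (halo) pattern that libraries like
# Schnek automate over MPI. NOT Schnek's API; all names are hypothetical.
nx, ghost = 8, 1
left = np.arange(nx + 2 * ghost, dtype=float)         # subdomain 0 with halo
right = 100 + np.arange(nx + 2 * ghost, dtype=float)  # subdomain 1 with halo

# Exchange: each subdomain's edge interior cell fills the neighbour's ghost cell
right[0] = left[nx]        # left's rightmost interior -> right's left ghost
left[nx + 1] = right[1]    # right's leftmost interior -> left's right ghost

# After the exchange, a 3-point stencil can be applied to every interior cell
lap = left[2:nx + 2] - 2 * left[1:nx + 1] + left[0:nx]
print(right[0], left[nx + 1])
```

    In the parallel case, the two assignments become a pair of MPI send/receive operations between neighbouring ranks.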

  7. Scheduling multicore workload on shared multipurpose clusters

    NASA Astrophysics Data System (ADS)

    Templon, J. A.; Acosta-Silva, C.; Flix Molina, J.; Forti, A. C.; Pérez-Calero Yzquierdo, A.; Starink, R.

    2015-12-01

    With the advent of workloads containing explicit requests for multiple cores in a single grid job, grid sites faced a new set of challenges in workload scheduling. The most common batch schedulers deployed at HEP computing sites do a poor job at multicore scheduling when using only the native capabilities of those schedulers. This paper describes how efficient multicore scheduling was achieved at the sites the authors represent, by implementing dynamically-sized multicore partitions via a minimalistic addition to the Torque/Maui batch system already in use at those sites. The paper further includes example results from use of the system in production, as well as measurements on the dependence of performance (especially the ramp-up in throughput for multicore jobs) on node size and job size.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-Rovira, I., E-mail: immamartinez@gmail.com; Prezado, Y.; Fois, G.

    Purpose: Spatial fractionation of the dose has proven to be a promising approach to increase the tolerance of healthy tissue, which is the main limitation of radiotherapy. A good example of this is GRID therapy, which has been successfully used in the management of large tumors with low toxicity. The aim of this work is to explore new avenues using nonconventional sources: GRID therapy using kilovoltage (synchrotron) x-rays, very high-energy electrons, and proton GRID therapy. They share in common the use of the smallest possible grid sizes in order to exploit dose-volume effects. Methods: Monte Carlo simulations (PENELOPE/PENEASY and GEANT4/GATE codes) were used to study the dose distributions resulting from irradiations in different configurations of the three proposed techniques. As figures of merit, percentage (peak and valley) depth-dose curves, penumbras, and central peak-to-valley dose ratios (PVDR) were evaluated. As shown in previous biological experiments, high PVDR values are required for healthy-tissue sparing, while superior tumor control may benefit from a lower PVDR. Results: High PVDR values were obtained in the healthy tissue for the three cases studied. When low-energy photons are used, the treatment of deep-seated tumors can still be performed with submillimetric grid sizes. Superior PVDR values were reached with the other two approaches in the first centimeters along the beam path. The use of protons has the advantage of delivering a uniform dose distribution in the tumor, while healthy tissue benefits from the spatial fractionation of the dose. In the three evaluated techniques, there is a net reduction in penumbra with respect to radiosurgery. Conclusions: The high PVDR values in the healthy tissue and the use of small grid sizes in the three presented approaches might constitute a promising alternative for treating tumors with spatially fractionated radiotherapy techniques. The dosimetric results presented here support the interest in performing radiobiology experiments to evaluate these new avenues.
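    The central figure of merit above, the peak-to-valley dose ratio (PVDR), is simple to compute from a lateral dose profile. A hedged sketch with a purely synthetic profile (the pitch and dose levels are made up):

```python
import numpy as np

# PVDR from a synthetic 1-D lateral dose profile of a spatially
# fractionated (GRID) field; all numbers are illustrative.
x = np.linspace(0.0, 4.0, 801)        # lateral position (mm)
pitch = 1.0                           # hypothetical grid pitch (mm)
dose = 0.1 + 0.9 * (np.cos(2 * np.pi * x / pitch) > 0.5)   # peaks and valleys

peak = dose.max()       # dose in the beam paths
valley = dose.min()     # dose between beams (healthy-tissue region)
pvdr = peak / valley
print(pvdr)             # -> 10.0 for this synthetic profile
```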

  9. Hole-ness of point clouds

    NASA Astrophysics Data System (ADS)

    Gronz, Oliver; Seeger, Manuel; Klaes, Björn; Casper, Markus C.; Ries, Johannes B.

    2015-04-01

    Accurate and dense 3D models of soil surfaces can be used in various ways: as initial shapes for erosion models, as benchmark shapes for erosion model outputs, or to derive metrics such as random roughness. One easy and low-cost method to produce these models is structure from motion (SfM). Using this method, two questions arise: Does the soil moisture, which changes the colour, albedo, and reflectivity of the soil, influence the model quality? And how can the model quality be evaluated? To answer these questions, a suitable data set has been produced: soil was placed on a tray and areas with different roughness structures were formed. For three moisture states - dry, medium, saturated - and two lighting conditions - direct and indirect - sets of high-resolution images at the same camera positions were taken. From the six image sets, 3D point clouds were produced using VisualSfM. Visual inspection of the 3D models showed that all models have areas where holes of different sizes occur, but determining a model's quality by visual inspection is obviously a subjective task. One typical approach to evaluate model quality objectively is to estimate the point density on a regular, two-dimensional grid: the number of 3D points in each grid cell projected onto a plane is calculated. This works well for surfaces that do not show vertical structures. Along vertical structures, many points are projected onto the same grid cell, and thus the point density depends more on the shape of the surface than on the quality of the model. Another approach uses the points resulting from Poisson surface reconstruction. One of this algorithm's properties is the filling of holes: new points are interpolated inside the holes.
    Using the original 3D point cloud and the interpolated Poisson point set, two analyses have been performed: For all Poisson points, the distance to the closest original point cloud member has been calculated. For the resulting set of distances, histograms have been produced that show the distribution of point distances. As the Poisson points also make up a connected mesh, the size and distribution of single holes can also be estimated by labeling Poisson points that belong to the same hole: each hole gets a specific number. Afterwards, the area of the mesh formed by each set of Poisson hole points can be calculated. The result is a set of distinctive holes and their sizes. The two approaches showed that the hole-ness of the point cloud depends on the soil moisture and hence the reflectivity: the distance distribution of the model of the saturated soil shows the smallest number of large distances, the medium state shows more large distances, and the dry model shows the largest distances. Models resulting from indirect lighting are better than those resulting from direct light for all moisture states.
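    The first analysis above - the distance from every Poisson point to its nearest original point - can be sketched directly. The point sets below are synthetic (a dense "surface" cloud plus a few far-away points standing in for hole fill), and a brute-force distance matrix is used for clarity; a real cloud would use a k-d tree.

```python
import numpy as np

rng = np.random.default_rng(1)
original = rng.random((500, 3))                  # dense original cloud (synthetic)
poisson = np.vstack([rng.random((50, 3)),        # Poisson points near the "surface"
                     rng.random((5, 3)) + 5.0])  # points interpolated inside a hole

# Distance from each Poisson point to its nearest original point
d = np.linalg.norm(poisson[:, None, :] - original[None, :, :], axis=2)
nearest = d.min(axis=1)

# Large nearest-neighbour distances flag hole regions (threshold hypothetical)
hole_points = nearest > 1.0
print(hole_points.sum())    # -> 5: exactly the far-away points are flagged
```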

  10. Grid convergence errors in hemodynamic solution of patient-specific cerebral aneurysms.

    PubMed

    Hodis, Simona; Uthamaraj, Susheil; Smith, Andrea L; Dennis, Kendall D; Kallmes, David F; Dragomir-Daescu, Dan

    2012-11-15

    Computational fluid dynamics (CFD) has become a cutting-edge tool for investigating hemodynamic dysfunctions in the body. It has the potential to help physicians quantify in more detail phenomena that are difficult to capture with in vivo imaging techniques. CFD simulations in anatomically realistic geometries pose challenges in generating accurate solutions because of the grid distortion that may occur when the grid is aligned with complex geometries. In addition, results obtained with computational methods should be trusted only after the solution has been verified on multiple high-quality grids. The objective of this study was to present a comprehensive solution verification of the intra-aneurysmal flow results obtained on different morphologies of patient-specific cerebral aneurysms. We chose five patient-specific brain aneurysm models with different dome morphologies and estimated the grid convergence errors for each model. The grid convergence errors were estimated with respect to an extrapolated solution based on the Richardson extrapolation method, which accounts for the degree of grid refinement. For four of the five models, calculated velocity, pressure, and wall shear stress values at six different spatial locations converged monotonically, with maximum uncertainty magnitudes ranging from 12% to 16% on the finest grids. Because of the geometric complexity of the fifth model, its grid convergence errors showed oscillatory behavior; therefore, each patient-specific model requires its own grid convergence study to establish the accuracy of the analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
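    The Richardson-extrapolation estimate used in the study works from solutions on three systematically refined grids. A minimal sketch with made-up values for a point quantity (r is the grid refinement ratio):

```python
import numpy as np

# Richardson-extrapolation estimate of the grid-converged value and the
# observed order of accuracy from three systematically refined grids.
# The solution values below are made up for illustration.
f_coarse, f_medium, f_fine = 2.512, 2.440, 2.424   # e.g. velocity at a point
r = 2.0                                             # grid refinement ratio

# observed order of convergence p, then the extrapolated "exact" solution
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)

# relative error of the finest grid against the extrapolated value
err_fine = abs((f_fine - f_exact) / f_exact)
print(p, f_exact, err_fine)
```

    The extrapolated value plays the role of the grid-converged solution against which each grid's uncertainty is quantified; oscillatory convergence, as in the fifth model, makes the logarithm's argument negative and invalidates this estimate, which is why that model needed its own study.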

  11. Examples of grid generation with implicitly specified surfaces using GridPro (TM)/az3000. 1: Filleted multi-tube configurations

    NASA Technical Reports Server (NTRS)

    Cheng, Zheming; Eiseman, Peter R.

    1995-01-01

    With examples, we illustrate how implicitly specified surfaces can be used for grid generation with GridPro/az3000. The particular examples address two questions: (1) How do you model intersecting tubes with fillets? and (2) How do you generate grids inside the intersected tubes? The implication is much more general. With the results in a forthcoming paper which develops an easy-to-follow procedure for implicit surface modeling, we provide a powerful means for rapid prototyping in grid generation.

  12. A comparative study of turbulence models for overset grids

    NASA Technical Reports Server (NTRS)

    Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.

    1992-01-01

    The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.

  13. Impact of Considering 110 kV Grid Structures on the Congestion Management in the German Transmission Grid

    NASA Astrophysics Data System (ADS)

    Hoffrichter, André; Barrios, Hans; Massmann, Janek; Venkataramanachar, Bhavasagar; Schnettler, Armin

    2018-02-01

    The structural changes in the European energy system lead to an increase of renewable energy sources that are primarily connected to the distribution grid. Hence, the stationary analysis of the transmission grid and the regionalization of generation capacities are strongly influenced by subordinate grid structures. To quantify the impact on congestion management in the German transmission grid, a 110 kV grid model is derived using publicly available data from OpenStreetMap and integrated into an existing model of the European transmission grid. Power flow and redispatch simulations are performed for three different regionalization methods and grid configurations. The results show a significant impact of the 110 kV system and reveal an overestimation of power flows in the transmission grid when subordinate grids are neglected. The redispatch volume in Germany needed to resolve bottlenecks in case of N-1 contingencies decreases by 38% when the 110 kV grid is considered.

  14. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wei-Cheng; Yang, Zhaoqing; Wang, Taiping

    This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization using the unstructured-grid SWAN model coupled with a nested-grid WWIII model. The flexibility of models of various spatial resolutions and the effects of open-boundary conditions simulated by a nested-grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured-grid modeling approach: flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission, in comparison to the observed data in 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the model skill of the ST2 physics package for predicting wave power density for large waves, which is important for wave resource assessment, device load calculation, and risk management. In addition, bivariate distributions show that the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than that with the ST2 physics package. This study demonstrated that the unstructured-grid wave modeling approach, driven by the nested-grid regional WWIII outputs with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10^2 km).

  15. Dependence of Hurricane intensity and structures on vertical resolution and time-step size

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Lin; Wang, Xiaoxue

    2003-09-01

    In view of the growing interest in the explicit modeling of clouds and precipitation, the effects of varying vertical resolution and time-step size on the 72-h explicit simulation of Hurricane Andrew (1992) are studied using the Pennsylvania State University/National Center for Atmospheric Research (PSU/NCAR) mesoscale model (i.e., MM5) with a finest grid size of 6 km. It is shown that changing the vertical resolution and time-step size has significant effects on hurricane intensity and inner-core clouds/precipitation, but little impact on the hurricane track. In general, increasing vertical resolution tends to produce a deeper storm with lower central pressure, stronger three-dimensional winds, and more precipitation. Similar effects, but to a lesser extent, occur when the time-step size is reduced. It is found that increasing the low-level vertical resolution is more efficient in intensifying a hurricane, whereas changing the upper-level vertical resolution has little impact on hurricane intensity. Moreover, the use of a thicker surface layer tends to produce higher maximum surface winds. It is concluded that the use of higher vertical resolution, a thin surface layer, and smaller time-step sizes, along with higher horizontal resolution, is desirable for more realistic modeling of the intensity, inner-core structures, and evolution of tropical storms as well as other convectively driven weather systems.

  16. MODFLOW-LGR-Modifications to the streamflow-routing package (SFR2) to route streamflow through locally refined grids

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    This report documents modifications to the Streamflow-Routing Package (SFR2) to route streamflow through grids constructed using the multiple-refined-areas capability of shared node Local Grid Refinement (LGR) of MODFLOW-2005. MODFLOW-2005 is the U.S. Geological Survey modular, three-dimensional, finite-difference groundwater-flow model. LGR provides the capability to simulate groundwater flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. Compatibility with SFR2 allows for streamflow routing across grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems.

  17. Semantic web data warehousing for caGrid.

    PubMed

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-10-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for the integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.

  18. The Mass-loss Return from Evolved Stars to the Large Magellanic Cloud. IV. Construction and Validation of a Grid of Models for Oxygen-rich AGB Stars, Red Supergiants, and Extreme AGB Stars

    NASA Astrophysics Data System (ADS)

    Sargent, Benjamin A.; Srinivasan, S.; Meixner, M.

    2011-02-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the "Grid of Red supergiant and Asymptotic giant branch star ModelS" (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10^3 to 10^6 L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10^-4 to 26. From an initial grid of ~1200 2Dust models, we create a larger grid of ~69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.

  19. Effect of grid transparency and finite collector size on determining ion temperature and density by the retarding potential analyzer

    NASA Technical Reports Server (NTRS)

    Troy, B. E., Jr.; Maier, E. J.

    1975-01-01

    The effects of grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. Current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimal.
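    The standard analysis referred to above fits temperature from the shape of the current-voltage curve. A much-simplified sketch, assuming a stationary (non-drifting) Maxwellian so that the retarding-region current is I(V) = I0 exp(-qV/kT); a real RPA analysis also includes drift velocity, grid transparency, and geometry factors, which are exactly the effects the study examines.

```python
import numpy as np

# Idealized retarding-potential curve for a stationary Maxwellian ion
# population: I(V) = I0 * exp(-q V / k T) in the retarding region.
# I0 and T_true are hypothetical values for illustration.
q = 1.602e-19        # ion charge (C)
k = 1.381e-23        # Boltzmann constant (J/K)
T_true = 1000.0      # ion temperature (K)
I0 = 1e-9            # saturation current (A)

V = np.linspace(0.0, 0.5, 50)                 # retarding voltage sweep (V)
I = I0 * np.exp(-q * V / (k * T_true))

# Recover T from the log-linear slope, as in the standard analysis
slope = np.polyfit(V, np.log(I), 1)[0]        # slope = -q / (k T)
T_fit = -q / (k * slope)
print(T_fit)
```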

  20. 3D data processing with advanced computer graphics tools

    NASA Astrophysics Data System (ADS)

    Zhang, Song; Ekstrand, Laura; Grieve, Taylor; Eisenmann, David J.; Chumbley, L. Scott

    2012-09-01

    Often, the 3-D raw data coming from an optical profilometer contain spiky noise and an irregular grid, which make them difficult to analyze and difficult to store because of their enormously large size. This paper addresses these two issues by substantially reducing the spiky noise of the 3-D raw data and by rapidly re-sampling the raw data into regular grids at any pixel size and any orientation with advanced computer graphics tools. Experimental results are presented to demonstrate the effectiveness of the proposed approach.
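    The two steps described above, spike removal and re-sampling onto a regular grid, can be sketched with simple cell averaging. The graphics-tool approach of the paper is more sophisticated; the point set, spike threshold, and pixel size below are all hypothetical.

```python
import numpy as np

# Sketch: filter spiky outliers, then re-sample scattered (x, y, z) points
# onto a regular grid by per-cell averaging. All values are synthetic.
rng = np.random.default_rng(2)
pts = rng.random((10000, 3))            # scattered x, y in [0,1), z values
pts[::500, 2] += 50.0                   # inject spiky outliers

z = pts[:, 2]
keep = np.abs(z - np.median(z)) < 5.0   # crude spike rejection (threshold hypothetical)
pts = pts[keep]

px = 0.1                                 # target pixel size
ix = (pts[:, 0] / px).astype(int)        # regular-grid cell indices
iy = (pts[:, 1] / px).astype(int)
grid_sum = np.zeros((10, 10))
grid_cnt = np.zeros((10, 10))
np.add.at(grid_sum, (iy, ix), pts[:, 2])
np.add.at(grid_cnt, (iy, ix), 1)
grid = grid_sum / np.maximum(grid_cnt, 1)   # mean z per regular cell
print(grid.shape)
```

    Cell averaging also acts as a mild low-pass filter; re-sampling at an arbitrary orientation would additionally rotate the (x, y) coordinates before binning.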
