Sample records for model grid box

  1. A scale-invariant cellular-automata model for distributed seismicity

    NASA Technical Reports Server (NTRS)

    Barriere, Benoit; Turcotte, Donald L.

    1991-01-01

    In the standard cellular-automata model for a fault, an element of stress is randomly added to a grid of boxes until a box has four elements; these are then redistributed to the adjacent boxes on the grid. The redistribution can result in one or more of these boxes having four or more elements, in which case further redistributions are required. On average, added elements are lost from the edges of the grid. The model is modified so that the boxes have a scale-invariant distribution of sizes. The objective is to model a scale-invariant distribution of fault sizes. When a redistribution from a box occurs it is equivalent to a characteristic earthquake on the fault. A redistribution from a small box (a foreshock) can trigger an instability in a large box (the main shock). A redistribution from a large box always triggers many instabilities in the smaller boxes (aftershocks). The frequency-size statistics for both main shocks and aftershocks satisfy the Gutenberg-Richter relation with b = 0.835 for main shocks and b = 0.635 for aftershocks. Model foreshocks occur 28 percent of the time.
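
    The standard automaton described in the opening sentences can be sketched in a few lines. This is a minimal sketch of the equal-size-box case only; the scale-invariant box sizes that are the point of the paper are not reproduced, and the grid size and random seed are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)      # illustrative seed
    N = 32                              # illustrative grid size
    grid = np.zeros((N, N), dtype=int)  # stress elements per box

    def add_and_topple(grid):
        """Add one stress element at a random box, then redistribute (topple)
        every box holding four or more elements; elements pushed past the grid
        edge are lost. Returns the number of redistributions (event size)."""
        n = grid.shape[0]
        i, j = rng.integers(0, n, size=2)
        grid[i, j] += 1
        events = 0
        unstable = [(i, j)] if grid[i, j] >= 4 else []
        while unstable:
            i, j = unstable.pop()
            if grid[i, j] < 4:
                continue
            grid[i, j] -= 4
            events += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:   # elements leaving the edge are lost
                    grid[ni, nj] += 1
                    if grid[ni, nj] >= 4:
                        unstable.append((ni, nj))
        return events

    sizes = [add_and_topple(grid) for _ in range(50000)]   # event-size statistics
    ```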

  2. Static aeroelastic analysis of wings using Euler/Navier-Stokes equations coupled with improved wing-box finite element structures

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; MacMurdy, Dale E.; Kapania, Rakesh K.

    1994-01-01

    Strong interactions between flow about an aircraft wing and the wing structure can result in aeroelastic phenomena which significantly impact aircraft performance. Time-accurate methods for solving the unsteady Navier-Stokes equations have matured to the point where reliable results can be obtained with reasonable computational costs for complex non-linear flows with shock waves, vortices and separations. The ability to combine such a flow solver with a general finite element structural model is key to an aeroelastic analysis in these flows. Earlier work involved time-accurate integration of modal structural models based on plate elements. A finite element model was developed to handle three-dimensional wing boxes, and incorporated into the flow solver without the need for modal analysis. Static condensation is performed on the structural model to reduce the structural degrees of freedom for the aeroelastic analysis. Direct incorporation of the finite element wing-box structural model with the flow solver requires finding adequate methods for transferring aerodynamic pressures to the structural grid and returning deflections to the aerodynamic grid. Several schemes were explored for handling the grid-to-grid transfer of information. The complex, built-up nature of the wing-box complicated this transfer. Aeroelastic calculations for a sample wing in transonic flow comparing various simple transfer schemes are presented and discussed.

  3. Horizontal Residual Mean Circulation: Evaluation of Spatial Correlations in Coarse Resolution Ocean Models

    NASA Astrophysics Data System (ADS)

    Li, Y.; McDougall, T. J.

    2016-02-01

    Coarse resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale. Some researchers have shown that these spatial correlations play a role in the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and are ready to be evaluated using model data directly. In this study, data from a high resolution ocean model have been used to estimate the accuracy of this HRM approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine grid scale. The transport calculated on the coarse grid is then compared to the transport on the original high resolution grid accumulated over a corresponding number of grid boxes. The preliminary results show that the estimates on coarse resolution grids roughly match the corresponding transports on high resolution grids.

  4. Improvements in sub-grid, microphysics averages using quadrature based approaches

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Debusschere, B.; Larson, V. E.

    2013-12-01

    Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
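
    A hedged sketch of the alternative described: a grid-box average of a microphysical rate computed with deterministic Gauss-Hermite quadrature points and compared against random sampling. The lognormal cloud-water PDF, the thresholded Kessler-type rate, and all parameter values are illustrative assumptions, not taken from the study.

    ```python
    import numpy as np

    # Kessler-type autoconversion with a threshold; k and q_crit are illustrative.
    k, q_crit = 1.0e-3, 5.0e-4                      # s^-1, kg/kg

    def autoconversion(q):
        return k * np.maximum(q - q_crit, 0.0)

    # Assumed sub-grid PDF of cloud water in one grid box: lognormal.
    mu, sigma = np.log(4.0e-4), 0.8

    # Monte Carlo estimate of the grid-box mean rate (random sample points).
    rng = np.random.default_rng(1)
    q_mc = rng.lognormal(mu, sigma, size=100)
    mean_mc = autoconversion(q_mc).mean()

    # Gauss-Hermite quadrature estimate (deterministic sample points):
    # E[f(q)] with q = exp(mu + sigma*sqrt(2)*x) and weight exp(-x^2).
    x, w = np.polynomial.hermite.hermgauss(8)
    q_gh = np.exp(mu + sigma * np.sqrt(2.0) * x)
    mean_gh = (w * autoconversion(q_gh)).sum() / np.sqrt(np.pi)

    print(mean_mc, mean_gh)   # 8 quadrature points vs 100 random samples
    ```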

  5. NCAR global model topography generation software for unstructured grids

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.

    2015-06-01

    It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.

  6. Grid systems for Earth radiation budget experiment applications

    NASA Technical Reports Server (NTRS)

    Brooks, D. R.

    1981-01-01

    Spatial coordinate transformations are developed for several global grid systems of interest to the Earth Radiation Budget Experiment. The grid boxes are defined in terms of a regional identifier and longitude-latitude indexes. The transformations associate a longitude-latitude location with a particular grid box. The reverse transformations identify the center location of a given grid box. Transformations are given to relate the rotating (Earth-based) grid systems to solar position expressed in an inertial (nonrotating) coordinate system. The FORTRAN implementations of the transformations are given, along with sample input and output.
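
    The forward and reverse transformations between an Earth location and a grid-box index can be illustrated for a plain latitude-longitude grid. The 2.5-degree spacing and the indexing convention below are assumptions for illustration only; they are not the report's regional grid definitions or its FORTRAN routines.

    ```python
    DLAT = DLON = 2.5   # assumed grid spacing in degrees (illustrative)

    def box_index(lat, lon):
        """Associate a longitude-latitude location with (row, col) grid-box
        indexes; rows count northward from -90 deg, columns eastward from 0 deg."""
        lon = lon % 360.0
        row = min(int((lat + 90.0) / DLAT), int(180.0 / DLAT) - 1)
        col = int(lon / DLON)
        return row, col

    def box_center(row, col):
        """Reverse transformation: center latitude and longitude of a grid box."""
        lat = -90.0 + (row + 0.5) * DLAT
        lon = (col + 0.5) * DLON
        return lat, lon

    # Sample input and output
    print(box_index(37.4, -122.1))   # -> (50, 95)
    print(box_center(50, 95))        # -> (36.25, 238.75)
    ```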

  7. Numerical computation of complex multi-body Navier-Stokes flows with applications for the integrated Space Shuttle launch vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1993-01-01

    An enhanced grid system for the Space Shuttle Orbiter was built by integrating CAD definitions from several sources and then generating the surface and volume grids. The new grid system contains geometric components not modeled previously plus significant enhancements to geometry that was modeled in the old grid system. The new orbiter grids were then integrated with new grids for the rest of the launch vehicle. Enhancements were made to the hyperbolic grid generator HYPGEN, and new tools were developed for grid projection, manipulation, and modification; for Cartesian box grid and far-field grid generation; and for post-processing of flow solver data.

  8. Using High-Resolution Forward Model Simulations of Ideal Atmospheric Tracers to Assess the Spatial Information Content of Inverse CO2 Flux Estimates

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Nielsen, J. Eric

    2011-01-01

    Attribution of observed atmospheric carbon concentrations to emissions on the country, state or city level is often inferred using "inversion" techniques. Such computations are often performed using advanced mathematical techniques, such as synthesis inversion or four-dimensional variational analysis, that invoke tracing observed atmospheric concentrations backwards through a transport model to a source region. It is, to date, not well understood how well such techniques can represent fine spatial (and temporal) structure in the inverted flux fields. This question is addressed using forward-model computations with idealized tracers emitted at the surface in a large number of grid boxes over selected regions and examining how distinctly these emitted tracers can be detected downstream. Initial results show that tracers emitted in half-degree grid boxes over a large region of the Eastern USA cannot be distinguished from each other, even at short distances over the Atlantic Ocean, when they are emitted in grid boxes separated by less than five degrees of latitude - especially when only total-column observations are available. A large number of forward model simulations, with varying meteorological conditions, are used to assess how distinctly three types of observations (total column, upper tropospheric column, and surface mixing ratio) can separate emissions from different sources. Inferences for inverse modeling and source attribution will be drawn.

  9. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that Root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  10. A regional analysis of cloudy mean spherical albedo over the marine stratocumulus region and the tropical Atlantic Ocean. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ginger, Kathryn M.

    1993-01-01

    Since clouds are the largest variable in Earth's radiation budget, it is critical to determine both the spatial and temporal characteristics of their radiative properties. The relationships between cloud properties and cloud fraction are studied in order to supplement grid scale parameterizations. The satellite data used are from three-hourly ISCCP (International Satellite Cloud Climatology Project) and monthly ERBE (Earth Radiation Budget Experiment) data on a 2.5 deg x 2.5 deg latitude-longitude grid. Mean cloud spherical albedo, the mean optical depth distribution, and cloud fraction are examined and compared off the coast of California and the mid-tropical Atlantic for July 1987 and 1988. Individual grid boxes and spatial averages over several grid boxes are correlated to Coakley's theory of reflection for uniform and broken layered cloud and to Kedem et al.'s finding that rainfall volume and fractional area of rain in convective systems are linearly related. Kedem's hypothesis can be expressed in terms of cloud properties. That is, the total volume of liquid in a box is a linear function of cloud fraction. Results for the marine stratocumulus regime indicate that albedo is often invariant for cloud fractions of 20% to 80%. Coakley's satellite model of small and large clouds with cores (1 km) and edges (100 m) is consistent with this observation. The cores maintain high liquid water concentrations and large droplets while the edges contain low liquid water concentrations and small droplets. Large clouds are just a collection of cores. The mean optical depth (TAU) distributions support the above observation with TAU values of 3.55 to 9.38 favored across all cloud fractions. From these results, a method based upon Kedem et al.'s theory is proposed to separate the cloud fraction and liquid water path (LWP) calculations in a general circulation model (GCM). In terms of spatial averaging, a linear relationship between albedo and cloud fraction is observed. For tropical locations outside the Intertropical Convergence Zone (ITCZ), results of cloud fraction and albedo spatial averaging followed that of the stratus boxes containing few overcast scenes. Both the ideas of Coakley and Kedem et al. apply. Within the ITCZ, the grid boxes tended to have the same statistical properties as stratus boxes containing many overcast scenes. Because different dynamical forcing mechanisms are present, it is difficult to devise a method for determining subgrid scale variations. Neither the theory proposed by Kedem et al. nor that of Coakley works well for the boxes with numerous overcast scenes.

  11. A Regional Analysis of Cloudy Mean Spherical Albedo over the Marine Stratocumulus Region and the Tropical Atlantic Ocean

    NASA Technical Reports Server (NTRS)

    Ginger, Kathryn M.

    1993-01-01

    Since clouds are the largest variable in Earth's radiation budget, it is critical to determine both the spatial and temporal characteristics of their radiative properties. This study examines the relationships between cloud properties and cloud fraction in order to supplement grid scale parameterizations. The satellite data used in this study are from three-hourly ISCCP (International Satellite Cloud Climatology Project) and monthly ERBE (Earth Radiation Budget Experiment) data on a 2.5 deg x 2.5 deg latitude-longitude grid. Mean cloud spherical albedo, the mean optical depth distribution and cloud fraction are examined and compared off the coast of California and the mid-tropical Atlantic for July 1987 and 1988. Individual grid boxes and spatial averages over several grid boxes are correlated to Coakley's (1991) theory of reflection for uniform and broken layered cloud and to Kedem et al.'s (1990) finding that rainfall volume and fractional area of rain in convective systems are linearly related. Kedem's hypothesis can be expressed in terms of cloud properties. That is, the total volume of liquid in a box is a linear function of cloud fraction. Results for the marine stratocumulus regime indicate that albedo is often invariant for cloud fractions of 20% to 80%. Coakley's satellite model of small and large clouds with cores (1 km) and edges (100 m) is consistent with this observation. The cores maintain high liquid water concentrations and large droplets while the edges contain low liquid water concentrations and small droplets. Large clouds are just a collection of cores. The mean optical depth (TAU) distributions support the above observation with TAU values of 3.55 to 9.38 favored across all cloud fractions. From these results, a method based upon Kedem et al.'s theory is proposed to separate the cloud fraction and liquid water path (LWP) calculations in a general circulation model (GCM). In terms of spatial averaging, a linear relationship between albedo and cloud fraction is observed. For tropical locations outside the Intertropical Convergence Zone (ITCZ), results of cloud fraction and albedo spatial averaging followed that of the stratus boxes containing few overcast scenes. Both the ideas of Coakley and Kedem et al. apply. Within the ITCZ, the grid boxes tended to have the same statistical properties as stratus boxes containing many overcast scenes. Because different dynamical forcing mechanisms are present, it is difficult to devise a method for determining subgrid scale variations. Neither the theory proposed by Kedem et al. nor that of Coakley works well for the boxes with numerous overcast scenes.

  12. Aerostructural Level Set Topology Optimization for a Common Research Model Wing

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2014-01-01

    The purpose of this work is to use level set topology optimization to improve the design of a representative wing box structure for the NASA common research model. The objective is to minimize the total compliance of the structure under aerodynamic and body force loading, where the aerodynamic loading is coupled to the structural deformation. A taxi bump case was also considered, where only body force loads were applied. The trim condition that aerodynamic lift must balance the total weight of the aircraft is enforced by allowing the root angle of attack to change. The level set optimization method is implemented on an unstructured three-dimensional grid, so that the method can optimize a wing box with arbitrary geometry. Fast matching and upwind schemes are developed for an unstructured grid, which make the level set method robust and efficient. The adjoint method is used to obtain the coupled shape sensitivities required to perform aerostructural optimization of the wing box structure.

  13. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid-boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and 3-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implication of these results on future aerosol model development.

  14. Developing Regional Modeling Techniques Applicable for Simulating Future Climate Conditions in the Carolinas

    EPA Science Inventory

    Global climate models (GCMs) are currently used to obtain information about future changes in the large-scale climate. However, such simulations are typically done at coarse spatial resolutions, with model grid boxes on the order of 100 km on a horizontal side. Therefore, techniq...

  15. A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    NASA Technical Reports Server (NTRS)

    Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.

    1991-01-01

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven-cavity, a backward-facing step, and a sudden expansion/contraction.
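
    For readers unfamiliar with the multigrid idea behind the algorithm, a generic V-cycle for a 1-D Poisson model problem is sketched below. It uses damped Jacobi smoothing rather than the robust box-based smoother of the paper, and it omits the staggered-grid formulation and the dynamic mesh refinement entirely; it is a minimal illustration, not the reported algorithm.

    ```python
    import numpy as np

    def smooth(u, f, h, n_sweeps=3, omega=2.0 / 3.0):
        """Damped Jacobi sweeps for -u'' = f on a uniform 1-D grid."""
        for _ in range(n_sweeps):
            u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def restrict(r):   # full weighting, fine -> coarse
        return np.concatenate(([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

    def prolong(e):    # linear interpolation, coarse -> fine
        fine = np.zeros(2 * (e.size - 1) + 1)
        fine[::2] = e
        fine[1::2] = 0.5 * (e[:-1] + e[1:])
        return fine

    def v_cycle(u, f, h):
        """One V-cycle for -u'' = f with homogeneous Dirichlet boundaries."""
        if u.size <= 3:                            # coarsest grid: solve directly
            u[1] = 0.5 * h * h * f[1]
            return u
        u = smooth(u, f, h)                        # pre-smoothing
        r_coarse = restrict(residual(u, f, h))     # restrict the residual
        e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2.0 * h)
        u += prolong(e_coarse)                     # coarse-grid correction
        return smooth(u, f, h)                     # post-smoothing

    n = 129                                        # 2**k + 1 grid points
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = np.pi**2 * np.sin(np.pi * x)               # exact solution: sin(pi*x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print(np.max(np.abs(u - np.sin(np.pi * x))))   # error near discretization level
    ```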

  16. A locally refined rectangular grid finite element method - Application to computational fluid dynamics and computational physics

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1991-01-01

    The present FEM technique addresses both linear and nonlinear boundary value problems encountered in computational physics by handling general three-dimensional regions, boundary conditions, and material properties. The box finite elements used are defined by a Cartesian grid independent of the boundary definition, and local refinements proceed by dividing a given box element into eight subelements. Discretization employs trilinear approximations on the box elements; special element stiffness matrices are included for boxes cut by any boundary surface. Illustrative results are presented for representative aerodynamics problems involving up to 400,000 elements.

  17. Continental-scale river flow in climate models

    NASA Technical Reports Server (NTRS)

    Miller, James R.; Russell, Gary L.; Caliri, Guilherme

    1994-01-01

    The hydrologic cycle is a major part of the global climate system. There is an atmospheric flux of water from the ocean surface to the continents. The cycle is closed by return flow in rivers. In this paper a river routing model is developed to use with grid box climate models for the whole earth. The routing model needs an algorithm for the river mass flow and a river direction file, which has been compiled for 4 deg x 5 deg and 2 deg x 2.5 deg resolutions. River basins are defined by the direction files. The river flow leaving each grid box depends on river and lake mass, downstream distance, and an effective flow speed that depends on topography. As input the routing model uses monthly land source runoff from a 5-yr simulation of the NASA/GISS atmospheric climate model (Hansen et al.). The land source runoff from the 4 deg x 5 deg resolution model is quartered onto a 2 deg x 2.5 deg grid, and the effect of grid resolution is examined. Monthly flow at the mouth of the world's major rivers is compared with observations, and a global error function for river flow is used to evaluate the routing model and its sensitivity to physical parameters. Three basinwide parameters are introduced: the river length weighted by source runoff, the turnover rate, and the basinwide speed. Although the values of these parameters depend on the resolution at which the rivers are defined, the values should converge as the grid resolution becomes finer. When the routing scheme described here is coupled with a climate model's source runoff, it provides the basis for closing the hydrologic cycle in coupled atmosphere-ocean models by realistically allowing water to return to the ocean at the correct location and with the proper magnitude and timing.
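
    A minimal sketch of the kind of routing update described, in which the flow leaving each grid box is its river and lake mass times an effective speed divided by the downstream distance, passed to the box given by a direction table. The direction table, masses, speeds, distances, and runoff below are illustrative assumptions, not values from the 4 deg x 5 deg or 2 deg x 2.5 deg direction files.

    ```python
    import numpy as np

    # Illustrative state for a short chain of grid boxes ending at the ocean.
    mass = np.array([2.0e12, 1.5e12, 0.8e12, 0.0])        # river + lake mass, kg
    dist = np.array([450e3, 400e3, 380e3, 1.0])           # downstream distance, m
    speed = np.array([0.35, 0.30, 0.40, 0.0])             # effective flow speed, m/s
    downstream = np.array([1, 2, 3, -1])                  # receiving box (-1 = ocean)

    def route(mass, runoff, dt=86400.0):
        """One routing step: each box exports mass*speed*dt/dist downstream and
        receives upstream outflow plus its local source runoff (kg per step)."""
        outflow = np.minimum(mass * speed * dt / dist, mass)
        new = mass - outflow + runoff
        discharge = 0.0
        for i, j in enumerate(downstream):
            if j >= 0:
                new[j] += outflow[i]
            else:
                discharge += outflow[i]   # water returned to the ocean at the mouth
        return new, discharge

    mass, discharge = route(mass, runoff=np.array([1e10, 5e9, 2e9, 0.0]))
    print(discharge)
    ```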

  18. On the Use and Validation of Mosaic Heterogeneity in Atmospheric Numerical Models

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Atlas, Robert M. (Technical Monitor)

    2001-01-01

    The mosaic land modeling approach allows for the representation of multiple surface types in a single atmospheric general circulation model grid box. The surface types, collectively called 'tiles', correspond to different sets of surface characteristics (e.g. for grass, crop or forest). Typically, the tile space data is averaged to grid space by weighting the tiles with their fractional cover. While grid space data is routinely evaluated, little attention has been given to the tile space data. The present paper explores uses of the tile space surface data in validation with station observations. The results indicate the limitations that the mosaic heterogeneity parameterization has in reproducing variations observed between stations at the Atmospheric Radiation Measurement Southern Great Plains field site.
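
    The averaging from tile space to grid space described here (weighting each tile by its fractional cover) amounts to a one-line sum; the surface types, fractions, and flux values below are illustrative assumptions.

    ```python
    # Tile-space values (e.g., latent heat flux in W m^-2) and fractional cover
    # for one grid box; the surface types and numbers are illustrative only.
    tiles = {
        "grass":  {"fraction": 0.50, "flux": 180.0},
        "crop":   {"fraction": 0.30, "flux": 220.0},
        "forest": {"fraction": 0.20, "flux": 140.0},
    }

    # Grid-space value = sum over tiles of (fractional cover x tile value).
    grid_flux = sum(t["fraction"] * t["flux"] for t in tiles.values())
    print(grid_flux)   # 184.0 W m^-2 for this example
    ```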

  19. Black box multigrid

    NASA Technical Reports Server (NTRS)

    Dendy, J. E., Jr.

    1981-01-01

    The black box multigrid (BOXMG) code, which only needs specification of the matrix problem to apply the multigrid method, was investigated. It is contended that a major problem with the multigrid method is that each new grid configuration requires a major programming effort to develop a code that specifically handles that grid configuration. The SOR and ICCG methods, by contrast, require only specification of the matrix problem, no matter what the grid configuration. It is concluded that BOXMG does everything else necessary to set up the auxiliary coarser problems and achieve a multigrid solution.

  20. Multivariate Spline Algorithms for CAGD

    NASA Technical Reports Server (NTRS)

    Boehm, W.

    1985-01-01

    Two special polyhedra present themselves for the definition of B-splines: a simplex S and a box or parallelepiped B, where the edges of S project into an irregular grid, while the edges of B project into the edges of a regular grid. More general splines may be found by forming linear combinations of these B-splines, where the three-dimensional coefficients are called the spline control points. Univariate splines are simplex splines, where s = 1, whereas splines over a regular triangular grid are box splines, where s = 2. Two simple facts underpin the construction of B-splines: (1) any face of a simplex or a box is again a simplex or box but of lower dimension; and (2) any simplex or box can be easily subdivided into smaller simplices or boxes. The first fact gives a geometric approach to Mansfield-like recursion formulas that express a B-spline in terms of B-splines of lower order, where the coefficients depend on x. By repeated recursion, the B-spline will be expressed as B-splines of order 1, i.e., piecewise constants. In the case of a simplex spline, the second fact gives a so-called insertion algorithm that constructs the new control points if an additional knot is inserted.

  1. A flexible importance sampling method for integrating subgrid processes

    DOE PAGES

    Raut, E. K.; Larson, V. E.

    2016-01-29

    Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). Here, the resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
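
    A hedged sketch of sampling with prescribed category densities: the grid box is split into categories with known area fractions, the modeler chooses what share of the sample points each category receives, and the within-category averages are recombined with the area-fraction weights. The four categories, their toy rate distributions, and the allocation below are illustrative assumptions and do not reproduce SILHS's eight categories or its weighting.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Illustrative categories in one grid box: true area fraction p and a toy
    # within-category distribution of the process rate to be integrated.
    categories = {
        "rain_and_cloud": {"p": 0.10, "draw": lambda n: rng.gamma(2.0, 2.0, n)},
        "rain_no_cloud":  {"p": 0.05, "draw": lambda n: rng.gamma(2.0, 1.0, n)},
        "cloud_no_rain":  {"p": 0.25, "draw": lambda n: rng.gamma(1.0, 0.5, n)},
        "clear":          {"p": 0.60, "draw": lambda n: np.zeros(n)},
    }

    # Modeler-prescribed share of sample points per category: oversample the
    # rare rainy portions, spend few points on clear air.
    allocation = {"rain_and_cloud": 0.4, "rain_no_cloud": 0.3,
                  "cloud_no_rain": 0.2, "clear": 0.1}

    def grid_box_average(n_total=40):
        """Stratified estimate of the grid-box mean rate: draw the prescribed
        number of points in each category, average within the category, and
        weight by the category's true area fraction."""
        mean = 0.0
        for name, cat in categories.items():
            n = max(1, round(allocation[name] * n_total))
            mean += cat["p"] * cat["draw"](n).mean()
        return mean

    print(grid_box_average())
    ```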

  2. Grid and non-grid cells in medial entorhinal cortex represent spatial location and environmental features with complementary coding schemes

    PubMed Central

    Diehl, Geoffrey W.; Hon, Olivia J.; Leutgeb, Stefan; Leutgeb, Jill K.

    2017-01-01

    The medial entorhinal cortex (mEC) has been identified as a hub for spatial information processing by the discovery of grid, border, and head-direction cells. Here we find that in addition to these well-characterized classes, nearly all of the remaining two thirds of mEC cells can be categorized as spatially selective. We refer to these cells as non-grid spatial cells and confirmed that their spatial firing patterns were unrelated to running speed and highly reproducible within the same environment. However, in response to manipulations of environmental features, such as box shape or box color, non-grid spatial cells completely reorganized their spatial firing patterns. At the same time, grid cells retained their spatial alignment and predominantly responded with redistributed firing rates across their grid fields. Thus, mEC contains a joint representation of both spatial and environmental feature content, with specialized cell types showing different types of integrated coding of multimodal information. PMID:28343867

  3. The interpretation of remotely sensed cloud properties from a model parameterization perspective

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN; Wielicki, Bruce A.; Ginger, Kathryn M.

    1994-01-01

    A study has been made of the relationship between mean cloud radiative properties and cloud fraction in stratocumulus cloud systems. The analysis is of several Land Resources Satellite System (LANDSAT) images and three-hourly International Satellite Cloud Climatology Project (ISCCP) C-1 data during daylight hours for two grid boxes covering an area typical of a general circulation model (GCM) grid increment. Cloud properties were inferred from the LANDSAT images using two thresholds and several pixel resolutions ranging from roughly 0.0625 km to 8 km. At the finest resolution, the analysis shows that mean cloud optical depth (or liquid water path) increases somewhat with increasing cloud fraction up to 20% cloud coverage. More striking, however, is the lack of correlation between the two quantities for cloud fractions between roughly 0.2 and 0.8. When the scene is essentially overcast, the mean cloud optical depth tends to be higher. Coarse resolution LANDSAT analysis and the ISCCP 8-km data show a lack of correlation between mean cloud optical depth and cloud fraction for coverage less than about 90%. This study shows that there is perhaps a local mean liquid water path (LWP) associated with partly cloudy areas of stratocumulus clouds. A method has been suggested to use this property to construct the cloud fraction parameterization in a GCM when the model computes a grid-box-mean LWP.

  4. Quantification of marine aerosol subgrid variability and its correlation with clouds based on high-resolution regional modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guangxing; Qian, Yun; Yan, Huiping

    One limitation of most global climate models (GCMs) is that with the horizontal resolutions they typically employ, they cannot resolve the subgrid variability (SGV) of clouds and aerosols, adding extra uncertainties to the aerosol radiative forcing estimation. To inform the development of an aerosol subgrid variability parameterization, here we analyze the aerosol SGV over the southern Pacific Ocean simulated by the high-resolution Weather Research and Forecasting model coupled to Chemistry. We find that within a typical GCM grid, the aerosol mass subgrid standard deviation is 15% of the grid-box mean mass near the surface on a 1 month mean basis. The fraction can increase to 50% in the free troposphere. The relationships between the sea-salt mass concentration, meteorological variables, and sea-salt emission rate are investigated in both the clear and cloudy portion. Under clear-sky conditions, marine aerosol subgrid standard deviation is highly correlated with the standard deviations of vertical velocity, cloud water mixing ratio, and sea-salt emission rates near the surface. It is also strongly connected to the grid box mean aerosol in the free troposphere (between 2 km and 4 km). In the cloudy area, interstitial sea-salt aerosol mass concentrations are smaller, but higher correlation is found between the subgrid standard deviations of aerosol mass and vertical velocity. Additionally, we find that decreasing the model grid resolution can reduce the marine aerosol SGV but strengthen the correlations between the aerosol SGV and the total water mixing ratio (sum of water vapor, cloud liquid, and cloud ice mixing ratios).
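
    The kind of diagnostic reported, a subgrid standard deviation within a GCM-sized box computed from high-resolution output, reduces to block statistics on the fine grid. The random field, grid sizes, and block size below are illustrative assumptions, not the WRF-Chem configuration used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Stand-in for a fine-resolution sea-salt mass field on a 240 x 240 grid;
    # one GCM-sized box is taken to span a 60 x 60 block of fine cells.
    fine = rng.lognormal(mean=0.0, sigma=0.5, size=(240, 240))
    block = 60

    # Group each GCM box's fine cells so their mean and spread can be computed.
    tiled = fine.reshape(240 // block, block, 240 // block, block)
    coarse_mean = tiled.mean(axis=(1, 3))
    subgrid_std = tiled.std(axis=(1, 3))

    # Subgrid variability expressed as a fraction of the grid-box mean.
    print(subgrid_std / coarse_mean)
    ```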

  5. Connecting spatial and temporal scales of tropical precipitation in observations and the MetUM-GA6

    NASA Astrophysics Data System (ADS)

    Martin, Gill M.; Klingaman, Nicholas P.; Moise, Aurel F.

    2017-01-01

    This study analyses tropical rainfall variability (on a range of temporal and spatial scales) in a set of parallel Met Office Unified Model (MetUM) simulations at a range of horizontal resolutions, which are compared with two satellite-derived rainfall datasets. We focus on the shorter scales, i.e. from the native grid and time step of the model through sub-daily to seasonal, since previous studies have paid relatively little attention to sub-daily rainfall variability and how this feeds through to longer scales. We find that the behaviour of the deep convection parametrization in this model on the native grid and time step is largely independent of the grid-box size and time step length over which it operates. There is also little difference in the rainfall variability on larger/longer spatial/temporal scales. Tropical convection in the model on the native grid/time step is spatially and temporally intermittent, producing very large rainfall amounts interspersed with grid boxes/time steps of little or no rain. In contrast, switching off the deep convection parametrization, albeit at an unrealistic resolution for resolving tropical convection, results in very persistent (for limited periods), but very sporadic, rainfall. In both cases, spatial and temporal averaging smoothes out this intermittency. On the ˜ 100 km scale, for oceanic regions, the spectra of 3-hourly and daily mean rainfall in the configurations with parametrized convection agree fairly well with those from satellite-derived rainfall estimates, while at ˜ 10-day timescales the averages are overestimated, indicating a lack of intra-seasonal variability. Over tropical land the results are more varied, but the model often underestimates the daily mean rainfall (partly as a result of a poor diurnal cycle) but still lacks variability on intra-seasonal timescales. Ultimately, such work will shed light on how uncertainties in modelling small-/short-scale processes relate to uncertainty in climate change projections of rainfall distribution and variability, with a view to reducing such uncertainty through improved modelling of small-/short-scale processes.

  6. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property, predicts power law scaling behavior for the second moment statistics of area-averaged rain rate on the averaging length scale L as L right arrow 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

  7. Modeling North Atlantic Nor'easters With Modern Wave Forecast Models

    NASA Astrophysics Data System (ADS)

    Perrie, Will; Toulany, Bechara; Roland, Aron; Dutour-Sikiric, Mathieu; Chen, Changsheng; Beardsley, Robert C.; Qi, Jianhua; Hu, Yongcun; Casey, Michael P.; Shen, Hui

    2018-01-01

    Three state-of-the-art operational wave forecast model systems are implemented on fine-resolution grids for the Northwest Atlantic. These models are: (1) a composite model system consisting of SWAN implemented within WAVEWATCHIII® (the latter is hereafter, WW3) on a nested system of traditional structured grids, (2) an unstructured grid finite-volume wave model denoted "SWAVE," using SWAN physics, and (3) an unstructured grid finite element wind wave model denoted as "WWM" (for "wind wave model") which uses WW3 physics. Models are implemented on grid systems that include relatively large domains to capture the wave energy generated by the storms, as well as including fine-resolution nearshore regions of the southern Gulf of Maine with resolution on the scale of 25 m to simulate areas where inundation and coastal damage have occurred, due to the storms. Storm cases include three intense midlatitude cases: a spring Nor'easter storm in May 2005, the Patriot's Day storm in 2007, and the Boxing Day storm in 2010. Although these wave model systems have comparable overall properties in terms of their performance and skill, it is found that there are differences. Models that use more advanced physics, as presented in recent versions of WW3, tuned to regional characteristics, as in the Gulf of Maine and the Northwest Atlantic, can give enhanced results.

  8. Quantification of effective plant rooting depth: advancing global hydrological modelling

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Donohue, R. J.; McVicar, T.

    2017-12-01

    Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.

  9. Cloud Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell; Einaud, Franco (Technical Monitor)

    2001-01-01

    Numerical cloud models have been developed and applied extensively to study cloud-scale and mesoscale processes during the past four decades. The distinctive aspect of these cloud models is their ability to treat explicitly (or resolve) cloud-scale dynamics. This requires the cloud models to be formulated from the non-hydrostatic equations of motion that explicitly include the vertical acceleration terms since the vertical and horizontal scales of convection are similar. Such models are also necessary in order to allow gravity waves, such as those triggered by clouds, to be resolved explicitly. In contrast, the hydrostatic approximation, usually applied in global or regional models, does not allow such gravity waves to be represented explicitly. In addition, the availability of exponentially increasing computer capabilities has resulted in time integrations increasing from hours to days, domain grid boxes (points) increasing from less than 2000 to more than 2,500,000 grid points with 500 to 1000 m resolution, and 3-D models becoming increasingly prevalent. The cloud resolving model is now at a stage where it can provide reasonably accurate statistical information on the sub-grid, cloud-resolving processes poorly parameterized in climate models and numerical prediction models.

  10. A comparative study on the motion of various objects inside an air tunnel

    NASA Astrophysics Data System (ADS)

    Shibani, Wanis Mustafa E.; Zulkafli, Mohd Fadhli; Basuno, Bambang

    2017-04-01

    This paper presents a comparative study of the movement of various rigid bodies through an air tunnel for both two- and three-dimensional flow problems. The three kinds of objects under investigation are box, ball, and wedge shapes. The investigation was carried out using commercial CFD software, named Fluent, to determine the aerodynamic forces acting on each object as well as to track its movement. The adopted numerical scheme is the time-averaged Navier-Stokes equations with k-ɛ turbulence modeling, solved using the SIMPLE algorithm. A triangular-element grid was used in the 2D case and tetrahedral elements in the 3D case. Grid independence studies were performed for each problem from a coarse to a fine grid. The motion of an object is restricted to one direction only and is found by tracking its center of mass at every time step. The results indicate that the movement of the object increases as the flow moves downstream and that the box has the fastest speed compared to the other two shapes for both the 2D and 3D cases.

  11. Finite volume solution of the compressible boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Loyd, B.; Murman, E. M.

    1986-01-01

    A box-type finite volume discretization is applied to the integral form of the compressible boundary layer equations. Boundary layer scaling is introduced through the grid construction: streamwise grid lines follow eta = y/h = const., where y is the normal coordinate and h(x) is a scale factor proportional to the boundary layer thickness. With this grid, similarity can be applied explicitly to calculate initial conditions. The finite volume method preserves the physical transparency of the integral equations in the discrete approximation. The resulting scheme is accurate, efficient, and conceptually simple. Computations for similar and non-similar flows show excellent agreement with tabulated results, solutions computed with Keller's Box scheme, and experimental data.
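
    The grid construction described, streamwise lines of constant eta = y/h(x), can be sketched directly; the growth law for h(x), the streamwise extent, and the point counts below are illustrative assumptions rather than values from the paper.

    ```python
    import numpy as np

    # Illustrative scale factor h(x): a turbulent flat-plate thickness estimate,
    # h = 0.37 x Re_x^(-1/5), with U/nu = 1e6 1/m assumed for the example.
    x = np.linspace(0.05, 1.0, 40)            # streamwise stations, m
    h = 0.37 * x * (1.0e6 * x) ** (-0.2)      # boundary-layer-thickness scale, m
    eta = np.linspace(0.0, 1.5, 30)           # normal coordinate y/h

    # Streamwise grid lines follow eta = y/h(x) = const.
    Y = np.outer(h, eta)                      # y(x, eta) = h(x) * eta, shape (40, 30)
    X = np.repeat(x[:, None], eta.size, axis=1)
    ```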

  12. Linking Satellite-Derived Fire Counts to Satellite-Derived Weather Data in Fire Prediction Models to Forecast Extreme Fires in Siberia

    NASA Astrophysics Data System (ADS)

    Westberg, David; Soja, Amber; Stackhouse, Paul, Jr.

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Boreal systems contain the largest pool of terrestrial carbon, and Russia holds 2/3 of the global boreal forests. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under climate change scenarios. Meteorological parameters influence fire danger and fire is a catalyst for ecosystem change. Therefore to predict fire weather and ecosystem change, we must understand the factors that influence fire regimes and at what scale these are viable. Our data consists of NASA Langley Research Center (LaRC)-derived fire weather indices (FWI) and National Climatic Data Center (NCDC) surface station-derived FWI on a domain from 50°N-80°N latitude and 70°E-170°W longitude and the fire season from April through October for the years of 1999, 2002, and 2004. Both of these are calculated using the Canadian Forest Service (CFS) FWI, which is based on local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-noon) rainfall. The large-scale (1°) LaRC product uses NASA Goddard Earth Observing System version 4 (GEOS-4) reanalysis and NASA Global Precipitation Climatology Project (GEOS-4/GPCP) data to calculate FWI. CFS Natural Resources Canada uses Geographic Information Systems (GIS) to interpolate NCDC station data and calculate FWI. We compare the LaRC GEOS- 4/GPCP FWI and CFS NCDC FWI based on their fraction of 1° grid boxes that contain satellite-derived fire counts and area burned to the domain total number of 1° grid boxes with a common FWI category (very low to extreme). These are separated by International Geosphere-Biosphere Programme (IGBP) 1°x1° resolution vegetation types to determine and compare fire regimes in each FWI/ecosystem class and to estimate the fraction of each of the 18 IGBP ecosystems burned, which are dependent on the FWI. On days with fire counts, the domain total of 1°x1° grid boxes with and without daily fire counts and area burned are totaled. The fraction of 1° grid boxes with fire counts and area burned to the total number of 1° grid boxes having common FWI category and vegetation type are accumulated, and a daily mean for the burning season is calculated. The mean fire counts and mean area burned plots appear to be well related. The ultimate goal of this research is to assess the viability of large-scale (1°) data to be used to assess fire weather danger and fire regimes, so these data can be confidently used to predict future fire regimes using large-scale fire weather data. Specifically, we related large-scale fire weather, area burned, and the amount of fire-induced ecosystem change. Both the LaRC and CFS FWI showed gradual linear increase in fraction of grid boxes with fire counts and area burned with increasing FWI category, with an exponential increase in the higher FWI categories in some cases, for the majority of the vegetation types. Our analysis shows a direct correlation between increased fire activity and increased FWI, independent of time or the severity of the fire season. During normal and extreme fire seasons, we noticed the fraction of fire counts and area burned per 1° grid box increased with increasing FWI rating. Given this analysis, we are confident large-scale weather and climate data, in this case from the GEOS-4 reanalysis and the GPCP data sets, can be used to accurately assess future fire potential. 
This increases confidence in the ability of large-scale IPCC weather and climate scenarios to predict future fire regimes in boreal regions.

  13. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data including emission sources, air quality monitoring, meteorological data, and spatial location information required for air quality modeling are brought into an integrated modeling environment. This allows more details of the spatial variation in source distribution and meteorological conditions to be quantitatively analyzed. The developed modeling approach has been used to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, which demonstrates that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
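
    The GMSMB formulation itself is not reproduced here, but the single-source building block that a multi-source Gaussian model sums over can be sketched as the standard ground-reflected Gaussian plume; the source strengths, dispersion parameters, and receptor location below are illustrative assumptions.

    ```python
    import numpy as np

    def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
        """Ground-reflected Gaussian plume concentration (g m^-3) at a receptor
        y m crosswind and z m above ground, for a point source of strength q (g/s),
        wind speed u (m/s), and effective release height h (m); sigma_y and sigma_z
        are the dispersion parameters at the receptor's downwind distance."""
        lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
        vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2)) +
                    np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))   # image term: ground reflection
        return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

    # A multi-source estimate at one receptor sums the single-source contributions.
    sources = [          # (q in g/s, h in m, sigma_y in m, sigma_z in m)
        (50.0, 30.0, 80.0, 40.0),
        (20.0, 10.0, 60.0, 25.0),
    ]
    c = sum(gaussian_plume(q, u=4.0, y=0.0, z=1.5, h=h, sigma_y=sy, sigma_z=sz)
            for q, h, sy, sz in sources)
    print(c)
    ```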

  14. Experiences in Automated Calibration of a Nickel Equation of State

    NASA Astrophysics Data System (ADS)

    Carpenter, John H.

    2017-06-01

    Wide availability of large computers has led to increasing incorporation of computational data, such as from density functional theory molecular dynamics, in the development of equation of state (EOS) models. Once a grid of computational data is available, it is usually left to an expert modeler to model the EOS using traditional techniques. One can envision the possibility of using the increasing computing resources to perform black-box calibration of EOS models, with the goal of reducing the workload on the modeler or enabling non-experts to generate good EOSs with such a tool. Progress towards building such a black-box calibration tool will be explored in the context of developing a new, wide-range EOS for nickel. While some details of the model and data will be shared, the focus will be on what was learned by automatically calibrating the model in a black-box method. Model choices and ensuring physicality will also be discussed. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  15. OVERGRID: A Unified Overset Grid Generation Graphical Interface

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Akien, Edwin W. (Technical Monitor)

    1999-01-01

    This paper presents a unified graphical interface and gridding strategy for performing overset grid generation. The interface called OVERGRID has been specifically designed to follow an efficient overset gridding strategy, and contains general grid manipulation capabilities as well as modules that are specifically suited for overset grids. General grid utilities include functions for grid redistribution, smoothing, concatenation, extraction, extrapolation, projection, and many others. Modules specially tailored for overset grids include a seam curve extractor, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, and a Cartesian box grid generator. Grid visualization is achieved using OpenGL, while widgets are constructed with Tcl/Tk. The software is portable between various platforms from UNIX workstations to personal computers.

  16. Linear scaling computation of the Fock matrix. VI. Data parallel computation of the exchange-correlation matrix

    NASA Astrophysics Data System (ADS)

    Gan, Chee Kwan; Challacombe, Matt

    2003-05-01

    Recently, early onset linear scaling computation of the exchange-correlation matrix has been achieved using hierarchical cubature [J. Chem. Phys. 113, 10037 (2000)]. Hierarchical cubature differs from other methods in that the integration grid is adaptive and purely Cartesian, which allows for a straightforward domain decomposition in parallel computations; the volume enclosing the entire grid may be simply divided into a number of nonoverlapping boxes. In our data parallel approach, each box requires only a fraction of the total density to perform the necessary numerical integrations due to the finite extent of Gaussian-orbital basis sets. This inherent data locality may be exploited to reduce communications between processors as well as to avoid memory and copy overheads associated with data replication. Although the hierarchical cubature grid is Cartesian, naive boxing leads to irregular work loads due to strong spatial variations of the grid and the electron density. In this paper we describe equal time partitioning, which employs time measurement of the smallest sub-volumes (corresponding to the primitive cubature rule) to load balance grid-work for the next self-consistent-field iteration. After start-up from a heuristic center of mass partitioning, equal time partitioning exploits smooth variation of the density and grid between iterations to achieve load balance. With the 3-21G basis set and a medium quality grid, equal time partitioning applied to taxol (62 heavy atoms) attained a speedup of 61 out of 64 processors, while for a 110 molecule water cluster at standard density it achieved a speedup of 113 out of 128. The efficiency of equal time partitioning applied to hierarchical cubature improves as the grid work per processor increases. With a fine grid and the 6-311G(df,p) basis set, calculations on the 26 atom molecule α-pinene achieved a parallel efficiency better than 99% with 64 processors. For more coarse grained calculations, superlinear speedups are found to result from reduced computational complexity associated with data parallelism.

  17. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion. Program summaryProgram title: Application Hosting Environment 2.0 Catalogue identifier: AEEJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Public Licence, Version 2 No. of lines in distributed program, including test data, etc.: not applicable No. of bytes in distributed program, including test data, etc.: 1 685 603 766 Distribution format: tar.gz Programming language: Perl (server), Java (Client) Computer: x86 Operating system: Linux (Server), Linux/Windows/MacOS (Client) RAM: 134 217 728 (server), 67 108 864 (client) bytes Classification: 6.5 External routines: VirtualBox (server), Java (client) Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid. Solution method: To address the above problem, we have developed AHE, a lightweight interface, designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications. Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox ( http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide. Running time: Not applicable References:J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  18. Improvements to the gridding of precipitation data across Europe under the E-OBS scheme

    NASA Astrophysics Data System (ADS)

    Cornes, Richard; van den Besselaar, Else; Jones, Phil; van der Schrier, Gerard; Verver, Ge

    2016-04-01

    Gridded precipitation data are a valuable resource for analyzing past variations and trends in the hydroclimate. Such data also provide a reference against which model simulations may be driven, compared and/or adjusted. The E-OBS precipitation dataset is widely used for such analyses across Europe, and is particularly valuable since it provides a spatially complete, daily field across the European domain. In this analysis, improvements to the E-OBS precipitation dataset will be presented that aim to provide a more reliable estimate of grid-box precipitation values, particularly in mountainous areas and in regions with a relative sparsity of input station data. The established three-stage E-OBS gridding scheme is retained, whereby monthly precipitation totals are gridded using a thin-plate spline; daily anomalies are gridded using indicator kriging; and the final dataset is produced by multiplying the two grids. The current analysis focuses on improving the monthly thin-plate spline, which has overall control on the final daily dataset. The results from different techniques are compared and the influence on the final daily data is assessed by comparing the data against gridded country-wide datasets produced by various National Meteorological Services.
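    As a rough illustration of the three-stage scheme described above, the following Python sketch grids synthetic monthly totals with a thin-plate spline and multiplies them by a gridded daily anomaly field. The station data are invented, and a simple linear interpolant stands in for the indicator kriging used in E-OBS.

```python
# Minimal sketch of the three-stage combination (not the E-OBS code):
# 1) monthly totals gridded with a thin-plate spline,
# 2) daily anomalies gridded (here with a plain linear interpolant as a
#    placeholder for indicator kriging),
# 3) the two fields multiplied to give the daily grid.
import numpy as np
from scipy.interpolate import Rbf, griddata

rng = np.random.default_rng(0)
lon_s, lat_s = rng.uniform(0, 10, 40), rng.uniform(40, 50, 40)  # station positions
monthly_total = rng.gamma(2.0, 30.0, 40)                        # mm per month
daily_anomaly = rng.uniform(0.0, 3.0, 40)                       # fraction of monthly mean

lon_g, lat_g = np.meshgrid(np.linspace(0, 10, 50), np.linspace(40, 50, 50))

# Stage 1: thin-plate spline for the monthly background field.
tps = Rbf(lon_s, lat_s, monthly_total, function="thin_plate")
monthly_grid = tps(lon_g, lat_g)

# Stage 2: daily anomaly field (placeholder for indicator kriging).
anom_grid = griddata((lon_s, lat_s), daily_anomaly, (lon_g, lat_g), method="linear")

# Stage 3: daily precipitation = monthly background x daily anomaly.
daily_grid = monthly_grid * anom_grid
print(np.nanmean(daily_grid))
```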

  19. Relationships of surrounding riparian habitat to nest-box use and reproductive outcome in House Wrens

    Treesearch

    Deborah M. Finch

    1989-01-01

    I assessed relationships among habitat structure, nest-site selection, and reproductive outcome of House Wrens (Troglodytes aedon) by establishing three nest-box grids in riparian woodlands in southeastern Wyoming. Over a 3-year period, 37% of the boxes contained House Wren nests; 20% contained unused nests built by male House Wrens; and 42% were never used by wrens....

  20. Gut microbiota composition is correlated to grid floor induced stress and behavior in the BALB/c mouse.

    PubMed

    Bangsgaard Bendtsen, Katja Maria; Krych, Lukasz; Sørensen, Dorte Bratbo; Pang, Wanyong; Nielsen, Dennis Sandris; Josefsen, Knud; Hansen, Lars H; Sørensen, Søren J; Hansen, Axel Kornerup

    2012-01-01

    Stress has a profound influence on the gastro-intestinal tract, the immune system and the behavior of the animal. In this study, the correlation between gut microbiota composition determined by Denaturing Gradient Gel Electrophoresis (DGGE) and tag-encoded 16S rRNA gene amplicon pyrosequencing (454/FLX) and behavior in the Tripletest (Elevated Plus Maze, Light/Dark Box, and Open Field combined), the Tail Suspension Test, and Burrowing in 28 female BALB/c mice exposed to two weeks of grid floor induced stress was investigated. Cytokine and glucose levels were measured at baseline, during and after exposure to grid floor. Stressing the mice clearly changed the cecal microbiota as determined by both DGGE and pyrosequencing. Odoribacter, Alistipes and an unclassified genus from the Coriobacteriaceae family increased significantly in the grid floor housed mice. Compared to baseline, the mice exposed to grid floor housing changed the amount of time spent in the Elevated Plus Maze, in the Light/Dark Box, and burrowing behavior. The grid floor housed mice had significantly longer immobility duration in the Tail Suspension Test and increased their number of immobility episodes from baseline. Significant correlations were found between GM composition and IL-1α, IFN-γ, closed arm entries of Elevated Plus Maze, total time in Elevated Plus Maze, time spent in Light/Dark Box, and time spent in the inner zone of the Open Field as well as total time in the Open Field. Significant correlations were found with the levels of Firmicutes, e.g. various species of Ruminococcaceae and Lachnospiraceae. No significant difference was found for the evaluated cytokines, except an overall decrease in levels from baseline to end. A significantly lower level of blood glucose was found in the grid floor housed mice, whereas the HbA1c level was significantly higher. It is concluded that grid floor housing changes the GM composition, which seems to influence certain anxiety-related parameters.

  1. Uncertain Representations of Sub-Grid Pollutant Transport in Chemistry-Transport Models and Impacts on Long-Range Transport and Global Composition

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Zhu, Z.; Ott, L. E.; Molod, A.; Duncan, B. N.; Nielsen, J. E.

    2009-01-01

    Sub-grid transport, by convection and turbulence, is known to play an important role in lofting pollutants from their source regions. Consequently, the long-range transport and climatology of simulated atmospheric composition are impacted. This study uses the Goddard Earth Observing System, Version 5 (GEOS-5) atmospheric model to study pollutant transport. The baseline model uses a Relaxed Arakawa-Schubert (RAS) scheme that represents convection through a sequence of linearly entraining cloud plumes characterized by unique detrainment levels. Thermodynamics, moisture and trace gases are transported in the same manner. Various approximate forms of trace-gas transport are implemented, in which the box-averaged cloud mass fluxes from RAS are used with different numerical approaches. Substantial impacts on forward-model simulations of CO (using a linearized chemistry) are evident. In particular, some aspects of simulations using a diffusive form of sub-grid transport bear more resemblance to space-based CO observations than do the baseline simulations with RAS transport. Implications for transport in the real atmosphere will be discussed. Another issue of importance is that many adjoint/inversion computations use simplified representations of sub-grid transport that may be inconsistent with the forward models: implications will be discussed. Finally, simulations using a complex chemistry model in GEOS-5 (in place of the linearized CO model) are underway: noteworthy results from this simulation will be mentioned.

  2. Marine ecosystem modeling beyond the box: using GIS to study carbon fluxes in a coastal ecosystem.

    PubMed

    Wijnbladh, Erik; Jönsson, Bror Fredrik; Kumblad, Linda

    2006-12-01

    Studies of carbon fluxes in marine ecosystems are often done by using box model approaches with basin size boxes, or highly resolved 3D models, and an emphasis on the pelagic component of the ecosystem. Those approaches work well in the ocean proper, but can give rise to considerable problems when applied to coastal systems, because of the scale of certain ecological niches and the fact that benthic organisms are the dominant functional group of the ecosystem. In addition, 3D models require an extensive modeling effort. In this project, an intermediate approach based on a high resolution (20x20 m) GIS data-grid has been developed for the coastal ecosystem in the Laxemar area (Baltic Sea, Sweden) based on a number of different site investigations. The model has been developed in the context of a safety assessment project for a proposed nuclear waste repository, in which the fate of hypothetically released radionuclides from the planned repository is estimated. The assessment project requires not only a good understanding of the ecosystem dynamics at the site, but also quantification of stocks and flows of matter in the system. The data-grid was then used to set up a carbon budget describing the spatial distribution of biomass, primary production, net ecosystem production and thus where carbon sinks and sources are located in the area. From these results, it was clear that there was a large variation in ecosystem characteristics within the basins and, on a larger scale, that the inner areas are net producing and the outer areas net respiring, even in shallow phytobenthic communities. Benthic processes had a similar or larger influence on carbon fluxes as advective processes in inner areas, whereas the opposite appears to be true in the outer basins. As many radionuclides are expected to follow the pathways of organic matter in the environment, these findings enhance our abilities to realistically describe and predict their fate in the ecosystem.

  3. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366
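    The role of the FFT can be illustrated with a simple (non-FMM) sketch: on a regular grid, the field radiated by gridded source strengths is a discrete convolution with the free-space Green's function exp(ikr)/(4πr), which an FFT-based convolution evaluates in O(N log N). The wavenumber, grid spacing, and the crude regularization of the r = 0 sample below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of FFT-based Green's function convolution on a regular grid
# (illustrative only; this is not the paper's FMM implementation).
import numpy as np
from scipy.signal import fftconvolve

k = 2 * np.pi / 1.5e-3          # wavenumber of 1 MHz sound in water (~1.5 mm wavelength)
h = 1.0e-4                      # grid spacing [m]
n = 64

# Source strengths on the regular grid (arbitrary test pattern).
src = np.zeros((n, n, n), dtype=complex)
src[n // 2, n // 2, n // 2] = 1.0
src[10, 20, 30] = 0.5

# Tabulate the Green's function exp(ikr)/(4*pi*r) on the same grid, with a
# simple regularization of the r = 0 sample (a common, if crude, choice).
ax = (np.arange(n) - n // 2) * h
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
r[r == 0] = h / 2
green = np.exp(1j * k * r) / (4 * np.pi * r)

# Zero-padded FFT convolution; h**3 is the volume element of the quadrature.
field = fftconvolve(src, green, mode="same") * h**3
print(field.shape, abs(field).max())
```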

  4. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  5. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  6. Evaluation of subgrid-scale turbulence models using a fully simulated turbulent flow

    NASA Technical Reports Server (NTRS)

    Clark, R. A.; Ferziger, J. H.; Reynolds, W. C.

    1977-01-01

    An exact turbulent flow field was calculated on a three-dimensional grid with 64 points on a side. The flow simulates grid-generated turbulence from wind tunnel experiments. In this simulation, the grid spacing is small enough to include essentially all of the viscous energy dissipation, and the box is large enough to contain the largest eddy in the flow. The method is limited to low-turbulence Reynolds numbers, in our case R sub lambda = 36.6. To complete the calculation using a reasonable amount of computer time with reasonable accuracy, a third-order time-integration scheme was developed which runs at about the same speed as a simple first-order scheme. It obtains this accuracy by saving the velocity field and its first-time derivative at each time step. Fourth-order accurate space-differencing is used.

  7. Machine learning of atmospheric chemistry. Applications to a global chemistry transport model.

    NASA Astrophysics Data System (ADS)

    Evans, M. J.; Keller, C. A.

    2017-12-01

    Atmospheric chemistry is central to many environmental issues such as air pollution, climate change, and stratospheric ozone loss. Chemistry Transport Models (CTM) are a central tool for understanding these issues, whether for research or for forecasting. These models split the atmosphere into a large number of grid-boxes and consider the emission of compounds into these boxes and their subsequent transport, deposition, and chemical processing. The chemistry is represented through a series of simultaneous ordinary differential equations, one for each compound. Given the difference in lifetimes between the chemical compounds (milliseconds for O(1D) to years for CH4) these equations are numerically stiff and solving them accounts for a significant fraction of the computational burden of a CTM. We have investigated a machine learning approach to solving the differential equations instead of solving them numerically. From an annual simulation of the GEOS-Chem model we have produced a training dataset consisting of the concentration of compounds before and after the differential equations are solved, together with some key physical parameters for every grid-box and time-step. From this dataset we have trained a machine learning algorithm (random regression forest) to be able to predict the concentration of the compounds after the integration step based on the concentrations and physical state at the beginning of the time step. We have then included this algorithm back into the GEOS-Chem model, bypassing the need to integrate the chemistry. This machine learning approach shows many of the characteristics of the full simulation and has the potential to be substantially faster. There is a wide range of applications for such an approach - generating boundary conditions, for use in air quality forecasts, chemical data assimilation systems, centennial scale climate simulations etc. We discuss our approach's speed and accuracy, and highlight some potential future directions for improving this approach.
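    A minimal sketch of the emulation step, under stated assumptions: the training pairs below are generated from a toy two-species system rather than from GEOS-Chem, and a scikit-learn random forest stands in for the random regression forest used by the authors.

```python
# Toy emulator sketch: learn the "after integration" state from the "before"
# state plus physical parameters, then score on held-out grid boxes.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 5000
# "Before" state: two species concentrations plus temperature and a photolysis proxy.
X = np.column_stack([
    rng.lognormal(0, 1, n),      # species A
    rng.lognormal(0, 1, n),      # species B
    rng.normal(250, 20, n),      # temperature [K]
    rng.uniform(0, 1, n),        # normalized photolysis rate
])
# "After" state: a fake integration step standing in for the stiff ODE solver.
dt = 300.0
kAB = 1e-3 * X[:, 3]
Y = np.column_stack([
    X[:, 0] * np.exp(-kAB * dt),                 # A decays into B
    X[:, 1] + X[:, 0] * (1 - np.exp(-kAB * dt)), # B gains what A loses
])

model = RandomForestRegressor(n_estimators=50, n_jobs=-1).fit(X[:4000], Y[:4000])
print("R^2 on held-out boxes:", model.score(X[4000:], Y[4000:]))
```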

  8. On solving three-dimensional open-dimension rectangular packing problems

    NASA Astrophysics Data System (ADS)

    Junqueira, Leonardo; Morabito, Reinaldo

    2017-05-01

    In this article, a recently proposed three-dimensional open-dimension rectangular packing problem is considered, in which the objective is to find a minimal volume rectangular container that packs a set of rectangular boxes. The literature has tackled small-sized instances of this problem by means of optimization solvers, position-free mixed-integer programming (MIP) formulations and piecewise linearization approaches. In this study, the problem is alternatively addressed by means of grid-based position MIP formulations, whereas still considering optimization solvers and the same piecewise linearization techniques. A comparison of the computational performance of both models is then presented, when tested with benchmark problem instances and with new instances, and it is shown that the grid-based position MIP formulation can be competitive, depending on the characteristics of the instances. The grid-based position MIP formulation is also embedded with real-world practical constraints, such as cargo stability, and results are additionally presented.

  9. A physical model of ice sheet response to changes in subglacial hydrology

    NASA Astrophysics Data System (ADS)

    Andrews, L. C.; Catania, G. A.; Buttles, J. L.; Andrews, A.; Markowski, M.

    2010-12-01

    Using a physical ice sheet model, we investigate the degree to which motion is controlled by local loss of basal traction versus longitudinal coupling during diurnal, seasonal, and event-type water pulses. Our model can be used to reproduce the spatial pattern and magnitude of ice surface displacements and can aid in the interpretation of ground-based GPS measurements, as it eliminates many of the complicating factors influencing surface velocity measurements. This model consists of a 3 x 1.5 meter plastic box with a grid of holes on the bed used to inject water directly between the interface of the box and a silicone polymer. Water flow is visualized using a colored dye. The polymer response to perturbations in water flow is measured by tracking surface markers through a series of overhead images. We report on a suite of experiments that explore the relationship between water discharge, basal traction, and surface displacements and compare our results to ground-based GPS measurements from a transect in western Greenland.

  10. A satellite simulator for TRMM PR applied to climate model simulations

    NASA Astrophysics Data System (ADS)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions being chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid-scale convective precipitation variability deduced from TRMM PR observations is carried out.

  11. Assessment and Planning Using Portfolio Analysis

    ERIC Educational Resources Information Center

    Roberts, Laura B.

    2010-01-01

    Portfolio analysis is a simple yet powerful management tool. Programs and activities are placed on a grid with mission along one axis and financial return on the other. The four boxes of the grid (low mission, low return; high mission, low return; high return, low mission; high return, high mission) help managers identify which programs might be…

  12. Spatial Variability of CCN Sized Aerosol Particles

    NASA Astrophysics Data System (ADS)

    Asmi, A.; Väänänen, R.

    2014-12-01

    Computational limitations restrict the grid size used in GCMs, and for many cloud types the grid boxes are too large compared to the scale of the cloud formation processes. Several parameterizations exist, e.g. for convective cloud formation, but little is known about the sub-grid spatial variation of the concentration of cloud condensation nuclei (CCN)-sized aerosol. We quantify this variation as a function of spatial scale using datasets from airborne aerosol measurement campaigns around the world, including EUCAARI LONGREX, ATAR, INCA, INDOEX, CLAIRE, PEGASOS and several regional airborne campaigns in Finland. The typical shapes of the distributions are analyzed. When possible, we use information obtained by CCN counters. In other cases, we use particle size distributions measured by, for example, an SMPS to approximate the CCN concentration. Other instruments used include optical particle counters and condensation particle counters. In GCMs, the CCN concentration in each grid box is often taken to be either uniform or the arithmetic mean of the concentration inside the grid box. However, the aircraft data show that the concentration values are often lognormally distributed. This, combined with sub-grid variations in land use and atmospheric properties, may cause the aerosol-cloud interactions calculated from mean values to differ significantly from the true effects, both temporally and spatially, which in turn can introduce non-linear biases into the GCMs. We calculate the CCN-sized aerosol concentration distribution as a function of spatial scale. The measurements allow us to study the variation of these distributions from hundreds of meters up to hundreds of kilometers. This is used to quantify the potential error when mean values are used in GCMs.
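    The kind of bias being described can be illustrated with a short sketch using synthetic numbers: sub-grid CCN concentrations drawn from a lognormal distribution are summarized by the arithmetic mean versus the geometric mean, and a crude nonlinear activation proxy (assumed here as N_d ∝ CCN^0.7, purely for illustration) evaluated at the mean differs from the mean of the proxy over the samples.

```python
# Illustrative sketch with synthetic numbers: lognormally distributed sub-grid
# CCN concentrations versus a single flat grid-box value.
import numpy as np

rng = np.random.default_rng(2)
# Synthetic sub-grid samples within one GCM grid box [cm^-3].
ccn = rng.lognormal(mean=np.log(300.0), sigma=0.8, size=10_000)

arithmetic_mean = ccn.mean()
geometric_mean = np.exp(np.log(ccn).mean())   # median of a lognormal distribution
print(f"arithmetic mean: {arithmetic_mean:7.1f} cm^-3")
print(f"geometric mean : {geometric_mean:7.1f} cm^-3")

# A nonlinear function of CCN (a crude droplet-number proxy N_d ~ CCN^0.7)
# evaluated at the mean differs from the mean of the function over the samples,
# i.e. the non-linear bias mentioned above.
nd_of_mean = arithmetic_mean ** 0.7
mean_of_nd = (ccn ** 0.7).mean()
print(f"N_d(mean) = {nd_of_mean:6.1f},  mean(N_d) = {mean_of_nd:6.1f}")
```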

  13. Gridding Cloud and Irradiance to Quantify Variability at the ARM Southern Great Plains Site

    NASA Astrophysics Data System (ADS)

    Riihimaki, L.; Long, C. N.; Gaustad, K.

    2017-12-01

    Ground-based radiometers provide the most accurate measurements of surface irradiance. However, geometry differences between surface point measurements and large area climate model grid boxes or satellite-based footprints can cause systematic differences in surface irradiance comparisons. In this work, irradiance measurements from a network of ground stations around Kansas and Oklahoma at the US Department of Energy Atmospheric Radiation Measurement (ARM) Southern Great Plains facility are examined. Upwelling and downwelling broadband shortwave and longwave radiometer measurements are available at each site as well as surface meteorological measurements. In addition to the measured irradiances, clear sky irradiance and cloud fraction estimates are analyzed using well established methods based on empirical fits to measured clear sky irradiances. Measurements are interpolated onto a 0.25 degree latitude and longitude grid using a Gaussian weight scheme in order to provide a more accurate statistical comparison between ground measurements and a larger area such as that used in climate models, plane parallel radiative transfer calculations, and other statistical and climatological research. Validation of the gridded product will be shown, as well as analysis that quantifies the impact of site location, cloud type, and other factors on the resulting surface irradiance estimates. The results of this work are being incorporated into the Surface Cloud Grid operational data product produced by ARM, and will be made publicly available for use by others.
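    A minimal sketch of Gaussian-weight gridding is given below; the station locations, irradiances and the e-folding length scale are invented for illustration and do not reproduce the ARM Surface Cloud Grid processing.

```python
# Gaussian-weight interpolation of station irradiances onto a 0.25-degree grid
# (synthetic stations; the length scale L is an assumed tuning parameter).
import numpy as np

rng = np.random.default_rng(3)
stat_lon = rng.uniform(-99.0, -96.0, 20)
stat_lat = rng.uniform(35.5, 38.0, 20)
stat_irr = rng.uniform(400.0, 900.0, 20)        # downwelling shortwave [W m^-2]

grid_lon, grid_lat = np.meshgrid(np.arange(-99.0, -96.0, 0.25),
                                 np.arange(35.5, 38.0, 0.25))
L = 0.5                                          # Gaussian e-folding scale [deg], assumed

# Squared distance from every grid point to every station, then normalized weights.
d2 = (grid_lon[..., None] - stat_lon) ** 2 + (grid_lat[..., None] - stat_lat) ** 2
w = np.exp(-d2 / (2 * L ** 2))
gridded = (w * stat_irr).sum(axis=-1) / w.sum(axis=-1)
print(gridded.shape, gridded.min(), gridded.max())
```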

  14. The neglected nonlocal effects of deforestation

    NASA Astrophysics Data System (ADS)

    Winckler, Johannes; Reick, Christian; Pongratz, Julia

    2017-04-01

    Deforestation changes surface temperature locally via biogeophysical effects by changing the water, energy and momentum balance. Adding to these locally induced changes (local effects), deforestation at a given location can cause changes in temperature elsewhere (nonlocal effects). Most previous studies have not considered local and nonlocal effects separately, but investigated the total (local plus nonlocal) effects, for which global deforestation was found to cause a global mean cooling. Recent modeling and observational studies focused on the isolated local effects: The local effects are relevant for local living conditions, and they can be obtained from in-situ and satellite observations. Observational studies suggest that the local effects of potential deforestation cause a warming when averaged globally. This contrast between local warming and total cooling indicates that the nonlocal effects of deforestation are causing a cooling and thus counteract the local effects. It is still unclear how the nonlocal effects depend on the spatial scale of deforestation, and whether they still compensate the local warming in a more realistic spatial distribution of deforestation. To investigate this, we use a fully coupled climate model and separate local and nonlocal effects of deforestation in three steps: Starting from a forest world, we simulate deforestation in one out of four grid boxes using a regular spatial pattern and increase the number of deforestation grid boxes step-wise up to three out of four boxes in subsequent simulations. To compare these idealized spatial distributions of deforestation to a more realistic case, we separate local and nonlocal effects in a simulation where deforestation is applied in regions where it occurred historically. We find that the nonlocal effects scale nearly linearly with the number of deforested grid boxes, and the spatial distribution of the nonlocal effects is similar for the regular spatial distribution of deforestation and the more realistic pattern. Globally averaged, the deforestation-induced warming of the local effects is counteracted by the nonlocal effects, which are about three times as strong as the local effects (up to 0.1K local warming versus -0.3K nonlocal cooling). Thus, the nonlocal effects are more cooling than the local effects are warming, and this is valid not only for idealized simulations of large-scale deforestation, but also for a more realistic deforestation scenario. We conclude that the local effects of deforestation only yield an incomplete picture of the total climate effects by biogeophysical pathways. While the local effects capture the direct climatic response at the site of deforestation, the nonlocal effects have to be included if the biogeophysical effects of deforestation are considered for an implementation in climate policies.

  15. Software Design Document SAF Simulation Host CSCI (8). Volume 1, Sections 1.0 - 2.7

    DTIC Science & Technology

    1991-06-01

    Fragment of a "Calls / Function / Where Described" cross-reference table: edge lists for the patch are tested against grid-loc-word for intervisibility blocks, with calls to check edges (Sec. 2.6.7.1.8), check box (Sec. 2.6.7.1.31), and treelines (Sec. 2.6.7.1.16).

  16. Visualizing Spatially Varying Distribution Data

    NASA Technical Reports Server (NTRS)

    Kao, David; Luo, Alison; Dungan, Jennifer L.; Pang, Alex; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The box plot is a compact representation that encodes the minimum, maximum, mean, median, and quartile information of a distribution. In practice, a single box plot is drawn for each variable of interest. With the advent of more accessible computing power, we are now facing the problem of visualizing data where there is a distribution at each 2D spatial location. Simply extending the box plot technique to distributions over a 2D domain is not straightforward. One challenge is reducing the visual clutter if a box plot is drawn at each grid location in the 2D domain. This paper presents and discusses two general approaches, using parametric statistics and shape descriptors, to present 2D distribution data sets. Both approaches provide additional insights compared to the traditional box plot technique.

  17. The Overgrid Interface for Computational Simulations on Overset Grids

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Computational simulations using overset grids typically involve multiple steps and a variety of software modules. A graphical interface called OVERGRID has been specially designed for such purposes. Data required and created by the different steps include geometry, grids, domain connectivity information and flow solver input parameters. The interface provides a unified environment for the visualization, processing, generation and diagnosis of such data. General modules are available for the manipulation of structured grids and unstructured surface triangulations. Modules more specific for the overset approach include surface curve generators, hyperbolic and algebraic surface grid generators, a hyperbolic volume grid generator, Cartesian box grid generators, and domain connectivity: pre-processing tools. An interface provides automatic selection and viewing of flow solver boundary conditions, and various other flow solver inputs. For problems involving multiple components in relative motion, a module is available to build the component/grid relationships and to prescribe and animate the dynamics of the different components.

  18. powerbox: Arbitrarily structured, arbitrary-dimension boxes and log-normal mocks

    NASA Astrophysics Data System (ADS)

    Murray, Steven G.

    2018-05-01

    powerbox creates density grids (or boxes) with an arbitrary two-point distribution (i.e. power spectrum). The software works in any number of dimensions, creates Gaussian or Log-Normal fields, and measures power spectra of output fields to ensure consistency. The primary motivation for creating the code was the simple creation of log-normal mock galaxy distributions, but the methodology can be used for other applications.
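    A short usage sketch follows, assuming the PowerBox/LogNormalPowerBox classes and the get_power helper described in the powerbox documentation; the exact signatures should be checked against the installed version.

```python
# Usage sketch for powerbox (API assumed from the package documentation).
import powerbox as pbox

# 2D log-normal over-density field with a power-law input spectrum P(k) ~ k^-2.
pb = pbox.LogNormalPowerBox(N=256, dim=2,
                            pk=lambda k: 0.1 * k**-2.0,
                            boxlength=100.0)
field = pb.delta_x()                          # realization on the grid

# Measure the power spectrum of the output field to check consistency
# with the input power spectrum, as described in the abstract.
p_k, k_bins = pbox.get_power(field, pb.boxlength)
print(field.shape, len(k_bins))
```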

  19. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.

  20. Automated analysis of lightning leader speed, local flash rates and electric charge structure in thunderstorms

    NASA Astrophysics Data System (ADS)

    Van Der Velde, O. A.; Montanya, J.; López, J. A.

    2017-12-01

    A Lightning Mapping Array (LMA) maps radio pulses emitted by lightning leaders, displaying lightning flash development in the cloud in three dimensions. Over the last 10 years, about a dozen of these advanced systems have become operational in the United States and in Europe, often with the purpose of severe weather monitoring or lightning research. We introduce new methods for the analysis of complex three-dimensional lightning data produced by LMAs and illustrate them by cases of a mid-latitude severe weather producing thunderstorm and a tropical thunderstorm in Colombia. The method is based on the characteristics of bidirectional leader development as observed in LMA data (van der Velde and Montanyà, 2013, JGR-Atmospheres), where mapped positive leaders were found to propagate at characteristic speeds around 2 · 10^4 m s^-1, while negative leaders typically propagate at speeds around 10^5 m s^-1. Here, we determine leader speed for every 1.5 x 1.5 x 0.75 km grid box in 3 ms time steps, using two time intervals (e.g., 9 ms and 27 ms) and circles (4.5 km and 2.5 km wide) in which a robust Theil-Sen fitting of the slope is performed for fast and slow leaders. The two are then merged such that important speed characteristics are optimally maintained in negative and positive leaders, and labeled with positive or negative polarity according to the resulting velocity. The method also counts how often leaders from a lightning flash initiate or pass through each grid box. This "local flash rate" may be used in severe thunderstorm or NOx production studies and shall be more meaningful than LMA source density, which is biased by the detection efficiency. Additionally, in each grid box the median x, y and z components of the leader propagation vectors of all flashes result in a 3D vector grid which can be compared to vectors in numerical models of leader propagation in response to cloud charge structure. Finally, the charge region altitudes, thickness and rates are summarized from vertical profiles of positive and negative leader rates where these exceed their 7-point averaged profiles. The summarized data can be used to follow charge structure evolution over time, and will be useful for climatological studies and statistical comparison against the parameters of the meteorological environment of storms.
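    The per-box speed estimate can be sketched as follows with synthetic LMA sources; the window length, noise level, and polarity threshold are assumptions chosen only to illustrate the robust Theil-Sen slope fit.

```python
# Illustrative sketch (synthetic sources): a robust Theil-Sen fit of position
# versus time gives the leader speed within one grid box and time window.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 27e-3, 40))                    # source times in a 27 ms window [s]
true_v = np.array([8e4, 5e4, 2e4])                          # m/s, a fast (negative) leader
xyz = t[:, None] * true_v + rng.normal(0, 150.0, (40, 3))   # LMA locations with noise [m]

# Theil-Sen slope of each coordinate versus time, combined into a speed.
v = np.array([theilslopes(xyz[:, i], t)[0] for i in range(3)])
speed = np.linalg.norm(v)
polarity = "negative" if speed > 5e4 else "positive"        # assumed threshold, illustrative
print(f"estimated speed {speed:.2e} m/s -> {polarity} leader")
```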

  1. Evaluation of incremental reactivity and its uncertainty in Southern California.

    PubMed

    Martien, Philip T; Harley, Robert A; Milford, Jana B; Russell, Armistead G

    2003-04-15

    The incremental reactivity (IR) and relative incremental reactivity (RIR) of carbon monoxide and 30 individual volatile organic compounds (VOC) were estimated for the South Coast Air Basin using two photochemical air quality models: a 3-D, grid-based model and a vertically resolved trajectory model. Both models include an extended version of the SAPRC99 chemical mechanism. For the 3-D modeling, the decoupled direct method (DDM-3D) was used to assess reactivities. The trajectory model was applied to estimate uncertainties in reactivities due to uncertainties in chemical rate parameters, deposition parameters, and emission rates using Monte Carlo analysis with Latin hypercube sampling. For most VOC, RIRs were found to be consistent in rankings with those produced by Carter using a box model. However, 3-D simulations show that coastal regions, upwind of most of the emissions, have comparatively low IR but higher RIR than predicted by box models for C4-C5 alkenes and carbonyls that initiate the production of HOx radicals. Biogenic VOC emissions were found to have a lower RIR than predicted by box model estimates, because emissions of these VOC were mostly downwind of the areas of primary ozone production. Uncertainties in RIR of individual VOC were found to be dominated by uncertainties in the rate parameters of their primary oxidation reactions. The coefficient of variation (COV) of most RIR values ranged from 20% to 30%, whereas the COV of absolute incremental reactivity ranged from about 30% to 40%. In general, uncertainty and variability both decreased when relative rather than absolute reactivity metrics were used.
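    The uncertainty-propagation step can be sketched with a toy surrogate for the trajectory model: the Latin hypercube sampling and the COV calculation mirror the description above, while the reaction surrogate and the uncertainty factor of 1.3 are invented for illustration.

```python
# Toy Monte Carlo / Latin hypercube sketch (synthetic surrogate, assumed
# uncertainty factors): propagate rate-parameter uncertainty into reactivity
# metrics and report their coefficients of variation.
import numpy as np
from scipy.stats import qmc

n_samples, n_params = 500, 3
lhs = qmc.LatinHypercube(d=n_params, seed=5).random(n_samples)   # uniform in [0,1)
# Map to multipliers on three rate parameters (uncertainty factor ~1.3).
factors = np.exp(np.log(1.3) * (2.0 * lhs - 1.0))

def ir_voc(k):
    # Stand-in for the trajectory model: ozone response to a VOC perturbation.
    return 0.08 * k[0] * k[2] / (1.0 + 0.5 * k[1])

def ir_base(k):
    # Base ROG mixture reactivity, sharing k[0] and k[1] with the test VOC.
    return 0.05 * k[0] / (1.0 + 0.5 * k[1])

ir = np.array([ir_voc(f) for f in factors])                  # absolute incremental reactivity
rir = ir / np.array([ir_base(f) for f in factors])           # relative incremental reactivity

print(f"COV of IR : {ir.std() / ir.mean():.2%}")
print(f"COV of RIR: {rir.std() / rir.mean():.2%}")
```

    Because the hypothetical base-mixture reactivity shares rate parameters with the test VOC, part of the uncertainty cancels in the ratio, so the RIR ends up with a smaller COV than the absolute IR, mimicking the behaviour reported above.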

  2. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.

  3. Boundary Conditions for Scalar (Co)Variances over Heterogeneous Surfaces

    NASA Astrophysics Data System (ADS)

    Machulskaya, Ekaterina; Mironov, Dmitrii

    2018-05-01

    The problem of boundary conditions for the variances and covariances of scalar quantities (e.g., temperature and humidity) at the underlying surface is considered. If the surface is treated as horizontally homogeneous, Monin-Obukhov similarity suggests the Neumann boundary conditions that set the surface fluxes of scalar variances and covariances to zero. Over heterogeneous surfaces, these boundary conditions are not a viable choice since the spatial variability of various surface and soil characteristics, such as the ground fluxes of heat and moisture and the surface radiation balance, is not accounted for. Boundary conditions are developed that are consistent with the tile approach used to compute scalar (and momentum) fluxes over heterogeneous surfaces. To this end, the third-order transport terms (fluxes of variances) are examined analytically using a triple decomposition of fluctuating velocity and scalars into the grid-box mean, the fluctuation of tile-mean quantity about the grid-box mean, and the sub-tile fluctuation. The effect of the proposed boundary conditions on mixing in an archetypical stably-stratified boundary layer is illustrated with a single-column numerical experiment. The proposed boundary conditions should be applied in atmospheric models that utilize turbulence parametrization schemes with transport equations for scalar variances and covariances including the third-order turbulent transport (diffusion) terms.
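    The triple decomposition can be written schematically as below; the notation (tile area fraction f_i, tile mean, sub-tile fluctuation) is chosen here for illustration and may differ from the authors' symbols.

```latex
% Schematic triple decomposition (illustrative notation, not taken from the paper):
%   s                -- scalar (e.g. temperature or humidity) at a point in tile i
%   \langle s\rangle -- grid-box mean
%   \overline{s}^{i} -- mean over tile i
%   f_i              -- area fraction of tile i within the grid box
\begin{align}
  s &= \langle s \rangle
     + \big(\overline{s}^{\,i} - \langle s \rangle\big)
     + s''_i , \\
  \langle s \rangle &= \sum_i f_i\, \overline{s}^{\,i},
  \qquad \sum_i f_i = 1 .
\end{align}
```

    Substituting this split into third-order terms such as the flux of a scalar variance produces the tile-dependent contributions that the proposed boundary conditions retain over heterogeneous surfaces.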

  4. Box-Counting Dimension Revisited: Presenting an Efficient Method of Minimizing Quantization Error and an Assessment of the Self-Similarity of Structural Root Systems

    PubMed Central

    Bouda, Martin; Caplan, Joshua S.; Saiers, James E.

    2016-01-01

    Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
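    A compact 2D sketch of the procedure is given below: boxes are counted over a range of sizes, a simple coordinate pattern search over the grid offset reduces quantization error, and the dimension is estimated from the log-log slope. The point set, step schedule, and iteration count are illustrative; the study itself works with 3D root digitizations and also optimizes grid orientation.

```python
# 2D box-counting with a simple pattern search over the grid offset
# (illustrative sketch; names and parameters are not from the paper).
import numpy as np

def box_count(points, size, offset):
    """Number of boxes of side `size` occupied by `points` for a given grid offset."""
    idx = np.floor((points - offset) / size).astype(int)
    return np.unique(idx, axis=0).shape[0]

def min_count_pattern_search(points, size, n_iter=15):
    """Pattern search over the grid offset to reduce quantization error."""
    offset = np.zeros(points.shape[1])
    step = size / 2
    best = box_count(points, size, offset)
    for _ in range(n_iter):
        improved = False
        for d in range(points.shape[1]):
            for sgn in (+1, -1):
                trial = offset.copy()
                trial[d] += sgn * step
                c = box_count(points, size, trial)
                if c < best:
                    best, offset, improved = c, trial, True
        if not improved:
            step /= 2          # contract the stencil, as in classical pattern search
    return best

rng = np.random.default_rng(6)
pts = rng.random((20_000, 2))                     # stand-in for a digitized structure
sizes = 2.0 ** -np.arange(2, 7)                   # box sizes 1/4 ... 1/64
counts = [min_count_pattern_search(pts, s) for s in sizes]
slope = np.polyfit(np.log(1 / sizes), np.log(counts), 1)[0]
print("box-counting dimension estimate:", round(slope, 3))
```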

  5. Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid.

    PubMed

    Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko

    2016-03-08

    Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film-based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers' abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one-dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers' breathing patterns, the mean tracking error range was 0.78-1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient.

  6. Large-scale Density Structures in Magneto-rotational Disk Turbulence

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew; Johansen, A.; Klahr, H.

    2009-01-01

    Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations . These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles) with important implications for planet formation. Resolved disk images at mm-wavelengths (e.g. from ALMA) will verify or constrain the existence of these structures.

  7. Anisotropy of Observed and Simulated Turbulence in Marine Stratocumulus

    NASA Astrophysics Data System (ADS)

    Pedersen, J. G.; Ma, Y.-F.; Grabowski, W. W.; Malinowski, S. P.

    2018-02-01

    Anisotropy of turbulence near the top of the stratocumulus-topped boundary layer (STBL) is studied using large-eddy simulation (LES) and measurements from the POST and DYCOMS-II field campaigns. Focusing on turbulence ~100 m below the cloud top, we see remarkable similarity between daytime and nocturnal flight data covering different inversion strengths and free-tropospheric conditions. With λ denoting wavelength and z_t cloud-top height, we find that turbulence at λ/z_t ≃ 0.01 is weakly dominated by horizontal fluctuations, while turbulence at λ/z_t > 1 becomes strongly dominated by horizontal fluctuations. In between are scales at which vertical fluctuations dominate. Typical-resolution LES of the STBL (based on POST flight 13 and DYCOMS-II flight 1) captures observed characteristics of below-cloud-top turbulence reasonably well. However, using a fixed vertical grid spacing of 5 m, decreasing the horizontal grid spacing and increasing the subgrid-scale mixing length leads to increased dominance of vertical fluctuations, increased entrainment velocity, and decreased liquid water path. Our analysis supports the notion that entrainment parameterizations (e.g., in climate models) could potentially be improved by accounting more accurately for anisotropic deformation of turbulence in the cloud-top region. While LES has the potential to facilitate improved understanding of anisotropic cloud-top turbulence, sensitivity to grid spacing, grid-box aspect ratio, and subgrid-scale model needs to be addressed.

  8. PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; Suhs, Norman; Dietz, William; Rogers, Stuart; Nash, Steve; Chan, William; Tramel, Robert; Onufer, Jeff

    2006-01-01

    This viewgraph presentation reviews the use and requirements of Pegasus 5. PEGASUS 5 is a code which performs a pre-processing step for the Overset CFD method. The code prepares the overset volume grids for the flow solver by computing the domain connectivity database, and blanking out grid points which are contained inside a solid body. PEGASUS 5 successfully automates most of the overset process. It leads to dramatic reduction in user input over previous generations of overset software. It also can lead to an order of magnitude reduction in both turn-around time and user expertise requirements. It is also however not a "black-box" procedure; care must be taken to examine the resulting grid system.

  9. Convective Weather Forecast Quality Metrics for Air Traffic Management Decision-Making

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano B.; Gyarfas, Brett; Chan, William N.; Meyn, Larry A.

    2006-01-01

    Since numerical weather prediction models are unable to accurately forecast the severity and the location of the storm cells several hours into the future when compared with observation data, there has been a growing interest in probabilistic description of convective weather. The classical approach for generating uncertainty bounds consists of integrating the state equations and covariance propagation equations forward in time. This step is readily recognized as the process update step of the Kalman Filter algorithm. The second well known method, known as the Monte Carlo method, consists of generating output samples by driving the forecast algorithm with input samples selected from distributions. The statistical properties of the distributions of the output samples are then used for defining the uncertainty bounds of the output variables. This method is computationally expensive for a complex model compared to the covariance propagation method. The main advantage of the Monte Carlo method is that a complex non-linear model can be easily handled. Recently, a few different methods for probabilistic forecasting have appeared in the literature. A method for computing probability of convection in a region using forecast data is described in Ref. 5. Probability at a grid location is computed as the fraction of grid points, within a box of specified dimensions around the grid location, with forecast convection precipitation exceeding a specified threshold. The main limitation of this method is that the results are dependent on the chosen dimensions of the box. The examples presented Ref. 5 show that this process is equivalent to low-pass filtering of the forecast data with a finite support spatial filter. References 6 and 7 describe the technique for computing percentage coverage within a 92 x 92 square-kilometer box and assigning the value to the center 4 x 4 square-kilometer box. This technique is same as that described in Ref. 5. Characterizing the forecast, following the process described in Refs. 5 through 7, in terms of percentage coverage or confidence level is notionally sound compared to characterizing in terms of probabilities because the probability of the forecast being correct can only be determined using actual observations. References 5 through 7 only use the forecast data and not the observations. The method for computing the probability of detection, false alarm ratio and several forecast quality metrics (Skill Scores) using both the forecast and observation data are given in Ref. 2. This paper extends the statistical verification method in Ref. 2 to determine co-occurrence probabilities. The method consists of computing the probability that a severe weather cell (grid location) is detected in the observation data in the neighborhood of the severe weather cell in the forecast data. Probabilities of occurrence at the grid location and in its neighborhood with higher severity, and with lower severity in the observation data compared to that in the forecast data are examined. The method proposed in Refs. 5 through 7 is used for computing the probability that a certain number of cells in the neighborhood of severe weather cells in the forecast data are seen as severe weather cells in the observation data. Finally, the probability of existence of gaps in the observation data in the neighborhood of severe weather cells in forecast data is computed. Gaps are defined as openings between severe weather cells through which an aircraft can safely fly to its intended destination. 
The rest of the paper is organized as follows. Section II summarizes the statistical verification method described in Ref. 2. The extension of this method for computing the co-occurrence probabilities is discussed in Section III. Numerical examples using NCWF forecast data and NCWD observation data are presented in Section III to elucidate the characteristics of the co-occurrence probabilities. This section also discusses the procedure for computing the probabilities that the severity of convection in the observation data will be higher or lower in the neighborhood of grid locations compared to that indicated at the grid locations in the forecast data. The probability of coverage of neighborhood grid cells is also described via examples in this section. Section IV discusses the gap detection algorithm and presents a numerical example to illustrate the method. The locations of the detected gaps in the observation data are used along with the locations of convective weather cells in the forecast data to determine the probability of existence of gaps in the neighborhood of these cells. Finally, the paper is concluded in Section V.
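    The neighborhood-coverage calculation described above amounts to a box-filter average of a binary exceedance field, as the following sketch shows with synthetic fields; the 23-cell window stands in for a 92 km box on a 4 km grid, and the threshold and the simple co-occurrence check are illustrative assumptions.

```python
# Hedged sketch of neighborhood coverage and a simple co-occurrence check
# (synthetic forecast/observation fields; not the NCWF/NCWD processing).
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(7)
forecast = rng.gamma(shape=1.5, scale=10.0, size=(200, 200))   # convective precip proxy
severe = (forecast > 30.0).astype(float)                       # binary exceedance field

# Percentage coverage: box-filter average over a 23x23-cell neighborhood.
coverage = uniform_filter(severe, size=23, mode="constant")

# Co-occurrence against an observation field: fraction of severe forecast cells
# with at least one severe observed cell somewhere in their neighborhood.
observed = (rng.gamma(1.5, 10.0, (200, 200)) > 30.0).astype(float)
obs_nearby = uniform_filter(observed, size=23, mode="constant") > 0
hit_fraction = obs_nearby[severe.astype(bool)].mean()
print(f"mean coverage {coverage.mean():.3f}, neighborhood hit fraction {hit_fraction:.3f}")
```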

  10. Neutral Beam Injection System for the SHIP Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdrashitov, G.F.; Abdrashitov, A.G.; Anikeev, A.V.

    2005-01-15

    The injector ion source is based on an arc-discharge plasma box. The plasma emitter is produced by a 1 kA arc discharge in deuterium. A multipole magnetic field produced with permanent magnets at the periphery of the plasma box is used to increase its efficiency and improve the homogeneity of the plasma emitter. The ion beam is extracted by a 4-electrode ion optical system (IOS). The initial beam diameter is 200 mm. The grids of the IOS have a spherical curvature for geometrical focusing of the beam. The optimal IOS geometry and grid potentials were found by means of numerical simulation to provide precise beam formation. The measured angular divergence of the beam is 0.025 rad, which corresponds to a 4.7 cm Gaussian radius of the beam profile measured at the focal point.

  11. Human-modified temperatures induce species changes: Joint attribution.

    PubMed

    Root, Terry L; MacMynowski, Dena P; Mastrandrea, Michael D; Schneider, Stephen H

    2005-05-24

    Average global surface-air temperature is increasing. Contention exists over relative contributions by natural and anthropogenic forcings. Ecological studies attribute plant and animal changes to observed warming. Until now, temperature-species connections have not been statistically attributed directly to anthropogenic climatic change. Using modeled climatic variables and observed species data, which are independent of thermometer records and paleoclimatic proxies, we demonstrate statistically significant "joint attribution," a two-step linkage: human activities contribute significantly to temperature changes and human-changed temperatures are associated with discernible changes in plant and animal traits. Additionally, our analyses provide independent testing of grid-box-scale temperature projections from a general circulation model (HadCM3).

  12. High-Fidelity Computational Aerodynamics of the Elytron 4S UAV

    NASA Technical Reports Server (NTRS)

    Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.

    2018-01-01

    High-fidelity Computational Fluid Dynamics (CFD) have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept, a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in airplane mode and in helicopter mode.

  13. Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid

    PubMed Central

    Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko

    2016-01-01

    Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film‐based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers’ abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one‐dimensional motion in craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers’ breathing patterns, the mean tracking error range was 0.78‐1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient. PACS number(s): 87.55.D‐, 87.55.km, 87.55.Qr, 87.56.Fc PMID:27074474

  14. Use of Moderate-Resolution Imaging Spectroradiometer bidirectional reflectance distribution function products to enhance simulated surface albedos

    NASA Astrophysics Data System (ADS)

    Roesch, Andreas; Schaaf, Crystal; Gao, Feng

    2004-06-01

    Moderate-Resolution Imaging Spectroradiometer (MODIS) surface albedo at high spatial and spectral resolution is compared with other remotely sensed climatologies, ground-based data, and albedos simulated with the European Center/Hamburg 4 (ECHAM4) global climate model at T42 resolution. The study demonstrates the importance of MODIS data in assessing and improving albedo parameterizations in weather forecast and climate models. The remotely sensed PINKER surface albedo climatology follows the MODIS estimates fairly well in both the visible and near-infrared spectra, whereas ECHAM4 simulates high positive albedo biases over snow-covered boreal forests and the Himalayas. In contrast, the ECHAM4 albedo is probably too low over the Sahara sand desert and adjacent steppes. The study clearly indicates that neglecting albedo variations within T42 grid boxes leads to significant errors in the simulated regional climate and horizontal fluxes, mainly in mountainous and/or snow-covered regions. MODIS surface albedo at 0.05° resolution agrees quite well with in situ field measurements collected at Baseline Surface Radiation Network (BSRN) sites during snow-free periods, while significant positive biases are found under snow-covered conditions, mainly due to differences in the vegetation cover at the BSRN site (short grass) and the vegetation within the larger MODIS grid box. Black sky (direct beam) albedo from the MODIS bidirectional reflectance distribution function model captures the diurnal albedo cycle at BSRN sites with sufficient accuracy. The greatest negative biases are generally found when the Sun is low. A realistic approach for relating albedo and zenith angle has been proposed. Detailed evaluations have demonstrated that ignoring the zenith angle dependence may lead to significant errors in the surface energy balance.

  15. Sensitivity simulations of superparameterised convection in a general circulation model

    NASA Astrophysics Data System (ADS)

    Rybka, Harald; Tost, Holger

    2015-04-01

    Cloud Resolving Models (CRMs) covering a horizontal grid spacing from a few hundred meters up to a few kilometers have been used to explicitly resolve small-scale and mesoscale processes. Special attention has been paid to realistically representing cloud dynamics and cloud microphysics involving cloud droplets, ice crystals, graupel and aerosols. The entire variety of physical processes on the small scale interacts with the larger-scale circulation and has to be parameterised on the coarse grid of a general circulation model (GCM). For more than a decade, an approach to connect these two types of models, which act on different scales, has been developed to resolve cloud processes and their interactions with the large-scale flow. The concept is to use an ensemble of CRM grid cells in a 2D or 3D configuration in each grid cell of the GCM to explicitly represent small-scale processes, avoiding the use of convection and large-scale cloud parameterisations, which are a major source of uncertainties regarding clouds. The idea is commonly known as superparameterisation or cloud-resolving convection parameterisation. This study presents different simulations of an adapted Earth System Model (ESM) connected to a CRM which acts as a superparameterisation. Simulations have been performed with the ECHAM/MESSy atmospheric chemistry (EMAC) model, comparing conventional GCM runs (including convection and large-scale cloud parameterisations) with the superparameterised EMAC (SP-EMAC), modelling one year with prescribed sea surface temperatures and sea ice content. The sensitivity of atmospheric temperature, precipitation patterns, and cloud amount and type to changes in the embedded CRM representation (orientation, width, number of CRM cells, 2D vs. 3D) is examined. Additionally, we also evaluate the radiation balance with the new model configuration, and systematically analyse the impact of tunable parameters on the radiation budget and hydrological cycle. Furthermore, the subgrid variability (individual CRM cell output) is analysed in order to illustrate the importance of a highly varying atmospheric structure inside a single GCM grid box. Finally, the convective transport of radon is examined by comparing different transport procedures and their influence on the vertical tracer distribution.
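
    To make the coupling idea concrete, the following minimal Python sketch mimics the superparameterisation loop in a toy setting: each GCM column hosts a small ensemble of embedded CRM columns, the large-scale state forces the CRM, and the grid-box mean of the CRM response is fed back to the GCM column. All numbers, array shapes and the relaxation forcing are hypothetical illustrations, not the SP-EMAC implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_gcm, n_crm, n_lev = 4, 32, 20          # GCM columns, embedded CRM columns, levels
    dt, tau = 300.0, 3600.0                  # time step and relaxation time scale (s)

    # Toy large-scale temperature profiles for each GCM column (K)
    gcm_T = 290.0 - 5.0 * np.arange(n_lev) + rng.normal(0.0, 0.5, (n_gcm, n_lev))
    # Each GCM column hosts its own ensemble of CRM columns, initialised from it
    crm_T = np.repeat(gcm_T[:, None, :], n_crm, axis=1)

    for step in range(12):
        for g in range(n_gcm):
            # (1) large-scale forcing: relax the embedded CRM columns toward the GCM state
            crm_T[g] += dt / tau * (gcm_T[g] - crm_T[g])
            # (2) stand-in for resolved convection: small-scale heating perturbations
            crm_T[g] += rng.normal(0.0, 0.01, crm_T[g].shape)
            # (3) feedback: the grid-box mean CRM state replaces the conventional
            #     convection/cloud parameterisation tendency in the GCM column
            gcm_T[g] = crm_T[g].mean(axis=0)

    print("GCM column-mean temperatures (K):", np.round(gcm_T.mean(axis=1), 2))
    ```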

  16. Modeling of Turbulent Natural Convection in Enclosed Tall Cavities

    NASA Astrophysics Data System (ADS)

    Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.

    2017-12-01

    It was shown in our previous work (J. Appl. Mech. Tech. Phys. 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10^14. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1:4, 1:10, and 1:28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1:28.6), DNS results (aspect ratio of 1:4), and an empirical formula (aspect ratio of 1:10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for an aspect ratio of up to 1:10. For higher aspect ratios, the number of grid cells required for achieving prescribed accuracy grows significantly.

  17. Gliding Box method applied to trace element distribution of a geochemical data set

    NASA Astrophysics Data System (ADS)

    Paz González, Antonio; Vidal Vázquez, Eva; Rosario García Moreno, M.; Paz Ferreiro, Jorge; Saa Requejo, Antonio; María Tarquis, Ana

    2010-05-01

    The application of fractal theory to process geochemical prospecting data can provide useful information for evaluating mineralization potential. A geochemical survey was carried out in the west area of Coruña province (NW Spain). Major elements and trace elements were determined by standard analytical techniques. It is well known that there are specific elements, or arrays of elements, which are associated with specific types of mineralization. Arsenic has been used to evaluate the metallogenetic importance of the studied zone. Moreover, As can be considered a pathfinder for Au, as these two elements are genetically associated. The main objective of this study was to use multifractal analysis to characterize the distribution of three trace elements, namely Au, As, and Sb. Concerning the local geology, the study area comprises predominantly acid rocks, mainly alkaline and calcalkaline granites, gneiss and migmatites. The most significant structural feature of this zone is the presence of a mylonitic band, with an approximate NE-SW orientation. The data set used in this study comprises 323 samples collected, with standard geochemical criteria, preferentially in the B horizon of the soil. Occasionally, where this horizon was not present, samples were collected from the C horizon. Samples were taken in a rectilinear grid. The sampling lines were perpendicular to the NE-SW tectonic structures. Frequency distributions of the studied elements departed from normal. Coefficients of variation ranked as follows: Sb < As < Au. Significant correlation coefficients between Au, Sb, and As were found, although these were low. The so-called 'gliding box' algorithm (GB), proposed originally for lacunarity analysis, has been extended to multifractal modelling and provides an alternative to the 'box-counting' method for implementing multifractal analysis. The partitioning method applied in the GB algorithm constructs samples by gliding a box of a certain size (a) over the grid map in all possible directions. An "up-scaling" partitioning process begins with a box of minimum size or area (amin) and proceeds up to a certain size smaller than the total area A. An advantage of the GB method is the large sample size, which usually leads to better statistical results on Dq values, particularly for negative values of q. Because the boxes overlap, the measures defined on them are not statistically independent, and the definition of the measure in the gliding boxes differs accordingly. In order to show the advantages of the GB method, the spatial distributions of As, Sb, and Au in the studied area were analyzed. We discuss the usefulness of this method for numerically characterizing anomalies and differentiating them from the background using the available data of the geochemical survey.
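
    As a concrete illustration of the gliding-box partitioning described above, the short Python sketch below slides a box of size a over a gridded field in all positions, forms the q-th moments of the box masses, and estimates a scaling exponent from the slope of log(moment) versus log(a). The synthetic lognormal field and the parameter choices are purely hypothetical stand-ins for the gridded As/Sb/Au data, and the exact normalisation used to obtain Dq differs between formulations.

    ```python
    import numpy as np

    def gliding_box_moment(grid, box_size, q):
        """q-th moment of the box 'mass' (normalized measure inside a box of side
        box_size), accumulated by gliding the box over all positions of the grid."""
        total = grid.sum()
        n_rows, n_cols = grid.shape
        masses = []
        for i in range(n_rows - box_size + 1):
            for j in range(n_cols - box_size + 1):
                masses.append(grid[i:i + box_size, j:j + box_size].sum() / total)
        masses = np.array(masses)
        masses = masses[masses > 0]               # ignore empty boxes
        return np.mean(masses ** q)

    # Synthetic 64x64 "concentration" map standing in for the gridded trace-element data
    rng = np.random.default_rng(0)
    field = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64))

    sizes = [2, 4, 8, 16]
    q = 2.0
    moments = [gliding_box_moment(field, a, q) for a in sizes]
    # The slope of log(moment) versus log(box size) characterizes the scaling of order q
    slope = np.polyfit(np.log(sizes), np.log(moments), 1)[0]
    print(f"scaling exponent for q = {q}: {slope:.3f}")
    ```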

  18. Three-dimensional Monte Carlo calculation of atmospheric thermal heating rates

    NASA Astrophysics Data System (ADS)

    Klinger, Carolin; Mayer, Bernhard

    2014-09-01

    We present a fast Monte Carlo method for thermal heating and cooling rates in three-dimensional atmospheres. These heating/cooling rates are relevant particularly in broken cloud fields. We compare forward and backward photon tracing methods and present new variance reduction methods to speed up the calculations. For this application it turns out that backward tracing is in most cases superior to forward tracing. Since heating rates may be either calculated as the difference between emitted and absorbed power per volume or alternatively from the divergence of the net flux, both approaches have been tested. We found that the absorption/emission method is superior (with respect to computational time for a given uncertainty) if the optical thickness of the grid box under consideration is smaller than about 5 while the net flux divergence may be considerably faster for larger optical thickness. In particular, we describe the following three backward tracing methods: the first and most simple method (EMABS) is based on a random emission of photons in the grid box of interest and a simple backward tracing. Since only those photons which cross the grid box boundaries contribute to the heating rate, this approach behaves poorly for large optical thicknesses which are common in the thermal spectral range. For this reason, the second method (EMABS_OPT) uses a variance reduction technique to improve the distribution of the photons in a way that more photons are started close to the grid box edges and thus contribute to the result which reduces the uncertainty. The third method (DENET) uses the flux divergence approach where - in backward Monte Carlo - all photons contribute to the result, but in particular for small optical thickness the noise becomes large. The three methods have been implemented in MYSTIC (Monte Carlo code for the phYSically correct Tracing of photons In Cloudy atmospheres). All methods are shown to agree within the photon noise with each other and with a discrete ordinate code for a one-dimensional case. Finally a hybrid method is built using a combination of EMABS_OPT and DENET, and application examples are shown. It should be noted that for this application, only little improvement is gained by EMABS_OPT compared to EMABS.
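
    For readers unfamiliar with the flux-divergence route mentioned above, the toy Python sketch below converts net upward thermal fluxes on layer interfaces into layer heating rates via -dF_net/dz / (rho*cp); the alternative bookkeeping accumulates (absorbed - emitted) power per grid-box volume and, for an energetically consistent radiation field, yields the same numbers. The column values are invented for illustration and are unrelated to the MYSTIC code itself.

    ```python
    import numpy as np

    cp = 1004.0                                        # J kg-1 K-1
    rho = 1.0                                          # kg m-3, toy constant-density column
    z_levels = np.array([0.0, 500.0, 1000.0, 1500.0])  # layer interface heights (m)
    f_net = np.array([80.0, 60.0, 45.0, 40.0])         # net upward flux at interfaces (W m-2)

    dz = np.diff(z_levels)
    heating_w_m3 = -np.diff(f_net) / dz                # flux-divergence route; positive = warming
    heating_k_day = heating_w_m3 / (rho * cp) * 86400.0

    # The absorption/emission route would instead accumulate (absorbed - emitted)
    # power per grid-box volume from the traced photons; for a consistent field
    # both routes give identical layer heating rates.
    for k, h in enumerate(heating_k_day):
        print(f"layer {k}: {h:+.2f} K/day")
    ```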

  19. Combining Synthetic Human Odours and Low-Cost Electrocuting Grids to Attract and Kill Outdoor-Biting Mosquitoes: Field and Semi-Field Evaluation of an Improved Mosquito Landing Box

    PubMed Central

    Matowo, Nancy S.; Koekemoer, Lizette L.; Moore, Sarah J.; Mmbando, Arnold S.; Mapua, Salum A.; Coetzee, Maureen; Okumu, Fredros O.

    2016-01-01

    Background On-going malaria transmission is increasingly mediated by outdoor-biting vectors, especially where indoor insecticidal interventions such as long-lasting insecticide treated nets (LLINs) are widespread. Often, the vectors are also physiologically resistant to insecticides, presenting major obstacles for elimination. We tested a combination of electrocuting grids with synthetic odours as an alternative killing mechanism against outdoor-biting mosquitoes. Methods An odour-baited device, the Mosquito Landing Box (MLB), was improved by fitting it with low-cost electrocuting grids to instantly kill mosquitoes attracted to the odour lure, and an automated photo switch to activate the attractant-dispensing and mosquito-killing systems between dusk and dawn. MLBs fitted with one, two or three electrocuting grids were compared outdoors in a malaria endemic village in Tanzania, where vectors had lost susceptibility to pyrethroids. MLBs with three grids were also tested in a large semi-field cage (9.6×9.6×4.5 m), to assess effects on biting densities of laboratory-reared Anopheles arabiensis on volunteers sitting near MLBs. Results Significantly more mosquitoes were killed when MLBs had two or three grids than one grid, in both wet and dry seasons (P<0.05). The MLBs were highly efficient against Mansonia species and the malaria vector An. arabiensis. Of all mosquitoes, 99% were non-blood fed, suggesting host-seeking status. In the semi-field tests, the MLBs reduced the mean number of malaria mosquitoes attempting to bite humans fourfold. Conclusion The improved odour-baited MLBs effectively kill outdoor-biting malaria vector mosquitoes that are behaviourally and physiologically resistant to insecticidal interventions such as LLINs. The MLBs reduce human-biting vector densities even when used close to humans, and are insecticide-free, hence potentially circumventing insecticide resistance. The devices could be used either as surveillance tools or as complementary mosquito control interventions to accelerate malaria elimination where outdoor transmission is significant. PMID:26789733

  20. ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities

    NASA Astrophysics Data System (ADS)

    Neggers, R.

    2014-12-01

    Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, are creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi-plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme. The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.

  1. From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact

    PubMed Central

    Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael

    2005-01-01

    General circulation models (GCM) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, involving partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data is aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17° N climatic gradient and a 31 year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield. It is concluded that coupling of GCM outputs with plot level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096
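
    The aggregation bias discussed above follows from applying a nonlinear plot-scale partition to a grid-cell mean instead of to the individual rainfall values. The minimal Python sketch below illustrates the effect with an invented storage-capacity partition and hypothetical station rainfall; it is not SARRA-H or the LGO disaggregation model.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical daily rainfall at 25 "virtual stations" inside one GCM grid box (mm)
    station_rain = rng.gamma(shape=0.3, scale=20.0, size=25)
    grid_mean_rain = station_rain.mean()

    def available_water(p_mm, capacity=15.0):
        """Toy partition: only the first `capacity` mm of a daily event can be stored
        for transpiration; the excess runs off or drains."""
        return np.minimum(p_mm, capacity)

    avail_disaggregated = available_water(station_rain).mean()
    avail_from_mean = available_water(grid_mean_rain)

    print(f"grid-mean rain:                         {grid_mean_rain:6.1f} mm")
    print(f"available (per-station, then averaged): {avail_disaggregated:6.1f} mm")
    print(f"available (from grid-mean rain):        {avail_from_mean:6.1f} mm")
    # With intermittent rain the grid-mean estimate is systematically larger,
    # mirroring the yield overestimation reported for dry latitudes.
    ```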

  2. Gridding Global δ 18Owater and Interpreting Core Top δ 18Oforam

    NASA Astrophysics Data System (ADS)

    Legrande, A. N.; Schmidt, G.

    2004-05-01

    Estimations of the oxygen isotope ratio in seawater (δ 18O water) traditionally have relied on regional δ 18O water to salinity relationships to convert seawater salinity into δ 18O water. This indirect method of determining δ 18O water is necessary since δ 18O water measurements are relatively sparse. We improve upon this process by constructing local δ 18O water to salinity curves using the Schmidt et al. (1999) global database of δ 18O water and salinity. We calculate the local δ 18O water to salinity relationship on a 1x1 grid based on the closest database points to each grid box. Each ocean basin is analyzed separately, and each curve is processed to exclude outliers. These local relationships in combination with seawater salinity (Levitus, 1994) allow us to construct a global map of δ 18O water on a 1x1 grid. We combine seawater temperature (Levitus, 1994) with this dataset to predict δ 18O calcite on a 1x1 grid. These predicted values are then compared to previous compilations of core top δ 18O foram data for individual species of foraminifera. This comparison provides insight into the calcification habitats (as inferred by seawater temperature and salinity) of these species. Additionally, we compare the 1x1 grid of δ 18O water to preliminary output from the latest GISS coupled Atmosphere/Ocean GCM that tracks water isotopes through the hydrologic cycle. This comparison provides insight into possible model applications as a tool to aid in interpreting paleo-isotope data.
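
    A minimal sketch of the local-regression gridding idea, assuming a small made-up database of (lat, lon, salinity, δ18O) observations: for each 1x1 box, the nearest database points define a linear δ18O-salinity relation, which is then applied to the gridded salinity. The distance metric, the number of points, and all values are hypothetical simplifications, not the authors' procedure.

    ```python
    import numpy as np

    # Hypothetical database rows: lat, lon, salinity (psu), d18O (permil)
    obs = np.array([
        [10.5, -30.5, 36.0, 1.00],
        [11.5, -29.5, 35.5, 0.85],
        [ 9.5, -31.5, 36.4, 1.10],
        [12.5, -28.5, 35.1, 0.70],
        [ 8.5, -32.5, 36.8, 1.22],
    ])

    def local_d18o(lat, lon, salinity, n_nearest=4):
        """Predict d18O at one grid box from a locally fitted linear d18O-salinity relation."""
        d2 = (obs[:, 0] - lat) ** 2 + (obs[:, 1] - lon) ** 2   # naive degree-space distance
        nearest = obs[np.argsort(d2)[:n_nearest]]
        slope, intercept = np.polyfit(nearest[:, 2], nearest[:, 3], 1)
        return slope * salinity + intercept

    # Example: one grid box centred at 10N, 30W with a Levitus-style salinity of 35.8 psu
    print(f"predicted d18O: {local_d18o(10.0, -30.0, 35.8):.2f} permil")
    ```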

  3. The prediction of sea-surface temperature variations by means of an advective mixed-layer ocean model

    NASA Technical Reports Server (NTRS)

    Atlas, R. M.

    1976-01-01

    An advective mixed layer ocean model was developed by eliminating the assumption of horizontal homogeneity in an already existing mixed layer model, and then superimposing a mean and anomalous wind driven current field. This model is based on the principle of conservation of heat and mechanical energy and utilizes a box grid for the advective part of the calculation. Three phases of experiments were conducted: evaluation of the model's ability to account for climatological sea surface temperature (SST) variations in the cooling and heating seasons, sensitivity tests in which the effect of hypothetical anomalous winds was evaluated, and a thirty-day synoptic calculation using the model. For the case studied, the accuracy of the predictions was improved by the inclusion of advection, although nonadvective effects appear to have dominated.

  4. Simulation of the Summer Monsoon Rainfall over East Asia using the NCEP GFS Cumulus Parameterization at Different Horizontal Resolutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Kyo-Sun; Hong, Song You; Yoon, Jin-Ho

    2014-10-01

    The most recent version of the Simplified Arakawa-Schubert (SAS) cumulus scheme in the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) (GFS SAS) has been implemented into the Weather Research and Forecasting (WRF) model, with the triggering condition and convective mass flux modified to depend on the model's horizontal grid spacing. The East Asian summer monsoon of 2006, from June to August, is selected to evaluate the performance of the modified GFS SAS scheme. Simulated monsoon rainfall with the modified GFS SAS scheme shows better agreement with observations than with the original GFS SAS scheme. The original GFS SAS scheme simulates a similar ratio of subgrid-scale precipitation, which is calculated from the cumulus scheme, to total precipitation regardless of the model's horizontal grid spacing. This is counter-intuitive because the portion of resolved clouds in a grid box should increase as the model grid spacing decreases. This counter-intuitive behavior of the original GFS SAS scheme is alleviated by the modified GFS SAS scheme. Further, three different cumulus schemes (Grell and Freitas, Kain and Fritsch, and Betts-Miller-Janjic) are chosen to investigate the role of horizontal resolution in simulated monsoon rainfall. The performance of high-resolution modeling is not always enhanced as the spatial resolution becomes higher. Even though the improvement in the probability density function of rain rate and in longwave fluxes with the higher-resolution simulation is robust regardless of the choice of cumulus parameterization scheme, the overall skill score for surface rainfall does not increase monotonically with spatial resolution.

  5. A comparative analysis of dynamic grids vs. virtual grids using the A3pviGrid framework.

    PubMed

    Shankaranarayanan, Avinas; Amaldas, Christine

    2010-11-01

    With the proliferation of quad/multi-core microprocessors in mainstream platforms such as desktops and workstations, a large number of unused CPU cycles can be utilized for running virtual machines (VMs) as dynamic nodes in distributed environments. Grid services and their service-oriented business broker, now termed cloud computing, could deploy image-based virtualization platforms enabling agent-based resource management and dynamic fault management. In this paper we present an efficient way of utilizing heterogeneous virtual machines on idle desktops as an environment for consumption of high-performance grid services. Spurious and exponential increases in the size of the datasets are constant concerns in the medical and pharmaceutical industries due to the constant discovery and publication of large sequence databases. Traditional algorithms are not designed to handle large data sizes under sudden and dynamic changes in the execution environment, as previously discussed. This research was undertaken to compare our previous results with those obtained by running the same test dataset on a virtual Grid platform using virtual machines (virtualization). The implemented architecture, A3pviGrid, utilizes game-theoretic optimization and agent-based team formation (coalition) algorithms to improve scalability with respect to team formation. Due to the dynamic nature of distributed systems (as discussed in our previous work), all interactions were made local within a team transparently. This paper is a proof of concept of an experimental mini-Grid test-bed compared to running the platform on local virtual machines on a local test cluster. This was done to give every agent its own execution platform, enabling anonymity and better control of the dynamic environmental parameters. We also analyze the performance and scalability of BLAST in a multiple-virtual-node setup and present our findings. This paper is an extension of our previous research on improving the BLAST application framework using dynamic Grids on virtualization platforms such as VirtualBox.

  6. Quantifying the impact of sub-grid surface wind variability on sea salt and dust emissions in CAM5

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Zhao, Chun; Wan, Hui; Qian, Yun; Easter, Richard C.; Ghan, Steven J.; Sakaguchi, Koichi; Liu, Xiaohong

    2016-02-01

    This paper evaluates the impact of sub-grid variability of surface wind on sea salt and dust emissions in the Community Atmosphere Model version 5 (CAM5). The basic strategy is to calculate emission fluxes multiple times, using different wind speed samples of a Weibull probability distribution derived from model-predicted grid-box mean quantities. In order to derive the Weibull distribution, the sub-grid standard deviation of surface wind speed is estimated by taking into account four mechanisms: turbulence under neutral and stable conditions, dry convective eddies, moist convective eddies over the ocean, and air motions induced by mesoscale systems and fine-scale topography over land. The contributions of turbulence and dry convective eddy are parameterized using schemes from the literature. Wind variabilities caused by moist convective eddies and fine-scale topography are estimated using empirical relationships derived from an operational weather analysis data set at 15 km resolution. The estimated sub-grid standard deviations of surface wind speed agree well with reference results derived from 1 year of global weather analysis at 15 km resolution and from two regional model simulations with 3 km grid spacing. The wind-distribution-based emission calculations are implemented in CAM5. In terms of computational cost, the increase in total simulation time turns out to be less than 3 %. Simulations at 2° resolution indicate that sub-grid wind variability has relatively small impacts (about 7 % increase) on the global annual mean emission of sea salt aerosols, but considerable influence on the emission of dust. Among the considered mechanisms, dry convective eddies and mesoscale flows associated with topography are major causes of dust emission enhancement. With all four mechanisms included and without additional adjustment of uncertain parameters in the model, the simulated global and annual mean dust emission increases by about 50 % compared to the default model. By tuning the globally constant dust emission scale factor, the global annual mean dust emission, aerosol optical depth, and top-of-atmosphere radiative fluxes can be adjusted to the level of the default model, but the frequency distribution of dust emission changes, with more contribution from weaker wind events and less contribution from stronger wind events. In Africa and Asia, the overall frequencies of occurrence of dust emissions increase, and the seasonal variations are enhanced, while the geographical patterns of the emission frequency show little change.
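
    The sampling strategy described above can be sketched in a few lines of Python: a Weibull distribution is constructed from the grid-box mean wind and an assumed sub-grid standard deviation, and a nonlinear, threshold-dependent emission law is averaged over samples from that distribution rather than evaluated at the mean. The u^3-type flux law and all numbers below are hypothetical; this is not the CAM5 code.

    ```python
    import numpy as np
    from math import gamma

    u_mean, u_std = 6.0, 2.5                   # m/s, illustrative grid-box mean and sub-grid std

    def weibull_shape(cv, lo=0.5, hi=20.0, iters=60):
        """Bisect for the Weibull shape k whose coefficient of variation matches cv."""
        def cv_of(k):
            return (gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1) ** 0.5
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            # cv decreases monotonically with k on this interval
            if cv_of(mid) > cv:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    k = weibull_shape(u_std / u_mean)
    lam = u_mean / gamma(1 + 1 / k)            # Weibull scale parameter

    rng = np.random.default_rng(1)
    u = lam * rng.weibull(k, size=1000)        # sub-grid wind-speed samples

    def dust_flux(speed, u_t=6.5):
        """Toy saltation-style law: zero below the threshold u_t, roughly u^3 above it."""
        return np.where(speed > u_t, (speed - u_t) * speed ** 2, 0.0)

    print(f"flux from the grid-box mean wind:    {float(dust_flux(u_mean)):.2f}")
    print(f"flux averaged over sub-grid samples: {dust_flux(u).mean():.2f}")
    # Sub-threshold mean winds can still emit dust through the strong tail of the
    # distribution, which is why accounting for sub-grid variability raises emissions.
    ```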

  7. Quantifying the impact of sub-grid surface wind variability on sea salt and dust emissions in CAM5

    DOE PAGES

    Zhang, Kai; Zhao, Chun; Wan, Hui; ...

    2016-02-12

    This paper evaluates the impact of sub-grid variability of surface wind on sea salt and dust emissions in the Community Atmosphere Model version 5 (CAM5). The basic strategy is to calculate emission fluxes multiple times, using different wind speed samples of a Weibull probability distribution derived from model-predicted grid-box mean quantities. In order to derive the Weibull distribution, the sub-grid standard deviation of surface wind speed is estimated by taking into account four mechanisms: turbulence under neutral and stable conditions, dry convective eddies, moist convective eddies over the ocean, and air motions induced by mesoscale systems and fine-scale topography over land. The contributions of turbulence and dry convective eddy are parameterized using schemes from the literature. Wind variabilities caused by moist convective eddies and fine-scale topography are estimated using empirical relationships derived from an operational weather analysis data set at 15 km resolution. The estimated sub-grid standard deviations of surface wind speed agree well with reference results derived from 1 year of global weather analysis at 15 km resolution and from two regional model simulations with 3 km grid spacing. The wind-distribution-based emission calculations are implemented in CAM5. In terms of computational cost, the increase in total simulation time turns out to be less than 3 %. Simulations at 2° resolution indicate that sub-grid wind variability has relatively small impacts (about 7 % increase) on the global annual mean emission of sea salt aerosols, but considerable influence on the emission of dust. Among the considered mechanisms, dry convective eddies and mesoscale flows associated with topography are major causes of dust emission enhancement. With all four mechanisms included and without additional adjustment of uncertain parameters in the model, the simulated global and annual mean dust emission increases by about 50 % compared to the default model. By tuning the globally constant dust emission scale factor, the global annual mean dust emission, aerosol optical depth, and top-of-atmosphere radiative fluxes can be adjusted to the level of the default model, but the frequency distribution of dust emission changes, with more contribution from weaker wind events and less contribution from stronger wind events. Lastly, in Africa and Asia, the overall frequencies of occurrence of dust emissions increase, and the seasonal variations are enhanced, while the geographical patterns of the emission frequency show little change.

  8. Quantifying the impact of sub-grid surface wind variability on sea salt and dust emissions in CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Zhao, Chun; Wan, Hui

    This paper evaluates the impact of sub-grid variability of surface wind on sea salt and dust emissions in the Community Atmosphere Model version 5 (CAM5). The basic strategy is to calculate emission fluxes multiple times, using different wind speed samples of a Weibull probability distribution derived from model-predicted grid-box mean quantities. In order to derive the Weibull distribution, the sub-grid standard deviation of surface wind speed is estimated by taking into account four mechanisms: turbulence under neutral and stable conditions, dry convective eddies, moist convective eddies over the ocean, and air motions induced by mesoscale systems and fine-scale topography over land. The contributions of turbulence and dry convective eddy are parameterized using schemes from the literature. Wind variabilities caused by moist convective eddies and fine-scale topography are estimated using empirical relationships derived from an operational weather analysis data set at 15 km resolution. The estimated sub-grid standard deviations of surface wind speed agree well with reference results derived from 1 year of global weather analysis at 15 km resolution and from two regional model simulations with 3 km grid spacing. The wind-distribution-based emission calculations are implemented in CAM5. In terms of computational cost, the increase in total simulation time turns out to be less than 3 %. Simulations at 2° resolution indicate that sub-grid wind variability has relatively small impacts (about 7 % increase) on the global annual mean emission of sea salt aerosols, but considerable influence on the emission of dust. Among the considered mechanisms, dry convective eddies and mesoscale flows associated with topography are major causes of dust emission enhancement. With all four mechanisms included and without additional adjustment of uncertain parameters in the model, the simulated global and annual mean dust emission increases by about 50 % compared to the default model. By tuning the globally constant dust emission scale factor, the global annual mean dust emission, aerosol optical depth, and top-of-atmosphere radiative fluxes can be adjusted to the level of the default model, but the frequency distribution of dust emission changes, with more contribution from weaker wind events and less contribution from stronger wind events. Lastly, in Africa and Asia, the overall frequencies of occurrence of dust emissions increase, and the seasonal variations are enhanced, while the geographical patterns of the emission frequency show little change.

  9. Approximate deconvolution model for the simulation of turbulent gas-solid flows: An a priori analysis

    NASA Astrophysics Data System (ADS)

    Schneiderbauer, Simon; Saeedipour, Mahdi

    2018-02-01

    Highly resolved two-fluid model (TFM) simulations of gas-solid flows in vertical periodic channels have been performed to study closures for the filtered drag force and the Reynolds-stress-like contribution stemming from the convective terms. An approximate deconvolution model (ADM) for the large-eddy simulation of turbulent gas-solid suspensions is detailed and subsequently used to reconstruct those unresolved contributions in an a priori manner. With such an approach, an approximation of the unfiltered solution is obtained by repeated filtering allowing the determination of the unclosed terms of the filtered equations directly. A priori filtering shows that predictions of the ADM model yield fairly good agreement with the fine grid TFM simulations for various filter sizes and different particle sizes. In particular, strong positive correlation (ρ > 0.98) is observed at intermediate filter sizes for all sub-grid terms. Additionally, our study reveals that the ADM results moderately depend on the choice of the filters, such as box and Gaussian filter, as well as the deconvolution order. The a priori test finally reveals that ADM is superior compared to isotropic functional closures proposed recently [S. Schneiderbauer, "A spatially-averaged two-fluid model for dense large-scale gas-solid flows," AIChE J. 63, 3544-3562 (2017)].
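
    The repeated-filtering idea behind the ADM can be sketched with a truncated van Cittert series in one dimension: the filtered field is approximately deconvolved as u* = sum_k (I - G)^k applied to the filtered field, and the result is used to estimate an unclosed sub-filter term a priori. The box filter, the test signal, and the chosen sub-filter term below are illustrative assumptions, not the authors' TFM configuration.

    ```python
    import numpy as np

    def box_filter(u, width=5):
        """Simple top-hat filter with periodic wrap-around padding."""
        kernel = np.ones(width) / width
        padded = np.concatenate([u[-width:], u, u[:width]])
        return np.convolve(padded, kernel, mode="same")[width:-width]

    def adm_deconvolve(u_bar, order=3, width=5):
        """Truncated van Cittert series: u* ~= sum_k (I - G)^k applied to u_bar."""
        u_star = u_bar.copy()
        residual = u_bar.copy()
        for _ in range(order):
            residual = residual - box_filter(residual, width)
            u_star = u_star + residual
        return u_star

    # Toy "fine-grid" signal standing in for a resolved TFM quantity
    x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    u = np.sin(x) + 0.4 * np.sin(7 * x) + 0.2 * np.sin(15 * x)

    u_bar = box_filter(u)
    u_star = adm_deconvolve(u_bar)

    # Example unclosed term: the sub-filter contribution of u*u, exact vs. ADM estimate
    exact = box_filter(u * u) - u_bar * u_bar
    adm = box_filter(u_star * u_star) - u_bar * u_bar
    corr = np.corrcoef(exact, adm)[0, 1]
    print(f"a priori correlation of the ADM estimate: {corr:.3f}")
    ```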

  10. Modifications made to ModelMuse to add support for the Saturated-Unsaturated Transport model (SUTRA)

    USGS Publications Warehouse

    Winston, Richard B.

    2014-01-01

    This report (1) describes modifications to ModelMuse, as described in U.S. Geological Survey (USGS) Techniques and Methods (TM) 6–A29 (Winston, 2009), to add support for the Saturated-Unsaturated Transport model (SUTRA) (Voss and Provost, 2002; version of September 22, 2010) and (2) supplements USGS TM 6–A29. Modifications include changes to the main ModelMuse window where the model is designed, addition of methods for generating a finite-element mesh suitable for SUTRA, defining how some functions should apply when using a finite-element mesh rather than a finite-difference grid (as originally programmed in ModelMuse), and applying spatial interpolation to angles. In addition, the report describes ways of handling objects on the front view of the model and displaying data. A tabulation contains a summary of the new or modified dialog boxes.

  11. Validation and Development of the GPCP Experimental One-Degree Daily (1DD) Global Precipitation Product

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Einaud, Franco (Technical Monitor)

    2000-01-01

    The One-Degree Daily (1DD) precipitation dataset has been developed for the Global Precipitation Climatology Project (GPCP) and is currently in beta test preparatory to release as an official GPCP product. The 1DD provides a globally-complete, observation-only estimate of precipitation on a daily 1 deg. x 1 deg. grid for the period 1997 through early 2000 (by the time of the conference). In the latitude band 40N-40S the 1DD uses the Threshold-Matched Precipitation Index (TMPI), a GPI-like IR product with the pixel-level T(sub b) threshold and (single) conditional rain rate determined locally for each month by the frequency of precipitation in the GPROF SSM/I product and by the precipitation amount in the GPCP monthly satellite-gauge (SG) combination. Outside 40N-40S the 1DD uses a scaled TOVS precipitation estimate that has month-by-month adjustments based on the TMPI and the SG. Early validation results are encouraging. The 1DD shows relatively large scatter about the daily validation values in individual grid boxes, as expected for a technique that depends on cloud-sensing schemes such as the TMPI and TOVS. On the other hand, the time series of 1DD shows good correlation with validation in individual boxes. For example, the 1997-1998 time series of 1DD and Oklahoma Mesonet values in a grid box in northeastern Oklahoma has a correlation coefficient of 0.73. Looking more carefully at these two time series, the number of raining days for the 1DD is within 7% of the Mesonet value, while the distribution of daily rain values is very similar. Other tests indicate that area- or time-averaging improves the error characteristics, making the data set highly attractive to users interested in stream flow, short-term regional climatology, and model comparisons. The second generation of the 1DD product is currently under development; it is designed to directly incorporate TRMM and other high-quality precipitation estimates. These data are generally sparse because they are observed by low-orbit satellites, so a fair amount of work must be devoted to analyzing the effect of data boundaries. This work is laying the groundwork for effective use of the NASA Global Precipitation Mission, which will have full global coverage by low-orbit passive microwave satellites every three hours.
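
    A minimal sketch of the threshold-matching logic behind the TMPI, under made-up inputs: the IR brightness-temperature threshold is chosen so that the flagged pixel fraction matches the SSM/I-derived precipitation frequency, and the single conditional rain rate is chosen so that the flagged pixels accumulate to the SG monthly total. The pixel counts, frequencies and totals are hypothetical, and the production GPCP algorithm involves additional steps not shown here.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    tb = rng.normal(260.0, 20.0, size=20000)   # hypothetical IR Tb pixels for one box and month (K)
    target_rain_fraction = 0.12                # precipitation frequency from GPROF SSM/I (hypothetical)
    sg_monthly_total_mm = 90.0                 # monthly total from the SG combination (hypothetical)

    # 1) Threshold matching: flag the coldest `target_rain_fraction` of pixels as raining
    tb_threshold = np.quantile(tb, target_rain_fraction)
    raining = tb < tb_threshold

    # 2) Single conditional rain rate: flagged pixels must sum to the SG monthly total
    rate_per_flagged_pixel = sg_monthly_total_mm / np.count_nonzero(raining)

    # Daily totals then follow from counting flagged pixels on each day
    daily_totals = [np.count_nonzero(day < tb_threshold) * rate_per_flagged_pixel
                    for day in np.array_split(tb, 30)]
    print(f"Tb threshold: {tb_threshold:.1f} K, reconstructed monthly total: {sum(daily_totals):.1f} mm")
    ```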

  12. Effects of cumulus entrainment and multiple cloud types on a January global climate model simulation

    NASA Technical Reports Server (NTRS)

    Yao, Mao-Sung; Del Genio, Anthony D.

    1989-01-01

    An improved version of the GISS Model II cumulus parameterization designed for long-term climate integrations is used to study the effects of entrainment and multiple cloud types on the January climate simulation. Instead of prescribing convective mass as a fixed fraction of the cloud base grid-box mass, it is calculated based on the closure assumption that the cumulus convection restores the atmosphere to a neutral moist convective state at cloud base. This change alone significantly improves the distribution of precipitation, convective mass exchanges, and frequencies in the January climate. The vertical structure of the tropical atmosphere exhibits quasi-equilibrium behavior when this closure is used, even though there is no explicit constraint applied above cloud base.

  13. ISCCP Cloud Properties Associated with Standard Cloud Types Identified in Individual Surface Observations

    NASA Technical Reports Server (NTRS)

    Hahn, Carole J.; Rossow, William B.; Warren, Stephen G.

    1999-01-01

    Individual surface weather observations from land stations and ships are compared with individual cloud retrievals of the International Satellite Cloud Climatology Project (ISCCP), Stage C1, for an 8-year period (1983-1991) to relate cloud optical thicknesses and cloud-top pressures obtained from satellite data to the standard cloud types reported in visual observations from the surface. Each surface report is matched to the corresponding ISCCP-C1 report for the time of observation for the 280x280-km grid-box containing that observation. Classes of the surface reports are identified in which a particular cloud type was reported present, either alone or in combination with other clouds. For each class, cloud amounts from both surface and C1 data, base heights from surface data, and the frequency distributions of cloud-top pressure (p(sub c)) and optical thickness (tau) from C1 data are averaged over 15-degree latitude zones, for land and ocean separately, for 3-month seasons. The frequency distribution of p(sub c) and tau is plotted for each of the surface-defined cloud types occurring both alone and with other clouds. The average cloud-top pressures within a grid-box do not always correspond well with values expected for a reported cloud type, particularly for the higher clouds Ci, Ac, and Cb. In many cases this is because the satellites also detect clouds within the grid-box that are outside the field of view of the surface observer. The highest average cloud tops are found for the most extensive cloud type, Ns, averaging 7 km globally and reaching 9 km in the ITCZ. Ns also has the greatest average retrieved optical thickness, tau approximately equal to 20. Cumulonimbus clouds may actually attain far greater heights and depths, but do not fill the grid-box. The tau-p(sub c) distributions show features that distinguish the high, middle, and low clouds reported by the surface observers. However, the distribution patterns for the individual low cloud types (Cu, Sc, St) occurring alone overlap to such an extent that it is not possible to distinguish these cloud types from each other on the basis of tau-p(sub c) values alone. Other cloud types whose tau-p(sub c) distributions are indistinguishable are Cb, Ns, and thick As. However, the tau-p(sub c) distribution patterns for the different low cloud types are nevertheless distinguishable when all occurrences of a low cloud type are included, indicating that the different low types differ in their probabilities of co-occurrence with middle and high clouds.

  14. Numerical and experimental study of the fundamental flow characteristics of a 3D gully box under drainage.

    PubMed

    Lopes, Pedro; Carvalho, Rita F; Leandro, Jorge

    2017-05-01

    Numerical studies regarding the influence of entrapped air on the hydraulic performance of gullies are nonexistent. This is due to the lack of a model that simulates the air-entrainment phenomena and consequently the entrapped air. In this work, we used experimental data to validate an air-entrainment model that uses a Volume-of-Fluid based method to detect the interface and the Shear-stress transport k-ω turbulence model. The air is detected at a sub-grid scale, generated by a source term and transported using a slip velocity formulation. Results are shown in terms of free-surface elevation, velocity profiles, turbulent kinetic energy and discharge coefficients. The air-entrainment model allied to the turbulence model showed good accuracy in the prediction of the zones of the gully where the air is more concentrated.

  15. SIRTF Tools for DIRT

    NASA Astrophysics Data System (ADS)

    Pound, M. W.; Wolfire, M. G.; Amarnath, N. S.

    2003-12-01

    The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS, located at http://dustem.astro.umd.edu) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 5 years. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. We are adding new functionality to DIRT to support new missions like SIRTF and SOFIA. A new Instrument module allows for plotting of the model points convolved with the spatial and spectral responses of the selected instrument. This lets users better fit data from specific instruments. Currently, we have implemented modules for the Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) on SIRTF.

  16. A stochastic parameterization for deep convection using cellular automata

    NASA Astrophysics Data System (ADS)

    Bengtsson, L.; Steinheimer, M.; Bechtold, P.; Geleyn, J.

    2012-12-01

    Cumulus parameterizations used in most operational weather and climate models today are based on the mass-flux concept which took form in the early 1970s. In such schemes it is assumed that a unique relationship exists between the ensemble-average of the sub-grid convection and the instantaneous state of the atmosphere in a vertical grid box column. However, such a relationship is unlikely to be described by a simple deterministic function (Palmer, 2011). Thus, because of the statistical nature of the parameterization challenge, it has been recognized by the community that it is important to introduce stochastic elements to the parameterizations (for instance: Plant and Craig, 2008, Khouider et al. 2010, Frenkel et al. 2011, Bengtsson et al. 2011, but the list is far from exhaustive). There are undoubtedly many ways in which stochasticity can enter new developments. In this study we use a two-way interacting cellular automaton (CA), as its intrinsic nature possesses many qualities interesting for deep convection parameterization. In the one-dimensional entraining plume approach, there is no parameterization of horizontal transport of heat, moisture or momentum due to cumulus convection. In reality, mass transport due to gravity waves that propagate in the horizontal can trigger new convection, important for the organization of deep convection (Huang, 1988). The self-organizational characteristics of the CA allow for lateral communication between adjacent NWP model grid-boxes, and temporal memory. Thus the CA scheme used in this study contains three interesting components for representation of cumulus convection, which are not present in the traditional one-dimensional bulk entraining plume method: horizontal communication, memory and stochasticity. The scheme is implemented in the high resolution regional NWP model ALARO, and simulations show enhanced organization of convective activity along squall-lines. Probabilistic evaluation demonstrates an enhanced spread in large-scale variables in regions where convective activity is large. A two-month extended evaluation of the deterministic behaviour of the scheme indicates a neutral impact on forecast skill. References: Bengtsson, L., H. Körnich, E. Källén, and G. Svensson, 2011: Large-scale dynamical response to sub-grid scale organization provided by cellular automata. Journal of the Atmospheric Sciences, 68, 3132-3144. Frenkel, Y., A. Majda, and B. Khouider, 2011: Using the stochastic multicloud model to improve tropical convective parameterization: A paradigm example. Journal of the Atmospheric Sciences, doi: 10.1175/JAS-D-11-0148.1. Huang, X.-Y., 1988: The organization of moist convection by internal gravity waves. Tellus A, 42, 270-285. Khouider, B., J. Biello, and A. Majda, 2010: A Stochastic Multicloud Model for Tropical Convection. Comm. Math. Sci., 8, 187-216. Palmer, T., 2011: Towards the Probabilistic Earth-System Simulator: A Vision for the Future of Climate and Weather Prediction. Quarterly Journal of the Royal Meteorological Society, 138 (2012), 841-861. Plant, R. and G. Craig, 2008: A stochastic parameterization for deep convection based on equilibrium statistics. J. Atmos. Sci., 65, 87-105.
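
    The following toy Python cellular automaton illustrates the two ingredients emphasised above, lateral communication and memory: active cells raise the birth probability of their neighbours, and active cells tend to persist. The rule, the probabilities and the grid size are hypothetical; in the parameterization, the CA state would additionally be coupled to the mass-flux closure in each NWP grid box.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    nx = ny = 40
    state = (rng.random((ny, nx)) < 0.02).astype(int)        # sparse initial convective cells

    def n_active_neighbours(s):
        """Count active neighbours in a periodic 8-cell Moore neighbourhood."""
        total = np.zeros_like(s)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di or dj:
                    total += np.roll(np.roll(s, di, axis=0), dj, axis=1)
        return total

    for step in range(20):
        n = n_active_neighbours(state)
        p_birth = np.minimum(0.15 * n, 1.0)          # lateral communication: neighbours seed new cells
        p_survive = np.minimum(0.6 + 0.05 * n, 1.0)  # memory: active cells tend to persist
        births = (state == 0) & (rng.random(state.shape) < p_birth)
        survivors = (state == 1) & (rng.random(state.shape) < p_survive)
        state = (births | survivors).astype(int)

    print(f"active fraction after 20 steps: {state.mean():.3f}")
    ```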

  17. Evaluating soil moisture constraints on surface fluxes in land surface models globally

    NASA Astrophysics Data System (ADS)

    Harris, Phil; Gallego-Elvira, Belen; Taylor, Christopher; Folwell, Sonja; Ghent, Darren; Veal, Karen; Hagemann, Stefan

    2016-04-01

    Soil moisture availability exerts a strong control over land evaporation in many regions. However, global climate models (GCMs) disagree on when and where evaporation is limited by soil moisture. Evaluation of the relevant modelled processes has suffered from a lack of reliable, global observations of land evaporation at the GCM grid box scale. Satellite observations of land surface temperature (LST) offer spatially extensive but indirect information about the surface energy partition and, under certain conditions, about the influence of soil moisture availability on evaporation. Specifically, as soil moisture decreases during rain-free dry spells, evaporation may become limited, leading to increases in LST and sensible heat flux. We use MODIS Terra and Aqua observations of LST at 1 km from 2000 to 2012 to assess changes in the surface energy partition during dry spells lasting 10 days or longer. The clear-sky LST data are aggregated to a global 0.5° grid before being composited as a function of dry-spell day across many events in a particular region and season. These composites are then used to calculate a Relative Warming Rate (RWR) between the land surface and near-surface air. This RWR can diagnose the typical strength of short-term changes in surface heat fluxes and, by extension, changes in soil moisture limitation on evaporation. Offline land surface model (LSM) simulations offer a relatively inexpensive way to evaluate the surface processes of GCMs. They have the benefits that multiple models, and versions of models, can be compared on a common grid and using unbiased forcing. Here, we use the RWR diagnostic to assess global, offline simulations of several LSMs (e.g., JULES and JSBACH) driven by the WATCH Forcing Data-ERA Interim. Both the observed RWR and the LSMs use the same 0.5° grid, which allows the observed clear-sky sampling inherent in the underlying MODIS LST to be applied to the model outputs directly. This approach avoids some of the difficulties in analysing free-running simulations in which land and atmosphere are coupled and, as such, it provides a flexible intermediate step in the assessment of surface processes in GCMs.
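
    A minimal sketch of the compositing step behind the RWR diagnostic, using synthetic events: land-minus-air temperature differences are composited by dry-spell day across events (with random clear-sky gaps), and the slope of the composite gives the relative warming rate. The event count, noise level and imposed trend are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n_events, n_days = 200, 10
    true_rate = 0.15                                   # K per dry-spell day (imposed, for illustration)

    # Hypothetical clear-sky (LST - T_air) anomalies per event and dry-spell day
    dT = (true_rate * np.arange(n_days)[None, :]
          + rng.normal(0.0, 1.0, size=(n_events, n_days)))

    # Mimic clear-sky sampling: mask roughly 30% of event-days as cloudy/missing
    dT = np.where(rng.random(dT.shape) < 0.3, np.nan, dT)

    composite = np.nanmean(dT, axis=0)                 # composite over events for each dry-spell day
    rwr = np.polyfit(np.arange(n_days), composite, 1)[0]
    print(f"composite relative warming rate: {rwr:.3f} K per dry-spell day")
    ```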

  18. Improving the technique of vitreous cryo-sectioning for cryo-electron tomography: electrostatic charging for section attachment and implementation of an anti-contamination glove box.

    PubMed

    Pierson, Jason; Fernández, José Jesús; Bos, Erik; Amini, Shoaib; Gnaegi, Helmut; Vos, Matthijn; Bel, Bennie; Adolfsen, Freek; Carrascosa, José L; Peters, Peter J

    2010-02-01

    Cryo-electron tomography of vitreous cryo-sections is the most suitable method for exploring the 3D organization of biological samples that are too large to be imaged in an intact state. Producing good quality vitreous cryo-sections, however, is challenging. Here, we focused on the major obstacles to success: contamination in and around the microtome, and attachment of the ribbon of sections to an electron microscopic grid support film. The conventional method for attaching sections to the grid has involved mechanical force generated by a crude stamping or pressing device, but this disrupts the integrity of vitreous cryo-sections. Furthermore, attachment is poor, and parts of the ribbon of sections are often far from the support film. This results in specimen instability during image acquisition and subsequent difficulty with aligning projection images. Here, we have implemented a protective glove box surrounding the cryo-ultramicrotome that reduces the humidity around and within the microtome during sectioning. We also introduce a novel way to attach vitreous cryo-sections to an EM grid support film using electrostatic charging. The ribbon of vitreous cryo-sections remains in place during transfer and storage and is devoid of stamping-related artefacts. We illustrate these improvements by exploring the structure of putative cellular 80S ribosomes within 50 nm vitreous cryo-sections of Saccharomyces cerevisiae.

  19. Restorative effects of curcumin on sleep-deprivation induced memory impairments and structural changes of the hippocampus in a rat model.

    PubMed

    Noorafshan, Ali; Karimi, Fatemeh; Kamali, Ali-Mohammad; Karbalay-Doust, Saied; Nami, Mohammad

    2017-11-15

    The present study examined the consequences of rapid eye-movement sleep-deprivation (REM-SD) with or without curcumin treatment. The outcome measures comprised quantitative features in the three-dimensional reconstruction (3DR) CA1 and dentate gyrus in experimental and control animals using stereological procedures. Male rats were arbitrarily assigned to nine groups based on the intervention and treatment administered including: 1-cage control+distilled water, 2-cage control+curcumin (100mg/kg/day), 3-cage control+olive oil, 4-REM-SD+distilled water, 5-REM-SD+curcumin, 6-REM-SD+olive oil, 7-grid-floor control+distilled water, 8-grid-floor control+curcumin, and 9-grid-floor control+olive oil. Animals in the latter three groups were placed on wire-mesh grids in the sleep-deprivation box. REM-SD was induced by an apparatus comprising a water tank and multiple platforms. After a period of 21days, rats were submitted to the novel object-recognition task. Later, their brains were excised and evaluated using stereological methods. Our results indicated a respective 29% and 31% reduction in the total volume of CA1, and dentate gyrus in REM-SD+distilled water group as compared to the grid-floor control+distilled water group (p<0.05). Other than the above, the overall number of the pyramidal cells of CA1 and granular cells of dentate gyrus in the sleep-deprived group were found to be reduced by 48% and 25%, respectively. The REM-SD+distilled water group also exhibited impaired object recognition memory and deformed three-dimensional reconstructions of these regions. The volume, cell number, reconstruction, object recognition time, and body weight were however recovered in the REM-SD+curcumin compared to the REM-SD+distilled water group. This suggests the potential neuro-restorative effects of curcumin in our model. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. International Conference on Numerical Grid Generation in Computational Fluid Dynamics

    DTIC Science & Technology

    1989-04-30


  1. Using AUVs and Sources of Opportunity to Evaluate Acoustic Propagation

    DTIC Science & Technology

    1999-09-30

    pattern simulated that of a typical minefield survey. The AUV followed a lawn mower pattern with a constant 3-knot speed inside a 500-m square grid box...same lawn mower pattern as used in the previous experiment, except it only surfaced at the east turns. Thirdly, the MFSK modem signal was only

  2. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne

    2011-11-01

    We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
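
    The two grid-transfer operators at the heart of the CGP approach, full-weighting restriction and bilinear prolongation for a factor-of-two coarsening, can be sketched as below. This is a simplified, stand-alone illustration (injection is used on the boundaries and no Poisson solve is shown), not the authors' solver.

    ```python
    import numpy as np

    def restrict_full_weighting(f):
        """Fine -> coarse (factor 2): full weighting in the interior, injection on boundaries."""
        c = f[::2, ::2].copy()                         # injection everywhere (includes boundaries)
        c[1:-1, 1:-1] = (f[2:-2:2, 2:-2:2] / 4.0
                         + (f[1:-3:2, 2:-2:2] + f[3:-1:2, 2:-2:2]
                            + f[2:-2:2, 1:-3:2] + f[2:-2:2, 3:-1:2]) / 8.0
                         + (f[1:-3:2, 1:-3:2] + f[1:-3:2, 3:-1:2]
                            + f[3:-1:2, 1:-3:2] + f[3:-1:2, 3:-1:2]) / 16.0)
        return c

    def prolong_bilinear(c, fine_shape):
        """Coarse -> fine by separable linear interpolation (fine index i maps to coarse i/2)."""
        ny, nx = fine_shape
        xc = np.arange(nx) / 2.0
        yc = np.arange(ny) / 2.0
        tmp = np.array([np.interp(xc, np.arange(c.shape[1]), row) for row in c])
        return np.array([np.interp(yc, np.arange(c.shape[0]), col) for col in tmp.T]).T

    fine = np.random.default_rng(5).random((65, 65))   # stand-in for a fine-grid Poisson RHS
    coarse = restrict_full_weighting(fine)             # solve the Poisson equation here instead
    back = prolong_bilinear(coarse, fine.shape)        # then interpolate the solution back
    print(coarse.shape, back.shape)
    ```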

  3. Orbit and sampling requirements: TRMM experience

    NASA Technical Reports Server (NTRS)

    North, Gerald

    1993-01-01

    The Tropical Rainfall Measuring Mission (TRMM) concept originated in 1984. Its overall goal is to produce datasets that can be used in the improvement of general circulation models. A primary objective is a multi-year data stream of monthly averages of rain rate over 500 km boxes over the tropical oceans. Vertical distributions of the hydrometeors, related to latent heat profiles, and the diurnal cycle of rain rates are secondary products believed to be accessible. The mission is sponsored jointly by the U.S. and Japan. TRMM is an approved mission with launch set for 1997. There are many retrieval and ground truth issues still being studied for TRMM, but here we concentrate on sampling since it is the single largest term in the error budget. The TRMM orbit plane is inclined by 35 degrees to the equator, which leads to a precession of the visits to a given grid box through the local hours of the day, requiring three to six weeks to complete the diurnal cycle, depending on latitude. For sampling studies we can consider the swath width to be about 700 km.

  4. Toward Realistic Simulation of low-Level Clouds Using a Multiscale Modeling Framework With a Third-Order Turbulence Closure in its Cloud-Resolving Model Component

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Cheng, Anning

    2010-01-01

    This study presents preliminary results from a multiscale modeling framework (MMF) with an advanced third-order turbulence closure in its cloud-resolving model (CRM) component. In the original MMF, the Community Atmosphere Model (CAM3.5) is used as the host general circulation model (GCM), and the System for Atmospheric Modeling with a first-order turbulence closure is used as the CRM for representing cloud processes in each grid box of the GCM. The results of annual and seasonal means and diurnal variability are compared between the modified and original MMFs and the CAM3.5. The global distributions of low-level cloud amounts and precipitation and the amounts of low-level clouds in the subtropics and middle-level clouds in mid-latitude storm track regions in the modified MMF show substantial improvement relative to the original MMF when both are compared to observations. Some improvements can also be seen in the diurnal variability of precipitation.

  5. Outstanding performance of configuration interaction singles and doubles using exact exchange Kohn-Sham orbitals in real-space numerical grid method

    NASA Astrophysics Data System (ADS)

    Lim, Jaechang; Choi, Sunghwan; Kim, Jaewook; Kim, Woo Youn

    2016-12-01

    To assess the performance of multi-configuration methods using exact exchange Kohn-Sham (KS) orbitals, we implemented configuration interaction singles and doubles (CISD) in a real-space numerical grid code. We obtained KS orbitals with the exchange-only optimized effective potential under the Krieger-Li-Iafrate (KLI) approximation. Thanks to the distinctive features of KLI orbitals compared with Hartree-Fock (HF) orbitals, such as bound virtual orbitals with compact shapes and orbital energy gaps similar to excitation energies, KLI-CISD for small molecules shows much faster convergence as a function of simulation box size and active space (i.e., the number of virtual orbitals) than HF-CISD. The former also gives more accurate excitation energies with only a few dominant configurations than the latter does even with many more configurations. The systematic control of basis set errors is straightforward in grid bases. Therefore, grid-based multi-configuration methods using exact exchange KS orbitals provide a promising new way to make accurate electronic structure calculations.

  6. Power control apparatus and methods for electric vehicles

    DOEpatents

    Gadh, Rajit; Chung, Ching-Yen; Chu, Chi-Cheng; Qiu, Li

    2016-03-22

    Electric vehicle (EV) charging apparatus and methods are described which allow the sharing of charge current between multiple vehicles connected to a single source of charging energy. In addition, this charge sharing can be performed in a grid-friendly manner by lowering current supplied to EVs when necessary in order to satisfy the needs of the grid, or building operator. The apparatus and methods can be integrated into charging stations or can be implemented with a middle-man approach in which a multiple EV charging box, which includes an EV emulator and multiple pilot signal generation circuits, is coupled to a single EV charge station.
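
    As a rough illustration of grid-friendly charge sharing (not the patented control logic), the sketch below splits a single supply's current budget proportionally among connected vehicles and pauses any vehicle whose share would fall below a minimum pilot-signal current; the 6 A floor and the proportional rule are illustrative assumptions.

        def share_charge_current(requested_amps, supply_limit_amps, min_pilot_amps=6.0):
            """Split a single supply's available current among connected EVs.

            requested_amps    : list of per-vehicle requested charge currents (A)
            supply_limit_amps : total current the source (or the grid) allows right now (A)
            min_pilot_amps    : smallest current a J1772-style pilot signal can advertise (A)

            Returns per-vehicle current allocations (A). Vehicles that cannot be
            given the minimum pilot current are paused (allocated 0 A).
            """
            total = sum(requested_amps)
            if total <= supply_limit_amps:
                return list(requested_amps)            # enough headroom: grant all requests
            # Otherwise scale everyone down proportionally to stay within the limit.
            scale = supply_limit_amps / total
            alloc = [r * scale for r in requested_amps]
            # Pause vehicles whose scaled share falls below the minimum pilot current.
            return [a if a >= min_pilot_amps else 0.0 for a in alloc]

        # Example: three EVs asking for 32 A each from a 60 A feed.
        print(share_charge_current([32, 32, 32], 60))   # -> [20.0, 20.0, 20.0]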

  7. Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.

    PubMed

    Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin

    2010-05-12

    Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias of models run at larger scales neglecting subgrid-scale variability. In the present study, we investigate the question whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
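
    The optimization step described above can be sketched generically: scan candidate vegetated fractions of a two-box (bare soil/vegetated) grid cell and keep the one that maximizes an entropy-production estimate supplied by the caller. The `entropy_production` callable and the toy trade-off in the example are placeholders, not the paper's model physics.

        import numpy as np

        def optimal_vegetated_fraction(entropy_production, n=101):
            """Brute-force search for the vegetated area fraction that maximizes
            entropy production in a two-box (bare soil / vegetated) representation.

            entropy_production : callable f(fraction) -> estimated entropy production
                                 of the two-box model for that vegetated fraction
                                 (placeholder for the actual model physics).
            """
            fractions = np.linspace(0.0, 1.0, n)
            ep = np.array([entropy_production(f) for f in fractions])
            return fractions[np.argmax(ep)]

        # Toy placeholder: a concave trade-off between water supply per unit vegetated
        # area and total transpiring area; it peaks at an intermediate fraction.
        toy_ep = lambda f: f * (1.0 - 0.7 * f)
        print(optimal_vegetated_fraction(toy_ep))   # -> ~0.71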

  8. The impact of global warming on river runoff

    NASA Technical Reports Server (NTRS)

    Miller, James R.; Russell, Gary L.

    1992-01-01

    A global atmospheric model is used to calculate the annual river runoff for 33 of the world's major rivers for the present climate and for a doubled CO2 climate. The model has a horizontal resolution of 4 x 5 deg, but the runoff from each model grid box is quartered and added to the appropriate river drainage basin on a 2 x 2.5 deg resolution. The computed runoff depends on the model's precipitation, evapotranspiration, and soil moisture storage. For the doubled CO2 climate, the runoff increased for 25 of the 33 rivers, and in most cases the increases coincide with increased rainfall within the drainage basins. There were runoff increases in all rivers in high northern latitudes, with a maximum increase of 47 percent. At low latitudes there were both increases and decreases ranging from a 96 percent increase to a 43 percent decrease. The effect of the simplified model assumptions of land-atmosphere interactions on the results is discussed.
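
    The regridding step described above (quartering each 4 x 5 deg runoff value onto a 2 x 2.5 deg grid and accumulating it by drainage basin) can be sketched as follows; the basin-index array and units are illustrative assumptions rather than the model's actual bookkeeping.

        import numpy as np

        def runoff_to_basins(runoff_coarse, basin_id_fine, n_basins):
            """Quarter each coarse (4 x 5 deg) grid-box runoff onto the 2 x 2.5 deg grid
            and accumulate it per drainage basin.

            runoff_coarse : (nlat, nlon) runoff per coarse grid box (e.g. km^3/yr)
            basin_id_fine : (2*nlat, 2*nlon) integer basin index for each fine cell
                            (-1 where the cell drains to no modeled river)
            """
            # Each coarse box contributes one quarter of its runoff to each of the
            # four fine cells it covers.
            quarters = np.repeat(np.repeat(runoff_coarse, 2, axis=0), 2, axis=1) / 4.0
            totals = np.zeros(n_basins)
            valid = basin_id_fine >= 0
            np.add.at(totals, basin_id_fine[valid], quarters[valid])
            return totals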

  9. ''A Parallel Adaptive Simulation Tool for Two Phase Steady State Reacting Flows in Industrial Boilers and Furnaces''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michael J. Bockelie

    2002-01-04

    This DOE SBIR Phase II final report summarizes research that has been performed to develop a parallel adaptive tool for modeling steady, two phase turbulent reacting flow. The target applications for the new tool are full scale, fossil-fuel fired boilers and furnaces such as those used in the electric utility industry, chemical process industry and mineral/metal process industry. The types of analyses to be performed on these systems are engineering calculations to evaluate the impact on overall furnace performance due to operational, process or equipment changes. To develop a Computational Fluid Dynamics (CFD) model of an industrial scale furnace requires a carefully designed grid that will capture all of the large and small scale features of the flowfield. Industrial systems are quite large, usually measured in tens of feet, but contain numerous burners, air injection ports, flames and localized behavior with dimensions that are measured in inches or fractions of inches. To create an accurate computational model of such systems requires capturing length scales within the flow field that span several orders of magnitude. In addition, to create an industrially useful model, the grid cannot contain too many grid points - the model must be able to execute on an inexpensive desktop PC in a matter of days. An adaptive mesh provides a convenient means to create a grid that can capture fine flow field detail within a very large domain with a "reasonable" number of grid points. However, the use of an adaptive mesh requires the development of a new flow solver. To create the new simulation tool, we have combined existing reacting CFD modeling software with new software based on emerging block structured Adaptive Mesh Refinement (AMR) technologies developed at Lawrence Berkeley National Laboratory (LBNL). Specifically, we combined: (1) physical models, modeling expertise, and software from existing combustion simulation codes used by Reaction Engineering International; (2) mesh adaption, data management, and parallelization software and technology being developed by users of the BoxLib library at LBNL; and (3) solution methods for problems formulated on block structured grids that were being developed in collaboration with technical staff members at the University of Utah Center for High Performance Computing (CHPC) and at LBNL. The combustion modeling software used by Reaction Engineering International represents an investment of over fifty man-years of development, conducted over a period of twenty years. Thus, it was impractical to achieve our objective by starting from scratch. The research program resulted in an adaptive grid, reacting CFD flow solver that can be used only on limited problems. In its current form the code is appropriate for use on academic problems with simplified geometries. The new solver is not sufficiently robust or sufficiently general to be used in a "production mode" for industrial applications. The principal difficulty lies with the multi-level solver technology. The use of multi-level solvers on adaptive grids with embedded boundaries is not yet a mature field and there are many issues that remain to be resolved. From the lessons learned in this SBIR program, we have started work on a new flow solver with an AMR capability. The new code is based on a conventional cell-by-cell mesh refinement strategy used in unstructured grid solvers that employ hexahedral cells.
The new solver employs several of the concepts and solution strategies developed within this research program. The formulation of the composite grid problem for the new solver has been designed to avoid the embedded boundary complications encountered in this SBIR project. This follow-on effort will result in a reacting flow CFD solver with localized mesh capability that can be used to perform engineering calculations on industrial problems in a production mode.

  10. Global hydrodynamic modelling of flood inundation in continental rivers: How can we achieve it?

    NASA Astrophysics Data System (ADS)

    Yamazaki, D.

    2016-12-01

    Global-scale modelling of river hydrodynamics is essential for understanding the global hydrological cycle and is also required in interdisciplinary research fields. Global river models have been developed continuously for more than two decades, but modelling river flow at a global scale is still a challenging topic because surface water movement in continental rivers is a multi-spatial-scale phenomenon. We have to consider the basin-wide water balance (>1000 km scale), while hydrodynamics in river channels and floodplains is regulated by much smaller-scale topography (<100 m scale). For example, heavy precipitation in upstream regions may later cause flooding in the farthest downstream reaches. In order to realistically simulate the timing and amplitude of flood-wave propagation over long distances, consideration of detailed local topography is unavoidable. I have developed the global hydrodynamic model CaMa-Flood to overcome this scale discrepancy of continental river flow. CaMa-Flood divides river basins into multiple "unit-catchments" and assumes the water level is uniform within each unit-catchment. One unit-catchment is assigned to each grid box defined at the typical spatial resolution of global climate models (10-100 km scale). Adopting a uniform water level over a >10 km river segment seems to be a strong assumption, but it is actually a good approximation for hydrodynamic modelling of continental rivers. The number of grid points required for global hydrodynamic simulations is greatly reduced by this "unit-catchment assumption". As an alternative to calculating two-dimensional floodplain flows as in regional flood models, CaMa-Flood treats floodplain inundation in a unit-catchment as sub-grid physics. The water level and inundated area in each unit-catchment are diagnosed from the stored water volume using topography parameters derived from high-resolution digital elevation models. Thus, CaMa-Flood is at least 1000 times more computationally efficient than regional flood inundation models while retaining realistic simulated flood dynamics. I will explain in detail how the CaMa-Flood model has been constructed from high-resolution topography datasets, and how the model can be used for various interdisciplinary applications.
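
    The sub-grid diagnosis described above (recovering water level and inundated area from stored water volume) can be sketched with a piecewise-linear floodplain profile; the profile arrays and the dense-table inversion below are illustrative assumptions, not the CaMa-Flood source code.

        import numpy as np

        def stage_from_storage(volume, elev_profile, area_profile, n_table=500):
            """Diagnose water level and inundated area in a unit-catchment from stored
            water volume, given a floodplain elevation profile from a high-resolution DEM.

            elev_profile : increasing floodplain elevations (m) at the profile points
            area_profile : cumulative inundated area (m^2) when water reaches each elevation
            volume       : stored water volume in the unit-catchment (m^3)
            """
            # Build a dense water-level -> stored-volume table by integrating area over depth.
            levels = np.linspace(elev_profile[0], elev_profile[-1], n_table)
            areas = np.interp(levels, elev_profile, area_profile)
            volumes = np.concatenate(([0.0],
                np.cumsum(0.5 * (areas[1:] + areas[:-1]) * np.diff(levels))))
            # Invert the monotonic relation (volumes beyond the table saturate at the top level).
            level = np.interp(volume, volumes, levels)
            area = np.interp(level, elev_profile, area_profile)
            return level, area

        # Example with a made-up profile: 10 million m^3 stored in the unit-catchment.
        elev = np.array([10.0, 11.0, 12.0, 14.0, 18.0])
        area = np.array([1e6, 3e6, 6e6, 8e6, 9e6])
        print(stage_from_storage(1.0e7, elev, area))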

  11. Analyses and forecasts with LAWS winds

    NASA Technical Reports Server (NTRS)

    Wang, Muyin; Paegle, Jan

    1994-01-01

    Horizontal fluxes of atmospheric water vapor are studied for summer months during 1989 and 1992 over North and South America based on analyses from European Center for Medium Range Weather Forecasts, US National Meteorological Center, and United Kingdom Meteorological Office. The calculations are performed over 20 deg by 20 deg box-shaped midlatitude domains located to the east of the Rocky Mountains in North America, and to the east of the Andes Mountains in South America. The fluxes are determined from operational center gridded analyses of wind and moisture. Differences in the monthly mean moisture flux divergence determined from these analyses are as large as 7 cm/month precipitable water equivalent over South America, and 3 cm/month over North America. Gridded analyses at higher spatial and temporal resolution exhibit better agreement in the moisture budget study. However, significant discrepancies of the moisture flux divergence computed from different gridded analyses still exist. The conclusion is more pessimistic than Rasmusson's estimate based on station data. Further analysis reveals that the most significant sources of error result from model surface elevation fields, gaps in the data archive, and uncertainties in the wind and specific humidity analyses. Uncertainties in the wind analyses are the most important problem. The low-level jets, in particular, are substantially different in the different data archives. Part of the reason for this may be due to the way the different analysis models parameterized physical processes affecting low-level jets. The results support the inference that the noise/signal ratio of the moisture budget may be improved more rapidly by providing better wind observations and analyses than by providing better moisture data.
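
    For reference, the budget quantity being compared across analyses is the vertically integrated moisture flux Q = (1/g) integral of q*V dp and its horizontal divergence. A minimal NumPy sketch on a regular latitude-longitude grid is given below; the array layout, pressure ordering (top to bottom), and finite-difference choices are assumptions for illustration, not the analysis systems' own schemes.

        import numpy as np

        G = 9.81            # gravitational acceleration, m s^-2
        A_EARTH = 6.371e6   # Earth radius, m

        def moisture_flux_divergence(q, u, v, p_levels, lat, lon):
            """Vertically integrated moisture flux divergence on a regular lat-lon grid.

            q, u, v  : (nlev, nlat, nlon) specific humidity (kg/kg) and winds (m/s)
            p_levels : (nlev,) pressure levels in Pa, ordered top to bottom (increasing)
            lat, lon : 1D coordinates in degrees
            Returns div(Q) in kg m^-2 s^-1 (positive values mean net moisture export).
            """
            dp = np.diff(p_levels)[:, None, None]
            # Q = (1/g) * integral of q*V dp (trapezoidal rule in the vertical).
            qu, qv = q * u, q * v
            Qu = np.sum(0.5 * (qu[1:] + qu[:-1]) * dp, axis=0) / G
            Qv = np.sum(0.5 * (qv[1:] + qv[:-1]) * dp, axis=0) / G
            coslat = np.cos(np.deg2rad(lat))[:, None]
            dlon = np.deg2rad(np.gradient(lon))[None, :]
            dlat = np.deg2rad(np.gradient(lat))[:, None]
            # Spherical divergence: (1/(a cos(phi))) * [dQu/dlambda + d(Qv cos(phi))/dphi]
            dQu_dlon = np.gradient(Qu, axis=1) / dlon
            dQvcos_dlat = np.gradient(Qv * coslat, axis=0) / dlat
            return (dQu_dlon + dQvcos_dlat) / (A_EARTH * coslat)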

  12. PRISM 8 degrees X 10 degrees North Hemisphere paleoclimate reconstruction; digital data

    USGS Publications Warehouse

    Barron, John A.; Cronin, Thomas M.; Dowsett, Harry J.; Fleming, Farley R.; Holtz, Thomas R.; Ishman, Scott E.; Poore, Richard Z.; Thompson, Robert S.; Willard, Debra A.

    1994-01-01

    The PRISM 8°x10° data set represents several years of investigation by PRISM (Pliocene Research, Interpretation, and Synoptic Mapping) Project members. One of the goals of PRISM is to produce time-slice reconstructions of intervals of warmer than modern climate within the Pliocene Epoch. The first of these was chosen to be at 3.0 Ma (time scale of Berggren et al., 1985) and is published in Global and Planetary Change (Dowsett et al., 1994). This document contains the actual data sets and a brief explanation of how they were constructed. For paleoenvironmental interpretations and discussion of each data set, see Dowsett et al., in press. The data sets include sea level, land ice distribution, vegetation or land cover, sea surface temperature, and sea-ice cover matrices. This reconstruction of Middle Pliocene climate is organized as a series of datasets representing different environmental attributes. The data sets are designed for use with the GISS Model II atmospheric general circulation model (GCM) at an 8°x10° resolution (Hansen et al., 1983). The first step in documenting the Pliocene climate involves assigning an appropriate fraction of land versus ocean to each grid box. Following the grid-cell-by-grid-cell land-versus-ocean allocations, winter and summer sea-ice coverage of ocean areas is assigned, and then winter and summer sea surface temperatures are assigned to open ocean areas. Average land ice cover is recorded for land areas, and land areas not covered by ice are then assigned proportions of six vegetation or land cover categories modified from Hansen et al. (1983).

  13. Direct comparisons of ice cloud macro- and microphysical properties simulated by the Community Atmosphere Model version 5 with HIPPO aircraft observations

    NASA Astrophysics Data System (ADS)

    Wu, Chenglai; Liu, Xiaohong; Diao, Minghui; Zhang, Kai; Gettelman, Andrew; Lu, Zheng; Penner, Joyce E.; Lin, Zhaohui

    2017-04-01

    In this study we evaluate cloud properties simulated by the Community Atmosphere Model version 5 (CAM5) using in situ measurements from the HIAPER Pole-to-Pole Observations (HIPPO) campaign for the period of 2009 to 2011. The modeled wind and temperature are nudged towards reanalysis. Model results collocated with HIPPO flight tracks are directly compared with the observations, and model sensitivities to the representations of ice nucleation and growth are also examined. Generally, CAM5 is able to capture specific cloud systems in terms of vertical configuration and horizontal extension. In total, the model reproduces 79.8 % of observed cloud occurrences inside model grid boxes and even higher (94.3 %) for ice clouds (T ≤ -40 °C). The missing cloud occurrences in the model are primarily ascribed to the fact that the model cannot account for the high spatial variability of observed relative humidity (RH). Furthermore, model RH biases are mostly attributed to the discrepancies in water vapor, rather than temperature. At the micro-scale of ice clouds, the model captures the observed increase of ice crystal mean sizes with temperature, albeit with smaller sizes than the observations. The model underestimates the observed ice number concentration (Ni) and ice water content (IWC) for ice crystals larger than 75 µm in diameter. Modeled IWC and Ni are more sensitive to the threshold diameter for autoconversion of cloud ice to snow (Dcs), while simulated ice crystal mean size is more sensitive to ice nucleation parameterizations than to Dcs. Our results highlight the need for further improvements to the sub-grid RH variability and ice nucleation and growth in the model.

  14. Operational 0-3 h probabilistic quantitative precipitation forecasts: Recent performance and potential enhancements

    NASA Astrophysics Data System (ADS)

    Sokol, Z.; Kitzmiller, D.; Pešice, P.; Guan, S.

    2009-05-01

    The NOAA National Weather Service has maintained an automated, centralized 0-3 h prediction system for probabilistic quantitative precipitation forecasts since 2001. This advective-statistical system (ADSTAT) produces probabilities that rainfall will exceed multiple threshold values up to 50 mm at some location within a 40-km grid box. Operational characteristics and development methods for the system are described. Although development data were stratified by season and time of day, ADSTAT utilizes only a single set of nation-wide equations that relate predictor variables derived from radar reflectivity, lightning, satellite infrared temperatures, and numerical prediction model output to rainfall occurrence. A verification study documented herein showed that the operational ADSTAT reliably models regional variations in the relative frequency of heavy rain events. This was true even in the western United States, where no regional-scale, gridded hourly precipitation data were available during the development period in the 1990s. An effort was recently launched to improve the quality of ADSTAT forecasts by regionalizing the prediction equations and to adapt the model for application in the Czech Republic. We have experimented with incorporating various levels of regional specificity in the probability equations. The geographic localization study showed that in the warm season, regional climate differences and variations in the diurnal temperature cycle have a marked effect on the predictor-predictand relationships, and thus regionalization would lead to better statistical reliability in the forecasts.
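
    ADSTAT's probability equations relate radar, lightning, satellite infrared, and NWP predictors to the chance of exceeding each rainfall threshold in a 40-km grid box. The sketch below uses a logistic regression on synthetic stand-in predictors purely to illustrate the form of such a threshold-exceedance model; it is not the operational equation set, and the predictor names are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Illustrative predictor matrix for 40-km grid boxes; assumed columns:
        # max radar reflectivity (dBZ), lightning flash count,
        # min IR brightness temperature (K), NWP 3-h QPF (mm) -- all standardized here.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))                      # stand-in training predictors
        # Stand-in predictand: did 3-h rainfall exceed the threshold in the box?
        y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 1.0).astype(int)

        model = LogisticRegression().fit(X, y)             # one equation per threshold
        p_exceed = model.predict_proba(X[:5])[:, 1]        # 0-3 h exceedance probabilities
        print(np.round(p_exceed, 2))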

  15. Standardized UXO Technology Demonstration Site Blind Grid Scoring Record No. 806 (U.S. Geological Survey, TMGS Magnetometer/Towed Array)

    DTIC Science & Technology

    2007-05-01

    [Record excerpt: report documentation page fragments listing the demonstrator (U.S. Geological Survey, Box 25046, Federal Center, M.S. 964, Denver, CO 80225-0046), the technology type/platform (TMGS Magnetometer/Towed Array), the scoring record number (8-CO-160-UXO-021), and the author (Karwatka, Michael).]

  16. 75 FR 11533 - Public Utility District No. 1 of Snohomish County, WA; Notice of Technical Meeting To Discuss...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-11

    ... supplied by OpenHydro Group Ltd., mounted on completely submerged gravity foundations; (2) two 250-meter service cables connected at a subsea junction box or spliced to a 0.5-kilometer subsea transmission cable... building; (4) a 140-meter long buried cable from the control building to the grid; and (5) appurtenant...

  17. High Power Hydrogen Injector with Beam Focusing for Plasma Heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deichuli, P.P.; Ivanov, A.A.; Korepanov, S.A.

    2005-01-15

    A high power neutral beam injector has been developed with an atom energy of 25 keV, a current of 60 A, and a pulse duration of several milliseconds. Six of these injectors will be used to upgrade the atomic injection system at the central cell of a Gas Dynamic Trap (GDT) device, and 2 injectors are planned for the SHIP experiment. The injector ion source is based on an arc-discharge plasma box. The plasma emitter is produced by a 1 kA arc discharge in hydrogen. A multipole magnetic field produced with permanent magnets at the periphery of the plasma box is used to increase its efficiency and improve the homogeneity of the plasma emitter. The ion beam is extracted by a 4-electrode ion optical system (IOS). The initial beam diameter is 200 mm. The grids of the IOS have a spherical curvature for geometrical focusing of the beam. The optimal IOS geometry and grid potentials were found with numerical simulation to provide precise beam formation. The measured angular divergence of the beam is 0.02 rad, which corresponds to the 2.5 cm Gaussian radius of the beam profile measured at the focal point.

  18. 3D RISM theory with fast reciprocal-space electrostatics.

    PubMed

    Heil, Jochen; Kast, Stefan M

    2015-03-21

    The calculation of electrostatic solute-solvent interactions in 3D RISM ("three-dimensional reference interaction site model") integral equation theory is recast in a form that allows for a computational treatment analogous to the "particle-mesh Ewald" formalism as used for molecular simulations. In addition, relations that connect 3D RISM correlation functions and interaction potentials with thermodynamic quantities such as the chemical potential and average solute-solvent interaction energy are reformulated in a way that calculations of expensive real-space electrostatic terms on the 3D grid are completely avoided. These methodical enhancements allow for both a significant speedup, particularly for large solute systems, and a smoother convergence of predicted thermodynamic quantities with respect to box size, as illustrated for several benchmark systems.
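
    The reciprocal-space treatment rests on solving the periodic Poisson problem for the smooth part of the charge density by FFT, the same building block used in particle-mesh Ewald. A minimal sketch of that generic step is shown below (Gaussian units, cubic box); it is not the 3D RISM implementation of the paper, and the grid and density are assumed inputs.

        import numpy as np

        def reciprocal_space_potential(rho, box_length):
            """Periodic electrostatic potential (Gaussian units, 4*pi*eps0 = 1) of a
            charge density sampled on a cubic grid, via FFT solution of the Poisson
            equation: phi_k = 4*pi*rho_k / k^2, with the k = 0 mode set to zero."""
            n = rho.shape[0]
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_length / n)
            kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
            k2 = kx**2 + ky**2 + kz**2
            rho_k = np.fft.fftn(rho)
            phi_k = np.zeros_like(rho_k)
            nonzero = k2 > 0
            phi_k[nonzero] = 4.0 * np.pi * rho_k[nonzero] / k2[nonzero]
            return np.real(np.fft.ifftn(phi_k))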

  19. Three-dimensional radiochromic film dosimetry for volumetric modulated arc therapy using a spiral water phantom.

    PubMed

    Tanooka, Masao; Doi, Hiroshi; Miura, Hideharu; Inoue, Hiroyuki; Niwa, Yasue; Takada, Yasuhiro; Fujiwara, Masayuki; Sakai, Toshiyuki; Sakamoto, Kiyoshi; Kamikonya, Norihiko; Hirota, Shozo

    2013-11-01

    We validated 3D radiochromic film dosimetry for volumetric modulated arc therapy (VMAT) using a newly developed spiral water phantom. The phantom consists of a main body and an insert box, each of which has an acrylic wall thickness of 3 mm and is filled with water. The insert box includes a spiral film box used for dose-distribution measurement, and a film holder for positioning a radiochromic film. The film holder has two parallel walls whose facing inner surfaces are equipped with spiral grooves in a mirrored configuration. The film is inserted into the spiral grooves by its side edges and runs along them to be positioned on a spiral plane. Dose calculation was performed by applying clinical VMAT plans to the spiral water phantom using a commercial Monte Carlo-based treatment-planning system, Monaco, whereas dose was measured by delivering the VMAT beams to the phantom. The calculated dose distributions were resampled on the spiral plane, and the dose distributions recorded on the film were scanned. Comparisons between the calculated and measured dose distributions yielded an average gamma-index pass rate of 87.0% (range, 84.6-91.2%) in nine prostate VMAT plans under 3 mm/3% criteria with a dose-calculation grid size of 2 mm. The pass rates were increased beyond 90% (average, 91.1%; range, 90.1-92.0%) when the dose-calculation grid size was decreased to 1 mm. We have confirmed that 3D radiochromic film dosimetry using the spiral water phantom is a simple and cost-effective approach to VMAT dose verification.
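
    For readers unfamiliar with the 3 mm/3% criterion, the sketch below shows a brute-force global 2D gamma-index calculation on uniformly spaced dose planes; the search radius, low-dose cutoff, and grid assumptions are illustrative, and this is not the film-analysis workflow used in the study.

        import numpy as np

        def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0,
                            search_mm=9.0, low_dose_cut=0.1):
            """Brute-force global 2D gamma analysis (dd = dose difference as a fraction
            of the reference maximum, dta = distance-to-agreement). Returns the pass
            rate over reference points above the low-dose cutoff."""
            dmax = dose_ref.max()
            r = int(round(search_mm / spacing_mm))         # half-width of the search window
            ny, nx = dose_ref.shape
            # Precompute squared distances of all offsets in the search window.
            oy, ox = np.mgrid[-r:r + 1, -r:r + 1]
            dist2 = (oy**2 + ox**2) * spacing_mm**2
            passed, total = 0, 0
            for i in range(ny):
                for j in range(nx):
                    if dose_ref[i, j] < low_dose_cut * dmax:
                        continue
                    total += 1
                    y0, y1 = max(i - r, 0), min(i + r + 1, ny)
                    x0, x1 = max(j - r, 0), min(j + r + 1, nx)
                    diff = dose_eval[y0:y1, x0:x1] - dose_ref[i, j]
                    d2 = dist2[(y0 - i + r):(y1 - i + r), (x0 - j + r):(x1 - j + r)]
                    gamma2 = diff**2 / (dd * dmax)**2 + d2 / dta_mm**2
                    passed += gamma2.min() <= 1.0
            return passed / max(total, 1)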

  20. Design of power cable grounding wire anti-theft monitoring system

    NASA Astrophysics Data System (ADS)

    An, Xisheng; Lu, Peng; Wei, Niansheng; Hong, Gang

    2018-01-01

    To prevent the serious consequences of power grid failures caused by theft of power cable grounding wires, this paper presents a GPRS-based anti-theft monitoring system for power cable grounding wires, which includes a camera module, a sensor module, a microprocessor module, a data monitoring center module, and a mobile terminal module. The design combines image capture and sensor-based detection for detecting and reporting theft. It can effectively address theft of grounding wires and grounding wire boxes, enable timely follow-up of cable theft events, help prevent faults on high-voltage transmission lines, and improve the reliability and safe operation of the power grid.

  1. A new 3D multi-fluid dust model: a study of the effects of activity and nucleus rotation on the dust grains' behavior in the cometary environment

    NASA Astrophysics Data System (ADS)

    Shou, Y.; Combi, M. R.; Toth, G.; Fougere, N.; Tenishev, V.; Huang, Z.; Jia, X.; Hansen, K. C.; Gombosi, T. I.; Bieler, A. M.; Rubin, M.

    2016-12-01

    Cometary dust observations may deepen our understanding of the role of dust in the formation of comets and in altering the cometary environment. Models including dust grains are in demand to interpret observations and test hypotheses. Several existing models have taken into account the gas-dust interaction, varying sizes of dust grains and the cometary gravitational force. In this work, we develop a multi-fluid dust model based on BATS-R-US in the University of Michigan's Space Weather Modeling Framework (SWMF). This model not only incorporates key features of previous dust models, but also has the capability of simulating time-dependent phenomena. Since the model is running in the rotating comet reference frame with a real shaped nucleus in the computational domain, the fictitious centrifugal and Coriolis forces are included. The boundary condition on the nucleus surface can be set according to the distribution of activity and the solar illumination. The Sun, which drives sublimation and the radiation pressure force, revolves around the comet in this frame. A newly developed numerical mesh is also used to resolve the real shaped nucleus in the center and to facilitate prescription of the outer boundary conditions that accommodate the rotating frame. The inner part of the grid is a box composed of Cartesian cells and the outer surface is a smooth sphere, with stretched cells filled in between the box and the sphere. The effects of the rotating nucleus and the activity region on the surface are discussed and preliminary results are presented. This work has been partially supported by grant NNX14AG84G from the NASA Planetary Atmospheres Program, and US Rosetta contracts JPL #1266313, JPL #1266314 and JPL #1286489.
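
    In a frame corotating with the nucleus, the extra accelerations on a dust grain are the centrifugal and Coriolis terms, a = -Ω x (Ω x r) - 2 Ω x v. A minimal sketch is given below; the rotation vector and the example state are illustrative values, not model inputs from the paper.

        import numpy as np

        def fictitious_acceleration(omega, r, v):
            """Centrifugal plus Coriolis acceleration felt by a dust grain in a frame
            rotating with angular velocity vector `omega` (rad/s):
                a = -omega x (omega x r) - 2 omega x v
            r, v : position (m) and velocity (m/s) of the grain in the rotating frame.
            """
            centrifugal = -np.cross(omega, np.cross(omega, r))
            coriolis = -2.0 * np.cross(omega, v)
            return centrifugal + coriolis

        # Example: a 12.4-hour rotation period about the z-axis (order of 67P's period).
        omega = np.array([0.0, 0.0, 2.0 * np.pi / (12.4 * 3600.0)])
        print(fictitious_acceleration(omega, r=np.array([2000.0, 0.0, 0.0]),
                                      v=np.array([0.0, 1.0, 0.0])))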

  2. Tracking a Superstorm

    NASA Image and Video Library

    2017-12-08

    Oct. 29, 2012 – A day before landfall, Sandy intensified into a Category 2 superstorm nearly 1,000 miles wide. Credit: NASA's Goddard Space Flight Center and NASA Center for Climate Simulation Video and images courtesy of NASA/GSFC/William Putman -- A NASA computer model simulates the astonishing track and forceful winds of Hurricane Sandy. Hurricane Sandy pummeled the East Coast late in 2012's Atlantic hurricane season, causing 159 deaths and $70 billion in damages. Days before landfall, forecasts of its trajectory were still being made. Some computer models showed that a trough in the jet stream would kick the monster storm away from land and out to sea. Among the earliest to predict its true course was NASA's GEOS-5 global atmosphere model. The model works by dividing Earth's atmosphere into a virtual grid of stacked boxes. A supercomputer then solves mathematical equations inside each box to create a weather forecast predicting Sandy's structure, path and other traits. The NASA model not only produced an accurate track of Sandy, but also captured fine-scale details of the storm's changing intensity and winds. For more information, please visit: gmao.gsfc.nasa.gov/research/atmosphericassim/tracking_hur...

  3. Balancing Conflicting Requirements for Grid and Particle Decomposition in Continuum-Lagrangian Solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sitaraman, Hariswaran; Grout, Ray

    2015-10-30

    The load balancing strategies for hybrid solvers that couple a grid-based partial differential equation solution with particle tracking are presented in this paper. A typical Message Passing Interface (MPI) based parallelization of the grid-based solves is done using a spatial domain decomposition, while particle tracking is primarily done using one of two techniques. The first is to distribute the particles to the MPI ranks whose grid subdomain contains them, while the other is to share the particles equally among all ranks, irrespective of their spatial location. The former technique provides spatial locality for field interpolation but cannot assure load balance in terms of the number of particles, which is achieved by the latter. The two techniques are compared for a case of particle tracking in a homogeneous isotropic turbulence box as well as a turbulent jet case. We performed a strong scaling study on more than 32,000 cores, which results in particle densities representative of anticipated exascale machines. The use of alternative implementations of MPI collectives and efficient load equalization strategies is studied to reduce data communication overheads.
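
    The two particle-distribution strategies can be contrasted with a small serial sketch (no MPI calls): assign particles either to the rank owning the containing subdomain, or round-robin to equalize counts. The 1D slab decomposition, rank count, and clustered particle cloud below are illustrative assumptions, not the solver's decomposition.

        import numpy as np

        def assign_by_owning_cell(particle_x, domain_bounds, n_ranks):
            """Strategy 1: give each particle to the rank whose spatial subdomain
            (a 1D slab here, for brevity) contains it. Preserves locality for field
            interpolation but can badly unbalance particle counts."""
            lo, hi = domain_bounds
            slab = (hi - lo) / n_ranks
            return np.minimum(((particle_x - lo) // slab).astype(int), n_ranks - 1)

        def assign_round_robin(n_particles, n_ranks):
            """Strategy 2: share particles equally among ranks regardless of position.
            Balances counts, but field data must be communicated for interpolation."""
            return np.arange(n_particles) % n_ranks

        # Clustered particles (e.g. concentrated in a jet core) expose the trade-off:
        x = np.random.default_rng(1).normal(loc=0.2, scale=0.05, size=10_000)
        owners = assign_by_owning_cell(np.clip(x, 0.0, 1.0), (0.0, 1.0), 8)
        print(np.bincount(owners, minlength=8))               # highly uneven counts
        print(np.bincount(assign_round_robin(10_000, 8)))     # 1250 particles per rank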

  4. Combined analysis of field and model data: A case study of the phosphate dynamics in the German Bight in summer 1994

    NASA Astrophysics Data System (ADS)

    Pohlmann, Th.; Raabe, Th.; Doerffer, R.; Beddig, S.; Brockmann, U.; Dick, S.; Engel, M.; Hesse, K.-J.; König, P.; Mayer, B.; Moll, A.; Murphy, D.; Puls, W.; Rick, H.-J.; Schmidt-Nia, R.; Schönfeld, W.; Sündermann, J.

    1999-09-01

    The intention of this paper is to analyse a specific phenomenon observed during the KUSTOS campaigns in order to demonstrate the general capability of the KUSTOS and TRANSWATT approach, i.e. the combination of field and modelling activities in an interdisciplinary framework. The selected phenomenon is the increase in phosphate concentrations off the peninsula of Eiderstedt on the North Frisian coast sampled during four subsequent station grids of the KUSTOS summer campaign in 1994. First of all, a characterisation of the observed summer situation is given. The phosphate increase is described in detail in relation to the dynamics of other nutrients. In a second step, a first-order estimate of the dispersion of phosphate is discussed. The estimate is based on the box model approach and will focus on the effects of the river Elbe and Wadden Sea inputs on phosphate dynamics. Thirdly, a fully three-dimensional model system is presented, which was implemented in order to analyse the phosphate development. The model system is discussed briefly, with emphasis on phosphorus-related processes. The reliability of one of the model components, i.e. the hydrodynamical model, is demonstrated by means of a comparison of model results with observed current data. Thereafter, results of the German Bight seston model are employed to interpret the observed phosphate increase. From this combined analysis, it was possible to conclude that the phosphate increase during the first three surveys was due to internal transformation processes within the phosphorus cycle. On the other hand, the higher phosphate concentrations measured in the last station grid survey were caused by a horizontal transport of phosphate being remobilised in the Wadden Sea.

  5. Advanced analysis of forest fire clustering

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Pereira, Mario; Golay, Jean

    2017-04-01

    Analysis of point pattern clustering is an important topic in spatial statistics and for many applications: biodiversity, epidemiology, natural hazards, geomarketing, etc. There are several fundamental approaches used to quantify spatial data clustering using topological, statistical and fractal measures. In the present research, the recently introduced multi-point Morisita index (mMI) is applied to study the spatial clustering of forest fires in Portugal. The data set consists of more than 30000 fire events covering the time period from 1975 to 2013. The distribution of forest fires is very complex and highly variable in space. mMI is a multi-point extension of the classical two-point Morisita index. In essence, mMI is estimated by covering the region under study by a grid and by computing how many times more likely it is that m points selected at random will be from the same grid cell than it would be in the case of a complete random Poisson process. By changing the number of grid cells (size of the grid cells), mMI characterizes the scaling properties of spatial clustering. From mMI, the data intrinsic dimension (fractal dimension) of the point distribution can be estimated as well. In this study, the mMI of forest fires is compared with the mMI of random patterns (RPs) generated within the validity domain defined as the forest area of Portugal. It turns out that the forest fires are highly clustered inside the validity domain in comparison with the RPs. Moreover, they demonstrate different scaling properties at different spatial scales. The results obtained from the mMI analysis are also compared with those of fractal measures of clustering - box counting and sand box counting approaches. REFERENCES Golay J., Kanevski M., Vega Orozco C., Leuenberger M., 2014: The multipoint Morisita index for the analysis of spatial patterns. Physica A, 406, 191-202. Golay J., Kanevski M. 2015: A new estimator of intrinsic dimension based on the multipoint Morisita index. Pattern Recognition, 48, 4070-4081.
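
    A compact implementation of the quadrat-count estimate is sketched below for points in the unit square, following our reading of the m-point generalization of the classical Morisita index in Golay et al. (2014); the grid sizes and test patterns are illustrative, not the forest-fire data.

        import numpy as np

        def multipoint_morisita(points, n_cells_per_side, m=2):
            """Multi-point Morisita index for 2D points in the unit square, on a grid of
            n_cells_per_side x n_cells_per_side quadrats (m = 2 recovers the classical
            Morisita index)."""
            q = n_cells_per_side ** 2
            ix = np.minimum((points[:, 0] * n_cells_per_side).astype(int), n_cells_per_side - 1)
            iy = np.minimum((points[:, 1] * n_cells_per_side).astype(int), n_cells_per_side - 1)
            counts = np.bincount(ix * n_cells_per_side + iy, minlength=q).astype(float)
            n = counts.sum()
            num = np.sum(np.prod([counts - j for j in range(m)], axis=0))
            den = np.prod([n - j for j in range(m)])
            return q ** (m - 1) * num / den

        # Compare a clustered pattern with a uniform (Poisson-like) pattern.
        rng = np.random.default_rng(0)
        clustered = rng.normal(0.5, 0.05, size=(2000, 2)) % 1.0
        uniform = rng.random((2000, 2))
        print(multipoint_morisita(clustered, 10, m=3))   # >> 1: strong clustering
        print(multipoint_morisita(uniform, 10, m=3))     # ~ 1: close to random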

  6. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne E.

    2013-01-01

    We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.

  7. Ignorance is a bliss: Mathematical structure of many-box models

    NASA Astrophysics Data System (ADS)

    Tylec, Tomasz I.; Kuś, Marek

    2018-03-01

    We show that the propositional system of a many-box model is always a set-representable effect algebra. In the particular cases of 2-box and 1-box models, it is an orthomodular poset and an orthomodular lattice, respectively. We discuss the relation of the obtained results to the so-called Local Orthogonality principle. We argue that non-classical properties of box models are the result of a dual enrichment of the set of states caused by the impoverishment of the set of propositions. On the other hand, quantum mechanical models always have more propositions as well as more states than the classical ones. Consequently, we show that box models cannot be considered as generalizations of quantum mechanical models, and that seeking additional principles that could allow us to "recover quantum correlations" in box models is, at least from the fundamental point of view, pointless.

  8. Analysis of UK and European NOx and VOC emission scenarios in the Defra model intercomparison exercise

    NASA Astrophysics Data System (ADS)

    Derwent, Richard; Beevers, Sean; Chemel, Charles; Cooke, Sally; Francis, Xavier; Fraser, Andrea; Heal, Mathew R.; Kitwiroon, Nutthida; Lingard, Justin; Redington, Alison; Sokhi, Ranjeet; Vieno, Massimo

    2014-09-01

    Simple emission scenarios have been implemented in eight United Kingdom air quality models with the aim of assessing how these models compared when addressing whether photochemical ozone formation in southern England was NOx- or VOC-sensitive and whether ozone precursor sources in the UK or in the Rest of Europe (RoE) were the most important during July 2006. The suite of models included three Eulerian-grid models (three implementations of one of these models), a Lagrangian atmospheric dispersion model and two moving box air parcel models. The assignments as to NOx- or VOC-sensitive and to UK- versus RoE-dominant, turned out to be highly variable and often contradictory between the individual models. However, when the assignments were filtered by model performance on each day, many of the contradictions could be eliminated. Nevertheless, no one model was found to be the 'best' model on all days, indicating that no single air quality model could currently be relied upon to inform policymakers robustly in terms of NOx- versus VOC-sensitivity and UK- versus RoE-dominance on each day. It is important to maintain a diversity in model approaches.

  9. Three-dimensional radiochromic film dosimetry for volumetric modulated arc therapy using a spiral water phantom

    PubMed Central

    Tanooka, Masao; Doi, Hiroshi; Miura, Hideharu; Inoue, Hiroyuki; Niwa, Yasue; Takada, Yasuhiro; Fujiwara, Masayuki; Sakai, Toshiyuki; Sakamoto, Kiyoshi; Kamikonya, Norihiko; Hirota, Shozo

    2013-01-01

    We validated 3D radiochromic film dosimetry for volumetric modulated arc therapy (VMAT) using a newly developed spiral water phantom. The phantom consists of a main body and an insert box, each of which has an acrylic wall thickness of 3 mm and is filled with water. The insert box includes a spiral film box used for dose-distribution measurement, and a film holder for positioning a radiochromic film. The film holder has two parallel walls whose facing inner surfaces are equipped with spiral grooves in a mirrored configuration. The film is inserted into the spiral grooves by its side edges and runs along them to be positioned on a spiral plane. Dose calculation was performed by applying clinical VMAT plans to the spiral water phantom using a commercial Monte Carlo-based treatment-planning system, Monaco, whereas dose was measured by delivering the VMAT beams to the phantom. The calculated dose distributions were resampled on the spiral plane, and the dose distributions recorded on the film were scanned. Comparisons between the calculated and measured dose distributions yielded an average gamma-index pass rate of 87.0% (range, 84.6–91.2%) in nine prostate VMAT plans under 3 mm/3% criteria with a dose-calculation grid size of 2 mm. The pass rates were increased beyond 90% (average, 91.1%; range, 90.1–92.0%) when the dose-calculation grid size was decreased to 1 mm. We have confirmed that 3D radiochromic film dosimetry using the spiral water phantom is a simple and cost-effective approach to VMAT dose verification. PMID:23685667

  10. I/O Parallelization for the Goddard Earth Observing System Data Assimilation System (GEOS DAS)

    NASA Technical Reports Server (NTRS)

    Lucchesi, Rob; Sawyer, W.; Takacs, L. L.; Lyster, P.; Zero, J.

    1998-01-01

    The National Aeronautics and Space Administration (NASA) Data Assimilation Office (DAO) at the Goddard Space Flight Center (GSFC) has developed the GEOS DAS, a data assimilation system that provides production support for NASA missions and will support NASA's Earth Observing System (EOS) in the coming years. The GEOS DAS will be used to provide background fields of meteorological quantities to EOS satellite instrument teams for use in their data algorithms as well as providing assimilated data sets for climate studies on decadal time scales. The DAO has been involved in prototyping parallel implementations of the GEOS DAS for a number of years and is now embarking on an effort to convert the production version from shared-memory parallelism to distributed-memory parallelism using the portable Message-Passing Interface (MPI). The GEOS DAS consists of two main components, an atmospheric General Circulation Model (GCM) and a Physical-space Statistical Analysis System (PSAS). The GCM operates on data that are stored on a regular grid while PSAS works with observational data that are scattered irregularly throughout the atmosphere. As a result, the two components have different data decompositions. The GCM is decomposed horizontally as a checkerboard with all vertical levels of each box existing on the same processing element (PE). The dynamical core of the GCM can also operate on a rotated grid, which requires communication-intensive grid transformations during GCM integration. PSAS groups observations on PEs in a more irregular and dynamic fashion.
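
    The checkerboard decomposition described above (each lat-lon column of grid boxes, with all its vertical levels, owned by one processing element) can be sketched as a simple index mapping; the grid dimensions and PE layout below are illustrative, not the GEOS DAS configuration.

        import numpy as np

        def checkerboard_owner(nlat, nlon, pe_rows, pe_cols):
            """Map every (lat, lon) column of grid boxes to a processing element (PE).
            All vertical levels of a column stay on the same PE, as in a checkerboard
            horizontal decomposition."""
            lat_block = -(-nlat // pe_rows)          # ceiling division
            lon_block = -(-nlon // pe_cols)
            lat_idx, lon_idx = np.meshgrid(np.arange(nlat), np.arange(nlon), indexing="ij")
            return (lat_idx // lat_block) * pe_cols + (lon_idx // lon_block)

        # e.g. a 46 x 72 (4 x 5 deg) grid distributed over a 4 x 6 layout of 24 PEs.
        owners = checkerboard_owner(nlat=46, nlon=72, pe_rows=4, pe_cols=6)
        print(np.bincount(owners.ravel()))           # grid columns owned by each PE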

  11. Using the CIFIST grid of CO5BOLD 3D model atmospheres to study the effects of stellar granulation on photometric colours. I. Grids of 3D corrections in the UBVRI, 2MASS, HIPPARCOS, Gaia, and SDSS systems

    NASA Astrophysics Data System (ADS)

    Bonifacio, P.; Caffau, E.; Ludwig, H.-G.; Steffen, M.; Castelli, F.; Gallagher, A. J.; Kučinskas, A.; Prakapavičius, D.; Cayrel, R.; Freytag, B.; Plez, B.; Homeier, D.

    2018-03-01

    Context. The atmospheres of cool stars are temporally and spatially inhomogeneous due to the effects of convection. The influence of this inhomogeneity, referred to as granulation, on colours has never been investigated over a large range of effective temperatures and gravities. Aim. We aim to study, in a quantitative way, the impact of granulation on colours. Methods: We use the CIFIST (Cosmological Impact of the FIrst Stars) grid of CO5BOLD (COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions, L = 2, 3) hydrodynamical models to compute emerging fluxes. These in turn are used to compute theoretical colours in the UBV RI, 2MASS, HIPPARCOS, Gaia and SDSS systems. Every CO5BOLD model has a corresponding one dimensional (1D) plane-parallel LHD (Lagrangian HydroDynamics) model computed for the same atmospheric parameters, which we used to define a "3D correction" that can be applied to colours computed from fluxes computed from any 1D model atmosphere code. As an example, we illustrate these corrections applied to colours computed from ATLAS models. Results: The 3D corrections on colours are generally small, of the order of a few hundredths of a magnitude, yet they are far from negligible. We find that ignoring granulation effects can lead to underestimation of Teff by up to 200 K and overestimation of gravity by up to 0.5 dex, when using colours as diagnostics. We have identified a major shortcoming in how scattering is treated in the current version of the CIFIST grid, which could lead to offsets of the order 0.01 mag, especially for colours involving blue and UV bands. We have investigated the Gaia and HIPPARCOS photometric systems and found that the (G - Hp), (BP - RP) diagram is immune to the effects of granulation. In addition, we point to the potential of the RVS photometry as a metallicity diagnostic. Conclusions: Our investigation shows that the effects of granulation should not be neglected if one wants to use colours as diagnostics of the stellar parameters of F, G, K stars. A limitation is that scattering is treated as true absorption in our current computations, thus our 3D corrections are likely an upper limit to the true effect. We are already computing the next generation of the CIFIST grid, using an approximate treatment of scattering. The appendix tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A68

  12. Laparoscopic surgical box model training for surgical trainees with no prior laparoscopic experience.

    PubMed

    Nagendran, Myura; Toon, Clare D; Davidson, Brian R; Gurusamy, Kurinchi Selvan

    2014-01-17

    Surgical training has traditionally been one of apprenticeship, where the surgical trainee learns to perform surgery under the supervision of a trained surgeon. This is time consuming, costly, and of variable effectiveness. Training using a box model physical simulator - either a video box or a mirrored box - is an option to supplement standard training. However, the impact of this modality on trainees with no prior laparoscopic experience is unknown. To compare the benefits and harms of box model training versus no training, another box model, animal model, or cadaveric model training for surgical trainees with no prior laparoscopic experience. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index Expanded to May 2013. We included all randomised clinical trials comparing box model trainers versus no training in surgical trainees with no prior laparoscopic experience. We also included trials comparing different methods of box model training. Two authors independently identified trials and collected data. We analysed the data with both the fixed-effect and the random-effects models using Review Manager for analysis. For each outcome, we calculated the standardised mean difference (SMD) with 95% confidence intervals (CI) based on intention-to-treat analysis whenever possible. Twenty-five trials contributed data to the quantitative synthesis in this review. All but one trial were at high risk of bias. Overall, 16 trials (464 participants) provided data for meta-analysis of box training (248 participants) versus no supplementary training (216 participants). All the 16 trials in this comparison used video trainers. Overall, 14 trials (382 participants) provided data for quantitative comparison of different methods of box training. There were no trials comparing box model training versus animal model or cadaveric model training. Box model training versus no training: The meta-analysis showed that the time taken for task completion was significantly shorter in the box trainer group than the control group (8 trials; 249 participants; SMD -0.48 seconds; 95% CI -0.74 to -0.22). Compared with the control group, the box trainer group also had lower error score (3 trials; 69 participants; SMD -0.69; 95% CI -1.21 to -0.17), better accuracy score (3 trials; 73 participants; SMD 0.67; 95% CI 0.18 to 1.17), and better composite performance scores (SMD 0.65; 95% CI 0.42 to 0.88). Three trials reported movement distance but could not be meta-analysed as they were not in a format for meta-analysis. There was significantly lower movement distance in the box model training compared with no training in one trial, and there were no significant differences in the movement distance between the two groups in the other two trials. None of the remaining secondary outcomes such as mortality and morbidity were reported in the trials when animal models were used for assessment of training, error in movements, and trainee satisfaction. Different methods of box training: One trial (36 participants) found significantly shorter time taken to complete the task when box training was performed using a simple cardboard box trainer compared with the standard pelvic trainer (SMD -3.79 seconds; 95% CI -4.92 to -2.65). 
There was no significant difference in the time taken to complete the task in the remaining three comparisons (reverse alignment versus forward alignment box training; box trainer suturing versus box trainer drills; and single incision versus multiport box model training). There were no significant differences in the error score between the two groups in any of the comparisons (box trainer suturing versus box trainer drills; single incision versus multiport box model training; Z-maze box training versus U-maze box training). The only trial that reported accuracy score found significantly higher accuracy score with Z-maze box training than U-maze box training (1 trial; 16 participants; SMD 1.55; 95% CI 0.39 to 2.71). One trial (36 participants) found significantly higher composite score with simple cardboard box trainer compared with conventional pelvic trainer (SMD 0.87; 95% CI 0.19 to 1.56). Another trial (22 participants) found significantly higher composite score with reverse alignment compared with forward alignment box training (SMD 1.82; 95% CI 0.79 to 2.84). There were no significant differences in the composite score between the intervention and control groups in any of the remaining comparisons. None of the secondary outcomes were adequately reported in the trials. The results of this review are threatened by both risks of systematic errors (bias) and risks of random errors (play of chance). Laparoscopic box model training appears to improve technical skills compared with no training in trainees with no previous laparoscopic experience. The impacts of this decreased time on patients and healthcare funders in terms of improved outcomes or decreased costs are unknown. There appears to be no significant differences in the improvement of technical skills between different methods of box model training. Further well-designed trials of low risk of bias and random errors are necessary. Such trials should assess the impacts of box model training on surgical skills in both the short and long term, as well as clinical outcomes when the trainee becomes competent to operate on patients.

  13. Augmented twin-nonlinear two-box behavioral models for multicarrier LTE power amplifiers.

    PubMed

    Hammi, Oualid

    2014-01-01

    A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models lead to the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% less coefficients.
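
    The mildly nonlinear memory-polynomial box at the heart of such two-box models can be sketched as follows for a complex baseband signal; the orders, coefficients, and test signal are illustrative assumptions, and the cross terms of the augmented variant are only indicated in the comment rather than implemented.

        import numpy as np

        def memory_polynomial(x, coeffs):
            """Evaluate a baseband memory-polynomial model
                y[n] = sum_{q=0..Q} sum_{k=1..K} a[q, k-1] * x[n-q] * |x[n-q]|**(k-1)
            on a complex input signal x. `coeffs` has shape (Q+1, K). Cross-term
            (augmented) variants add products of x[n-q] with powers of |x[n-q-l]|
            for l != 0 in the same fashion."""
            Q, K = coeffs.shape[0] - 1, coeffs.shape[1]
            y = np.zeros_like(x, dtype=complex)
            for q in range(Q + 1):
                xq = np.roll(x, q)
                xq[:q] = 0.0                      # zero-pad instead of wrapping around
                for k in range(1, K + 1):
                    y += coeffs[q, k - 1] * xq * np.abs(xq) ** (k - 1)
            return y

        # Illustrative 2-tap, 3rd-order model applied to a random baseband burst.
        rng = np.random.default_rng(0)
        x = (rng.normal(size=256) + 1j * rng.normal(size=256)) / np.sqrt(2)
        coeffs = np.array([[1.0 + 0.0j, 0.05j, -0.02 + 0.01j],
                           [0.01 - 0.01j, 0.0, 0.005j]])
        y = memory_polynomial(x, coeffs)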

  14. Description and Evaluation of IAP-AACM: A Global-regional Aerosol Chemistry Model for the Earth System Model CAS-ESM

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Chen, X.

    2017-12-01

    We present a first description and evaluation of the IAP Atmospheric Aerosol Chemistry Model (IAP-AACM), which has been integrated into the earth system model CAS-ESM. Its two-way coupling with the IAP Atmospheric General Circulation Model (IAP-AGCM) makes it possible to study the interaction of clouds and aerosol. The model has a nested global-regional grid based on the Global Environmental Atmospheric Transport Model (GEATM) and the Nested Air Quality Prediction Modeling System (NAQPMS). The AACM provides two optional gas chemistry schemes: the CBM-Z gas-phase chemistry and a sulfur oxidation box model designed specifically for the CAS-ESM. The model, driven by the AGCM, has been applied to a 1-year simulation of tropospheric chemistry for 2014 on both global and regional scales, and evaluated against various observational datasets, including aerosol precursor gas concentrations and aerosol mass and number concentrations. Furthermore, global budgets in the AACM are compared with those of other global aerosol models. Generally, the AACM simulations are within the range of other global aerosol model predictions, and the model shows reasonable agreement with observed gas and particle concentrations on both global and regional scales.

  15. Distribution and Validation of CERES Irradiance Global Data Products Via Web Based Tools

    NASA Technical Reports Server (NTRS)

    Rutan, David; Mitrescu, Cristian; Doelling, David; Kato, Seiji

    2016-01-01

    The CERES SYN1deg product provides climate-quality, 3-hourly, globally gridded and temporally complete maps of top-of-atmosphere, in-atmosphere, and surface fluxes. This product requires efficient release to the public and validation to maintain quality assurance. The CERES team developed web tools for the distribution of both the global gridded products and the grid boxes that contain long-term validation sites maintaining high-quality flux observations at the Earth's surface. These are found at: http://ceres.larc.nasa.gov/order_data.php. In this poster we explore the various tools available to users to sub-set and download the SYN1deg and Surface-EBAF products and to validate them using surface observations. We also analyze differences found in long-term records from well-maintained land surface sites, such as the ARM central facility, and from high-quality buoy radiometers, which due to their isolated nature cannot be maintained in the same manner as their land-based counterparts.

  16. Assessment of extreme value distributions for maximum temperature in the Mediterranean area

    NASA Astrophysics Data System (ADS)

    Beck, Alexander; Hertig, Elke; Jacobeit, Jucundus

    2015-04-01

    Extreme maximum temperatures strongly affect both the natural and the societal environment. Heat stress has great effects on flora, fauna and humans and culminates in heat-related morbidity and mortality. Agriculture and various industries are severely affected by extreme air temperatures. Under climate change conditions it becomes even more necessary to detect potential hazards arising from changes in the distributional parameters of extreme values, and this is especially relevant for the Mediterranean region, which is characterized as a climate change hot spot. Therefore, statistical approaches are developed to estimate these parameters, with a focus on non-stationarities emerging in the relationship between regional climate variables and their large-scale predictors such as sea level pressure, geopotential heights, atmospheric temperatures and relative humidity. Gridded maximum temperature data from the daily E-OBS dataset (Haylock et al., 2008) with a spatial resolution of 0.25° x 0.25° from January 1950 until December 2012 are the predictands for the present analyses. An s-mode principal component analysis (PCA) has been performed in order to reduce the data dimension and to retain different regions of similar maximum temperature variability. The grid box with the highest PC loading represents the corresponding principal component. A central part of the analyses is the model development for temperature extremes using extreme value statistics. A combined model is derived, consisting of a Generalized Pareto Distribution (GPD) model and a quantile regression (QR) model which determines the GPD location parameters. The QR model as well as the scale parameters of the GPD model are conditioned on various large-scale predictor variables. In order to account for potential non-stationarities in the predictor-temperature relationships, a special calibration and validation scheme is applied. Haylock, M. R., N. Hofstra, A. M. G. Klein Tank, E. J. Klok, P. D. Jones, and M. New (2008), A European daily high-resolution gridded data set of surface temperature and precipitation for 1950 - 2006, J. Geophys. Res., 113, D20119, doi:10.1029/2008JD010201.
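
    The combined GPD/quantile-regression construction can be sketched generically as follows: a quantile regression supplies a covariate-dependent threshold (playing the role of the GPD location), and a GPD is then fitted to the exceedances. The sketch uses statsmodels and scipy on synthetic data; the single predictor, the 0.95 quantile level, and the unconditional scale are assumptions rather than the study's configuration.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Synthetic stand-ins: one large-scale predictor and daily maximum temperature.
n = 4000
z500 = rng.standard_normal(n)                      # e.g. a standardized geopotential height
tmax = 25 + 3 * z500 + rng.gumbel(0, 2, size=n)    # toy predictand

# Step 1: quantile regression gives a covariate-dependent threshold (GPD location).
X = sm.add_constant(z500)
qr = sm.QuantReg(tmax, X).fit(q=0.95)
threshold = qr.predict(X)

# Step 2: fit a GPD to the exceedances above the regression threshold.
excess = tmax[tmax > threshold] - threshold[tmax > threshold]
shape, loc, scale = genpareto.fit(excess, floc=0.0)

print("QR threshold coefficients:", qr.params)
print(f"GPD shape = {shape:.3f}, scale = {scale:.3f} ({excess.size} exceedances)")
```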

  17. Modeling the effects of structure on seismic anisotropy in the Chester gneiss dome, southeast Vermont

    NASA Astrophysics Data System (ADS)

    Saif, S.; Brownlee, S. J.

    2017-12-01

    Compositional and structural heterogeneities in the continental crust are factors that contribute to the complex expression of crustal seismic anisotropy. Understanding deformation and flow in the crust using seismic anisotropy has thus proven difficult. Seismic anisotropy is affected by rock microstructure and mineralogy, and a number of studies have begun to characterize the full elastic tensors of crustal rocks in an attempt to increase our understanding of these intrinsic factors. However, there is still a large gap in length-scale between laboratory characterization on the scale of centimeters and seismic wavelengths on the order of kilometers. To address this length-scale gap we are developing a 3D crustal model that will help us determine the effects of rotating laboratory-scale elastic tensors into field-scale structures. The Chester gneiss dome in southeast Vermont is our primary focus. The model combines over 2000 structural data points from field measurements and published USGS structural data with elastic tensors of Chester dome rocks derived from electron backscatter diffraction data. We created a uniformly spaced grid by averaging structural measurements together in equally spaced grid boxes. The surface measurements are then projected into the third dimension using existing subsurface interpretations. A measured elastic tensor for the specific rock type is rotated according to its unique structural input at each point in the model. The goal is to use this model to generate artificial seismograms using existing numerical wave propagation codes. Once completed, the model input can be varied to examine the effects of different subsurface structure interpretations, as well as heterogeneity in rock composition and elastic tensors. Our goal is to be able to make predictions for how specific structures will appear in seismic data, and how that appearance changes with variations in rock composition.
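
    The core operation of placing laboratory-measured elasticity into field orientations is a tensor rotation, C'_ijkl = R_ia R_jb R_kc R_ld C_abcd. The sketch below applies it with numpy and scipy to a placeholder stiffness tensor; the stiffness values and the strike/dip-to-rotation convention are illustrative assumptions, not values from the Chester dome dataset.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def isotropic_stiffness(lam, mu):
    """Isotropic elastic tensor C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk)."""
    d = np.eye(3)
    return (lam * np.einsum("ij,kl->ijkl", d, d)
            + mu * (np.einsum("ik,jl->ijkl", d, d) + np.einsum("il,jk->ijkl", d, d)))

def rotate_stiffness(c, rot):
    """Rotate a 3x3x3x3 stiffness tensor: C'_ijkl = R_ia R_jb R_kc R_ld C_abcd."""
    R = rot.as_matrix()
    return np.einsum("ia,jb,kc,ld,abcd->ijkl", R, R, R, R, c)

# Placeholder stiffness (GPa): isotropic background with a weak slow axis along x3.
c = isotropic_stiffness(lam=40.0, mu=30.0)
c[2, 2, 2, 2] *= 0.85                      # crude transverse anisotropy, illustrative only

# One structural measurement (assumed convention): foliation strike and dip in degrees.
strike, dip = 40.0, 25.0
rot = Rotation.from_euler("zx", [strike, dip], degrees=True)

c_field = rotate_stiffness(c, rot)
print("C_1111 lab vs field frame (GPa):",
      round(float(c[0, 0, 0, 0]), 2), round(float(c_field[0, 0, 0, 0]), 2))
```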

  18. Augmented Twin-Nonlinear Two-Box Behavioral Models for Multicarrier LTE Power Amplifiers

    PubMed Central

    2014-01-01

    A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models lead to the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients. PMID:24624047

  19. Crash energy absorption of two-segment crash box with holes under frontal load

    NASA Astrophysics Data System (ADS)

    Choiron, Moch. Agus; Sudjito; Hidayati, Nafisah Arina

    2016-03-01

    A crash box is one of the passive safety components designed to absorb impact energy during a collision. Crash box designs have been developed in order to obtain the optimum crashworthiness performance. A circular cross section was first investigated with a one-segment design; its performance is strongly influenced by its length, which makes it sensitive to buckling. In this study, a two-segment crash box design with additional holes is investigated, and its deformation behavior and crash energy absorption are observed. The crash box is modelled by finite element analysis. The crash test components are the impactor, the crash box, and a fixed rigid base. The impactor and the fixed base are modelled as rigid bodies, and the crash box material as bilinear isotropic hardening. A crash box length of 100 mm and a frontal crash velocity of 16 km/h are selected, with aluminum alloy as the crash box material. The simulation results show that the configuration with 2 holes located at ¾ of the length has the largest crash energy absorption. This is associated with the deformation pattern: this crash box model produces an axisymmetric collapse mode, unlike the other models.
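
    For orientation only, crash energy absorption is commonly evaluated as the area under the force-displacement (crush) curve. The sketch below integrates a synthetic curve; the curve shape and magnitudes are invented for illustration and are unrelated to the paper's finite element results.

```python
import numpy as np
from scipy.integrate import trapezoid

# Synthetic force-displacement (crush) curve; values are illustrative only.
displacement = np.linspace(0.0, 0.07, 200)                  # m (~70 mm of crush)
force = 60e3 * (1 - np.exp(-displacement / 0.005)) \
        + 10e3 * np.sin(2 * np.pi * displacement / 0.02)    # N (plateau + folding lobes)

energy_absorbed = trapezoid(force, displacement)            # J, area under the curve
mean_crush_force = energy_absorbed / displacement[-1]
print(f"absorbed energy: {energy_absorbed/1e3:.2f} kJ, "
      f"mean crushing force: {mean_crush_force/1e3:.1f} kN")
```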

  20. A white-box model of S-shaped and double S-shaped single-species population growth

    PubMed Central

    Kalmykov, Lev V.

    2015-01-01

    Complex systems may be mechanistically modelled by white-box modeling using logical deterministic individual-based cellular automata. Mathematical models of complex systems are of three types: black-box (phenomenological), white-box (mechanistic, based on first principles) and grey-box (mixtures of phenomenological and mechanistic models). Most basic ecological models are of the black-box type, including the Malthusian, Verhulst, and Lotka–Volterra models. In black-box models, the individual-based (mechanistic) mechanisms of population dynamics remain hidden. Here we mechanistically model the S-shaped and double S-shaped population growth of vegetatively propagated rhizomatous lawn grasses. Using purely logical deterministic individual-based cellular automata we create a white-box model. From a general physical standpoint, the vegetative propagation of plants is an analogue of excitation propagation in excitable media. Using the Monte Carlo method, we investigate the role of different initial positions of an individual in the habitat. We have investigated mechanisms of single-species population growth limited by habitat size, intraspecific competition, regeneration time and fecundity of individuals, under two types of boundary conditions and two types of fecundity. In addition, we have compared the S-shaped and J-shaped population growth. We consider this white-box modeling approach as a method of artificial intelligence which works as automatic hyper-logical inference from the first principles of the studied subject. This approach is promising for direct mechanistic insights into the nature of any complex system. PMID:26038717
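
    A minimal sketch of a logical, deterministic, individual-based cellular automaton of vegetative propagation is given below; the grid size, regeneration delay, and periodic (torus) boundary are simplifying assumptions rather than the published model, but the occupied-cell count already traces an S-shaped curve limited by habitat size.

```python
import numpy as np

def grow(size=60, steps=120, regen_time=2):
    """Minimal deterministic individual-based cellular automaton: every free cell
    adjacent (von Neumann neighbourhood) to an occupied cell whose regeneration
    delay has expired becomes occupied; the torus boundary is a simplification."""
    occupied = np.zeros((size, size), dtype=bool)
    age = np.zeros((size, size), dtype=int)
    occupied[size // 2, size // 2] = True          # single founder in the habitat centre
    counts = []
    for _ in range(steps):
        age[occupied] += 1
        ready = (occupied & (age >= regen_time)).astype(int)
        neighbours = (np.roll(ready, 1, 0) + np.roll(ready, -1, 0)
                      + np.roll(ready, 1, 1) + np.roll(ready, -1, 1))
        occupied |= (~occupied) & (neighbours > 0)
        counts.append(int(occupied.sum()))
    return np.array(counts)

population = grow()
print(population[::10])   # slow start, rapid expansion, saturation at habitat size (S-shape)
```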

  1. Microbiological testing of raw, boxed beef in the context of hazard analysis critical control point at a high-line-speed abattoir.

    PubMed

    Jericho, K W; Kozub, G C; Gannon, V P; Taylor, C M

    2000-12-01

    The efficacy of cold storage of raw, bagged, boxed beef was assessed microbiologically at a high-line-speed abattoir (270 carcasses per h). At the time of this study, plant management was in the process of creating a hazard analysis critical control point plan for all processes. Aerobic bacteria, coliforms, and type 1 Escherichia coli were enumerated (5 by 5-cm excision samples, hydrophobic grid membrane filter technology) before and after cold storage of this final product produced at six fabrication tables. In addition, the temperature-function integration technique (TFIT) was used to calculate the potential number of generations of E. coli during the first 24 or 48 h of storage of the boxed beef. Based on the temperature histories (total of 60 boxes, resulting from 12 product cuts, five boxes from each of two fabrication tables on each of 6 sampling days, and six types of fabrication tables), TFIT did not predict any growth of E. coli (with or without lag) for the test period. This was verified by E. coli mean log10 values of 0.65 to 0.42 cm2 (P > 0.05) determined by culture before and after the cooling process, respectively. Counts of aerobic bacteria and coliforms were significantly reduced (P < 0.001 and P < 0.05, respectively) during the initial period of the cooling process. There were significant microbiological differences (P < 0.05) between table-cut units.

  2. On the effect of model parameters on forecast objects

    NASA Astrophysics Data System (ADS)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
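
    A compact sketch of the methodology's first and third ingredients, under assumed parameter ranges and a toy stand-in for the forecast model, might look as follows: Latin hypercube sampling of the parameters (scipy.stats.qmc) followed by a multiple linear regression of an object feature on those parameters, whose slopes act as sensitivities.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(2)

# Step 1: Latin hypercube sample of three hypothetical model parameters in [0, 1].
sampler = qmc.LatinHypercube(d=3, seed=2)
params = sampler.random(n=40)

# Step 2: toy stand-in for "run the model and extract an object feature"
# (a fake object count depending mostly on the first parameter).
def object_feature(p):
    return 10 + 8 * p[0] - 2 * p[1] + rng.normal(0, 0.5)

feature = np.array([object_feature(p) for p in params])

# Step 3: multiple linear regression of the feature on the parameters;
# the fitted slopes play the role of sensitivities.
X = np.column_stack([np.ones(len(params)), params])
beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
print("intercept and per-parameter sensitivities:", np.round(beta, 2))
```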

  3. An Isopycnal Box Model with predictive deep-ocean structure for biogeochemical cycling applications

    NASA Astrophysics Data System (ADS)

    Goodwin, Philip

    2012-07-01

    To simulate global ocean biogeochemical tracer budgets a model must accurately determine both the volume and surface origins of each water-mass. Water-mass volumes are dynamically linked to the ocean circulation in General Circulation Models, but at the cost of high computational load. In computationally efficient Box Models the water-mass volumes are simply prescribed and do not vary when the circulation transport rates or water mass densities are perturbed. A new computationally efficient Isopycnal Box Model is presented in which the sub-surface box volumes are internally calculated from the prescribed circulation using a diffusive conceptual model of the thermocline, in which upwelling of cold dense water is balanced by a downward diffusion of heat. The volumes of the sub-surface boxes are set so that the density stratification satisfies an assumed link between diapycnal diffusivity, κ_d, and buoyancy frequency, N: κ_d = c/N^α, where c and α are user-prescribed parameters. In contrast to conventional Box Models, the volumes of the sub-surface ocean boxes in the Isopycnal Box Model are dynamically linked to circulation, and automatically respond to circulation perturbations. This dynamical link allows an important facet of ocean biogeochemical cycling to be simulated in a highly computationally efficient model framework.
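
    A rough numerical sketch of the kind of calculation implied (not the paper's actual formulation): for each sub-surface interface with a prescribed density jump, iterate the layer thickness until an advective-diffusive scale-height balance dz = κ_d/w is consistent with κ_d = c/N^α. The upwelling velocity, the constants c and α, and the specific balance used here are assumptions.

```python
import numpy as np

g, rho0 = 9.81, 1027.0            # gravity (m s^-2), reference density (kg m^-3)
area = 3.6e14                     # ocean surface area (m^2), rounded
w = 1.0e-7                        # prescribed upwelling velocity (m s^-1), assumed
c, alpha = 1.0e-7, 1.0            # kappa_d = c / N**alpha, assumed constants

delta_rho = np.array([0.5, 0.4, 0.3])   # density jumps across interfaces (kg m^-3), assumed

def layer_thickness(drho, n_iter=50):
    """Fixed-point iteration for dz with dz = kappa_d(N)/w and N^2 = g*drho/(rho0*dz)."""
    dz = 500.0                            # initial guess (m)
    for _ in range(n_iter):
        N = np.sqrt(g * drho / (rho0 * dz))
        kappa = c / N ** alpha
        dz = kappa / w
    return dz

thicknesses = np.array([layer_thickness(d) for d in delta_rho])
volumes = area * thicknesses
for d, dz, v in zip(delta_rho, thicknesses, volumes):
    print(f"d_rho = {d:.1f} kg/m3 -> dz = {dz:7.1f} m, volume = {v:.2e} m3")
```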

  4. Periodic shunted arrays for the control of noise radiation in an enclosure

    NASA Astrophysics Data System (ADS)

    Casadei, Filippo; Dozio, Lorenzo; Ruzzene, Massimo; Cunefare, Kenneth A.

    2010-08-01

    This work presents numerical and experimental investigations of the application of a periodic array of resistive-inductive (RL) shunted piezoelectric patches for the attenuation of broadband noise radiated by a flexible plate in an enclosed cavity. A 4×4 lay-out of piezoelectric patches is bonded to the surface of a rectangular plate fully clamped to the top face of a rectangular cavity. Each piezo-patch is shunted through a single RL circuit, and all shunting circuits are tuned at the same frequency. The response of the resulting periodic structure is characterized by frequency bandgaps where vibrations and associated noise are strongly attenuated. The location and extent of induced bandgaps are predicted by the application of Bloch theorem on a unit cell of the periodic assembly, and they are controlled by proper selection of the shunting circuit impedance. A coupled piezo-structural-acoustic finite element model is developed to evaluate the noise reduction performance. Strong attenuation of multiple panel-controlled modes is observed over broad frequency bands. The proposed concept is tested on an aluminum plate mounted in a wooden box and driven by a shaker. Experimental results are presented in terms of pressure responses recorded using a grid of microphones placed inside the acoustic box.

  5. Crash energy absorption of two-segment crash box with holes under frontal load

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choiron, Moch Agus, E-mail: agus-choiron@ub.ac.id; Sudjito,; Hidayati, Nafisah Arina

    A crash box is one of the passive safety components designed to absorb impact energy during a collision. Crash box designs have been developed in order to obtain the optimum crashworthiness performance. A circular cross section was first investigated with a one-segment design; its performance is strongly influenced by its length, which makes it sensitive to buckling. In this study, a two-segment crash box design with additional holes is investigated, and its deformation behavior and crash energy absorption are observed. The crash box is modelled by finite element analysis. The crash test components are the impactor, the crash box, and a fixed rigid base. The impactor and the fixed base are modelled as rigid bodies, and the crash box material as bilinear isotropic hardening. A crash box length of 100 mm and a frontal crash velocity of 16 km/h are selected, with aluminum alloy as the crash box material. The simulation results show that the configuration with 2 holes located at ¾ of the length has the largest crash energy absorption. This is associated with the deformation pattern: this crash box model produces an axisymmetric collapse mode, unlike the other models.

  6. A Finite Element Solution of Lateral Periodic Poisson–Boltzmann Model for Membrane Channel Proteins

    PubMed Central

    Xu, Jingjie; Lu, Benzhuo

    2018-01-01

    Membrane channel proteins control the diffusion of ions across biological membranes. They are closely related to the processes of various organizational mechanisms, such as: cardiac impulse, muscle contraction and hormone secretion. Introducing a membrane region into implicit solvation models extends the ability of the Poisson–Boltzmann (PB) equation to handle membrane proteins. The use of lateral periodic boundary conditions can properly simulate the discrete distribution of membrane proteins on the membrane plane and avoid boundary effects, which are caused by the finite box size in the traditional PB calculations. In this work, we: (1) develop a first finite element solver (FEPB) to solve the PB equation with a two-dimensional periodicity for membrane channel proteins, with different numerical treatments of the singular charges distributions in the channel protein; (2) add the membrane as a dielectric slab in the PB model, and use an improved mesh construction method to automatically identify the membrane channel/pore region even with a tilt angle relative to the z-axis; and (3) add a non-polar solvation energy term to complete the estimation of the total solvation energy of a membrane protein. A mesh resolution of about 0.25 Å (cubic grid space)/0.36 Å (tetrahedron edge length) is found to be most accurate in linear finite element calculation of the PB solvation energy. Computational studies are performed on a few exemplary molecules. The results indicate that all factors, the membrane thickness, the length of periodic box, membrane dielectric constant, pore region dielectric constant, and ionic strength, have individually considerable influence on the solvation energy of a channel protein. This demonstrates the necessity to treat all of those effects in the PB model for membrane protein simulations. PMID:29495644

  7. A Finite Element Solution of Lateral Periodic Poisson-Boltzmann Model for Membrane Channel Proteins.

    PubMed

    Ji, Nan; Liu, Tiantian; Xu, Jingjie; Shen, Longzhu Q; Lu, Benzhuo

    2018-02-28

    Membrane channel proteins control the diffusion of ions across biological membranes. They are closely related to the processes of various organizational mechanisms, such as: cardiac impulse, muscle contraction and hormone secretion. Introducing a membrane region into implicit solvation models extends the ability of the Poisson-Boltzmann (PB) equation to handle membrane proteins. The use of lateral periodic boundary conditions can properly simulate the discrete distribution of membrane proteins on the membrane plane and avoid boundary effects, which are caused by the finite box size in the traditional PB calculations. In this work, we: (1) develop a first finite element solver (FEPB) to solve the PB equation with a two-dimensional periodicity for membrane channel proteins, with different numerical treatments of the singular charges distributions in the channel protein; (2) add the membrane as a dielectric slab in the PB model, and use an improved mesh construction method to automatically identify the membrane channel/pore region even with a tilt angle relative to the z -axis; and (3) add a non-polar solvation energy term to complete the estimation of the total solvation energy of a membrane protein. A mesh resolution of about 0.25 Å (cubic grid space)/0.36 Å (tetrahedron edge length) is found to be most accurate in linear finite element calculation of the PB solvation energy. Computational studies are performed on a few exemplary molecules. The results indicate that all factors, the membrane thickness, the length of periodic box, membrane dielectric constant, pore region dielectric constant, and ionic strength, have individually considerable influence on the solvation energy of a channel protein. This demonstrates the necessity to treat all of those effects in the PB model for membrane protein simulations.

  8. Laparoscopic surgical box model training for surgical trainees with limited prior laparoscopic experience.

    PubMed

    Gurusamy, Kurinchi Selvan; Nagendran, Myura; Toon, Clare D; Davidson, Brian R

    2014-03-01

    Surgical training has traditionally been one of apprenticeship, where the surgical trainee learns to perform surgery under the supervision of a trained surgeon. This is time consuming, costly, and of variable effectiveness. Training using a box model physical simulator is an option to supplement standard training. However, the value of this modality for trainees with limited prior laparoscopic experience is unknown. To compare the benefits and harms of box model training for surgical trainees with limited prior laparoscopic experience versus standard surgical training or supplementary animal model training. We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index Expanded to May 2013. We planned to include all randomised clinical trials comparing box model trainers versus other forms of training including standard laparoscopic training and supplementary animal model training in surgical trainees with limited prior laparoscopic experience. We also planned to include trials comparing different methods of box model training. Two authors independently identified trials and collected data. We analysed the data with both the fixed-effect and the random-effects models using Review Manager 5. For each outcome, we calculated the risk ratio (RR), mean difference (MD), or standardised mean difference (SMD) with 95% confidence intervals (CI) based on intention-to-treat analysis whenever possible. We identified eight trials that met the inclusion criteria. One trial including 17 surgical trainees did not contribute to the meta-analysis. We included seven trials (249 surgical trainees belonging to various postgraduate years ranging from year one to four) in which the participants were randomised to supplementary box model training (122 trainees) versus standard training (127 trainees). Only one trial (50 trainees) was at low risk of bias. The box trainers used in all seven trials were video trainers. Six trials were conducted in the USA and one trial in Canada. The surgeries in which the final assessments were made included laparoscopic total extraperitoneal hernia repairs, laparoscopic cholecystectomy, laparoscopic tubal ligation, laparoscopic partial salpingectomy, and laparoscopic bilateral mid-segment salpingectomy. The final assessments were made on a single operative procedure. There were no deaths in three trials (0/82 (0%) supplementary box model training versus 0/86 (0%) standard training; RR not estimable; very low quality evidence). The other trials did not report mortality. The estimated effect on serious adverse events was compatible with benefit and harm (three trials; 168 patients; 0/82 (0%) supplementary box model training versus 1/86 (1.1%) standard training; RR 0.36; 95% CI 0.02 to 8.43; very low quality evidence). None of the trials reported patient quality of life. The operating time was significantly shorter in the supplementary box model training group versus the standard training group (1 trial; 50 patients; MD -6.50 minutes; 95% CI -10.85 to -2.15). The proportion of patients who were discharged as day-surgery was significantly higher in the supplementary box model training group versus the standard training group (1 trial; 50 patients; 24/24 (100%) supplementary box model training versus 15/26 (57.7%) standard training; RR 1.71; 95% CI 1.23 to 2.37). None of the trials reported trainee satisfaction. The operating performance was significantly better in the supplementary box model training group versus the standard training group (seven trials; 249 trainees; SMD 0.84; 95% CI 0.57 to 1.10). None of the trials compared box model training versus animal model training or versus different methods of box model training. There is insufficient evidence to determine whether laparoscopic box model training reduces mortality or morbidity. There is very low quality evidence that it improves technical skills compared with standard surgical training in trainees with limited previous laparoscopic experience. It may also decrease operating time and increase the proportion of patients who were discharged as day-surgery in the first total extraperitoneal hernia repair after box model training. However, the duration of the benefit of box model training is unknown. Further well-designed trials at low risk of bias and random errors are necessary. Such trials should assess the long-term impact of box model training on clinical outcomes and compare box training with other forms of training.
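
    As a worked illustration of the risk-ratio summaries quoted above (with hypothetical counts, not the review's data), the standard log-scale confidence interval can be computed as follows.

```python
import math

# Hypothetical 2x2 table (events / totals), for illustration only:
a, n1 = 18, 40      # events and total in the box-model training arm
b, n2 = 12, 42      # events and total in the standard training arm

rr = (a / n1) / (b / n2)
# Standard error of log(RR) from the usual delta-method formula.
se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```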

  9. Numerical investigation of supersonic turbulent boundary layers with high wall temperature

    NASA Technical Reports Server (NTRS)

    Guo, Y.; Adams, N. A.

    1994-01-01

    A direct numerical approach has been developed to simulate supersonic turbulent boundary layers. The mean flow quantities are obtained by solving the parabolized Reynolds-averaged Navier-Stokes equations (globally). Fluctuating quantities are computed locally with a temporal direct numerical simulation approach, in which nonparallel effects of boundary layers are partially modeled. Preliminary numerical results obtained at the free-stream Mach numbers 3, 4.5, and 6 with hot-wall conditions are presented. Approximately 5 million grid points are used in all three cases. The numerical results indicate that compressibility effects on turbulent kinetic energy, in terms of dilatational dissipation and pressure-dilatation correlation, are small. Due to the hot-wall conditions the results show significant low Reynolds number effects and large streamwise streaks. Further simulations with a bigger computational box or a cold-wall condition are desirable.

  10. Surface Temperature Data Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, James; Ruedy, Reto

    2012-01-01

    Small global mean temperature changes may have significant to disastrous consequences for the Earth's climate if they persist for an extended period. Obtaining global means from local weather reports is hampered by the uneven spatial distribution of the reliably reporting weather stations. Methods had to be developed that minimize as far as possible the impact of that situation. This software is a method of combining temperature data of individual stations to obtain a global mean trend, overcoming/estimating the uncertainty introduced by the spatial and temporal gaps in the available data. Useful estimates were obtained by the introduction of a special grid, subdividing the Earth's surface into 8,000 equal-area boxes, using the existing data to create virtual stations at the center of each of these boxes, and combining temperature anomalies (after assessing the radius of high correlation) rather than temperatures.
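
    A hedged sketch of the approach, not the operational code: subdivide the sphere into equal-area boxes by spacing box edges uniformly in sin(latitude) and longitude, average station anomalies within each box to form virtual stations, and take a plain mean over populated boxes. The 40 x 200 layout and synthetic station data below are assumptions chosen to give roughly 8,000 boxes.

```python
import numpy as np

rng = np.random.default_rng(3)

# 40 x 200 = 8,000 equal-area boxes: uniform spacing in sin(latitude) and longitude.
n_lat, n_lon = 40, 200
sin_edges = np.linspace(-1.0, 1.0, n_lat + 1)
lon_edges = np.linspace(-180.0, 180.0, n_lon + 1)

# Fake station anomalies (deg C) with coordinates; real input would be station records.
n_stations = 5000
station_lat = np.degrees(np.arcsin(rng.uniform(-1, 1, n_stations)))
station_lon = rng.uniform(-180, 180, n_stations)
station_anom = rng.normal(0.4, 1.0, n_stations)

# Assign each station to a box and average within boxes ("virtual stations").
i = np.clip(np.digitize(np.sin(np.radians(station_lat)), sin_edges) - 1, 0, n_lat - 1)
j = np.clip(np.digitize(station_lon, lon_edges) - 1, 0, n_lon - 1)

box_sum = np.zeros((n_lat, n_lon))
box_cnt = np.zeros((n_lat, n_lon))
np.add.at(box_sum, (i, j), station_anom)
np.add.at(box_cnt, (i, j), 1)

has_data = box_cnt > 0
box_anom = np.where(has_data, box_sum / np.maximum(box_cnt, 1), np.nan)

# Because the boxes have equal area, the global mean is a plain average over boxes with data.
global_mean = np.nanmean(box_anom)
print(f"{has_data.sum()} of {n_lat * n_lon} boxes populated; "
      f"global mean anomaly = {global_mean:.2f} C")
```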

  11. A novel storage system for cryoEM samples.

    PubMed

    Scapin, Giovanna; Prosise, Winifred W; Wismer, Michael K; Strickland, Corey

    2017-07-01

    We present here a new cryoEM grid box storage system designed to simplify sample labeling, tracking and retrieval. The system is based on the crystal pucks widely used by the X-ray crystallographic community for storage and shipping of crystals. This system is suitable for any cryoEM laboratory, but especially for large facilities that will need accurate tracking of large numbers of samples coming from different sources.

  12. A First Look at Surface Meteorology in the Arctic System Reanalysis

    NASA Astrophysics Data System (ADS)

    Slater, A. G.; Serreze, M. C.; Asr-Team, A.

    2010-12-01

    The Arctic System Reanalysis (ASR) is a joint venture between several universities (Ohio-State Uni., Uni. Colorado, Uni. Illinois UC, Uni. Alaska) and NCAR. It is a regional reanalysis that will span the period 2000-2010, possibly continuing into the future. Compared to current regional or global reanalyses it will have a spatial resolution twice that of prior efforts; the final product is expected to be an equal-area projection with 15 km grid boxes. The domain encompasses all the Arctic Ocean drainage areas. Several new reanalysis applications have been implemented, with some being Arctic specific - for example, satellite-derived sea ice age is translated into thickness and MODIS surface albedo is to be ingested. A preliminary ASR run has been performed for the period June 2007 - December 2008 at a reduced resolution of 30 km. Here we make a comparison of all recent reanalysis products (NARR, MERRA, ERA-I, CFSRR) to both the ASR and observations at 350 surface stations in the Western Arctic; there is a major focus on Alaska. An intercomparison of surface variables (which are perhaps the most used reanalysis data) has been undertaken, including temperature, humidity and solar radiation. Results indicate that the level of discrepancy between reanalysis data and observations is of similar magnitude as that between the reanalysis products themselves, possibly suggesting that we have reached the limit of representativeness when comparing grid boxes to point measurements.

  13. 2D modeling of electromagnetic waves in cold plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crombé, K.; Van Eester, D.; Koch, R.

    2014-02-12

    The consequences of sheath (rectified) electric fields, resulting from the different mobility of electrons and ions as a response to radio frequency (RF) fields, are a concern for RF antenna design, as they can cause damage to antenna parts, limiters and other in-vessel components. As a first step towards a more complete description, the usual cold plasma dielectric description has been adopted, and the density profile is assumed to be known as input. Ultimately, the relevant equations describing the wave-particle interaction on both the fast and slow timescales will need to be tackled, but prior to doing so it was felt necessary to get a feeling for the wave dynamics involved. Maxwell's equations are solved for a cold plasma in a 2D antenna box with strongly varying density profiles that also cross lower hybrid and ion-ion hybrid resonance layers. Numerical modelling quickly becomes demanding on computer power, since a fine grid spacing is required to capture the small-wavelength effects of strongly evanescent modes.

  14. Global and Local Stress Analyses of McDonnell Douglas Stitched/RFI Composite Wing Stub Box

    NASA Technical Reports Server (NTRS)

    Wang, John T.

    1996-01-01

    This report contains results of structural analyses performed in support of the NASA structural testing of an all-composite stitched/RFI (resin film infusion) wing stub box. McDonnell Douglas Aerospace Company designed and fabricated the wing stub box. The analyses used a global/local approach. The global model contains the entire test article. It includes the all-composite stub box, a metallic load-transition box and a metallic wing-tip extension box. The two metallic boxes are connected to the inboard and outboard ends of the composite wing stub box, respectively. The load-transition box was attached to a steel and concrete vertical reaction structure and a load was applied at the tip of the extension box to bend the wing stub box upward. The local model contains an upper cover region surrounding three stringer runouts. In that region, a large nonlinear deformation was identified by the global analyses. A more detailed mesh was used for the local model to obtain more accurate analysis results near stringer runouts. Numerous analysis results such as deformed shapes, displacements at selected locations, and strains at critical locations are included in this report.

  15. The Development of Storm Surge Ensemble Prediction System and Case Study of Typhoon Meranti in 2016

    NASA Astrophysics Data System (ADS)

    Tsai, Y. L.; Wu, T. R.; Terng, C. T.; Chu, C. H.

    2017-12-01

    Taiwan, located in a zone where severe storms are generated, is under threat of storm surge and the associated inundation. Ensemble prediction can help forecasters characterize storm surge under the uncertainty of storm track and intensity, and it also supports deterministic forecasting. In this study, the kernel of the ensemble prediction system is COMCOT-SURGE (COrnell Multi-grid COupled Tsunami Model - Storm Surge). COMCOT-SURGE solves the nonlinear shallow water equations in the open ocean and in coastal regions with a nested-grid scheme and adopts a wet-dry-cell treatment to calculate the potential inundation area. To account for tide-surge interaction, the global TPXO 7.1 tide model provides the tidal boundary conditions. After a series of validations and case studies, COMCOT-SURGE has become an official operational system of the Central Weather Bureau (CWB) in Taiwan. In this study, the strongest typhoon of 2016, Typhoon Meranti, is chosen as a case study. We adopt twenty ensemble members from the CWB WRF Ensemble Prediction System (CWB WEPS), which differ in their microphysics, boundary layer, cumulus, and surface parameters. The box-and-whisker results show that the maximum observed storm surges fell within the interval between the first and third quartiles at more than 70% of gauge locations, e.g. Toucheng, Chengkung, and Jiangjyun. In conclusion, ensemble prediction can effectively help forecasters predict storm surge, especially under uncertainty in storm track and intensity.

  16. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation (exemplified in urban areas) where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  17. Does box model training improve surgical dexterity and economy of movement during virtual reality laparoscopy? A randomised trial.

    PubMed

    Clevin, Lotte; Grantcharov, Teodor P

    2008-01-01

    Laparoscopic box model trainers have been used in training curricula for a long time; however, data on their impact on skills acquisition are still limited. Our aim was to validate a low-cost box model trainer as a tool for training skills relevant to laparoscopic surgery. Randomised, controlled trial (Canadian Task Force Classification I). University Hospital. Sixteen gynaecologic residents with limited laparoscopic experience were randomised to a group that received a structured box model training curriculum and a control group. Performance before and after the training was assessed in a virtual reality laparoscopic trainer (LapSim) and was based on objective parameters registered by the computer system (time, error, and economy of motion scores). Group A showed significantly greater improvement in all performance parameters compared with the control group: economy of movement (p=0.001), time (p=0.001) and tissue damage (p=0.036), confirming the positive impact of a box-trainer curriculum on laparoscopic skills acquisition. Structured laparoscopic skill training on a low-cost box model trainer improves performance as assessed using the VR system. Trainees who used the box model trainer showed significant improvement compared to the control group. Box model trainers are valid tools for laparoscopic skills training and should be implemented in comprehensive training curricula in gynaecology.

  18. A Calculus for Boxes and Traits in a Java-Like Setting

    NASA Astrophysics Data System (ADS)

    Bettini, Lorenzo; Damiani, Ferruccio; de Luca, Marco; Geilmann, Kathrin; Schäfer, Jan

    The box model is a component model for the object-oriented paradigm that defines components (the boxes) with clear encapsulation boundaries. Having well-defined boundaries is crucial in component-based software development, because it enables reasoning about the interference and interaction between a component and its context. In general, boxes contain several objects and inner boxes, of which some are local to the box and cannot be accessed from other boxes and some are accessible by other boxes. A trait is a set of methods divorced from any class hierarchy. Traits can be composed together to form classes or other traits. We present a calculus for boxes and traits. Traits are units of fine-grained reuse, whereas boxes can be seen as units of coarse-grained reuse. The calculus is equipped with an ownership type system and allows us to combine coarse- and fine-grained reuse of code while maintaining encapsulation of components.

  19. Multigrid direct numerical simulation of the whole process of flow transition in 3-D boundary layers

    NASA Technical Reports Server (NTRS)

    Liu, Chaoqun; Liu, Zhining

    1993-01-01

    A new technology was developed in this study which provides a successful numerical simulation of the whole process of flow transition in 3-D boundary layers, including linear growth, secondary instability, breakdown, and transition at relatively low CPU cost. Most other spatial numerical simulations require high CPU cost and blow up at the stage of flow breakdown. A fourth-order finite difference scheme on stretched and staggered grids, a fully implicit time marching technique, a semi-coarsening multigrid based on the so-called approximate line-box relaxation, and a buffer domain for the outflow boundary conditions were all used for high-order accuracy, good stability, and fast convergence. A new fine-coarse-fine grid mapping technique was developed to keep the code running after the laminar flow breaks down. The computational results are in good agreement with linear stability theory, secondary instability theory, and some experiments. The cost for a typical case with 162 x 34 x 34 grid is around 2 CRAY-YMP CPU hours for 10 T-S periods.

  20. Heterogeneous collaborative sensor network for electrical management of an automated house with PV energy.

    PubMed

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Alvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefaná; Masa-Bote, Daniel; Jiménez-Leube, Javier

    2011-01-01

    In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid" which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called "MagicBox" equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency.

  1. Processing of Cloud Databases for the Development of an Automated Global Cloud Climatology

    DTIC Science & Technology

    1991-06-30

    cloud amounts in each DOE grid box. The actual population values were coded into one- and two-digit codes, primarily for printing purposes. According to Lund, Grantham, and Davis (1980), the quality of the whole sky photographs used in producing the WSP digital data ensemble was

  2. Parameterization of GCM subgrid nonprecipitating cumulus and stratocumulus clouds using stochastic/phenomenological methods. Annual technical progress report, 1 December 1992--30 November 1993

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stull, R.B.

    1993-08-27

    This document is a progress report to the USDOE Atmospheric Radiation and Measurement Program (ARM). The overall project goal is to relate subgrid-cumulus-cloud formation, coverage, and population characteristics to statistical properties of surface-layer air, which in turn are modulated by heterogeneous land-usage within GCM-grid-box-size regions. The motivation is to improve the understanding and prediction of climate change by more accurately describing radiative and cloud processes.

  3. Evaluating and Improving Wind Forecasts over South China: The Role of Orographic Parameterization in the GRAPES Model

    NASA Astrophysics Data System (ADS)

    Zhong, Shuixin; Chen, Zitong; Xu, Daosheng; Zhang, Yanxia

    2018-06-01

    Unresolved small-scale orographic (SSO) drags are parameterized in a regional model based on the Global/Regional Assimilation and Prediction System for the Tropical Mesoscale Model (GRAPES TMM). The SSO drags are represented by adding a sink term in the momentum equations. The maximum height of the mountain within the grid box is adopted in the SSO parameterization (SSOP) scheme as compensation for the drag. The effects of the unresolved topography are parameterized as the feedbacks to the momentum tendencies on the first model level in planetary boundary layer (PBL) parameterization. The SSOP scheme has been implemented and coupled with the PBL parameterization scheme within the model physics package. A monthly simulation is designed to examine the performance of the SSOP scheme over the complex terrain areas located in the southwest of Guangdong. The verification results show that the surface wind speed bias has been much alleviated by adopting the SSOP scheme, in addition to reduction of the wind bias in the lower troposphere. The target verification over Xinyi shows that the simulations with the SSOP scheme provide improved wind estimation over the complex regions in the southwest of Guangdong.
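
    The sink-term idea can be illustrated with a toy tendency in which the drag coefficient scales with the unresolved maximum mountain height in the grid box; the functional form and constants below are assumptions for illustration, not the GRAPES TMM scheme.

```python
import numpy as np

def sso_drag_tendency(u, v, h_max, dz=100.0, c0=5e-4, h_ref=1000.0):
    """Toy small-scale-orography sink term for the lowest-level winds:
    tendency = -Cd * |V| * (u, v) / dz, with Cd scaled by the unresolved
    maximum mountain height in the grid box (illustrative form only)."""
    cd = c0 * np.clip(h_max / h_ref, 0.0, 3.0)
    speed = np.hypot(u, v)
    du_dt = -cd * speed * u / dz
    dv_dt = -cd * speed * v / dz
    return du_dt, dv_dt

# One grid box: 8 m/s westerly over 1.5 km unresolved peaks, 100 m thick lowest layer.
du, dv = sso_drag_tendency(u=8.0, v=0.0, h_max=1500.0)
print(f"du/dt = {du * 3600:.2f} m/s per hour")   # wind-speed reduction from the sink term
```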

  4. GRChombo: Numerical relativity with adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clough, Katy; Figueras, Pau; Finkel, Hal

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial 'many-boxes-in-many-boxes' mesh hierarchies and massive parallelism through the message passing interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  5. [Spatial distribution pattern and fractal analysis of Larix chinensis populations in Qinling Mountain].

    PubMed

    Guo, Hua; Wang, Xiaoan; Xiao, Yaping

    2005-02-01

    In this paper, the fractal characteristics of Larix chinensis populations in Qinling Mountain were studied by the contiguous grid quadrat sampling method, using the box-counting dimension and the information dimension. The results showed that the high box-counting dimension (1.8087) and information dimension (1.7931) reflect a high degree of spatial occupation by L. chinensis populations. Judged by the dispersal index and Morisita's pattern index, L. chinensis populations were clumped at three different age stages (0-25, 25-50 and over 50 years). From Greig-Smith's mean variance analysis, the pattern-scale plot showed that L. chinensis populations clumped at 128 m2 and 512 m2, and that the different age groups clumped at different scales. The pattern intensities decreased with increasing age, and tended to decrease with increasing area when detected by Kershaw's PI index. The spatial pattern characteristics of L. chinensis populations may be their responses to environmental factors.
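
    A generic box-counting dimension estimate for a two-dimensional point pattern (a stand-in for stem positions) proceeds by counting occupied boxes at several box sizes and fitting the slope of log N against log(1/eps); the clustered synthetic pattern and box sizes below are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in point pattern: clustered "stem" coordinates in a 256 x 256 m plot.
centres = rng.uniform(0, 256, size=(20, 2))
points = np.concatenate([c + rng.normal(0, 6, size=(40, 2)) for c in centres])
points = points[(points >= 0).all(axis=1) & (points < 256).all(axis=1)]

box_sizes = np.array([2, 4, 8, 16, 32, 64])
counts = []
for eps in box_sizes:
    idx = np.floor(points / eps).astype(int)
    counts.append(len({tuple(ij) for ij in idx}))    # number of occupied boxes N(eps)

# Box-counting dimension: slope of log N(eps) against log (1/eps).
slope, _ = np.polyfit(np.log(1.0 / box_sizes), np.log(counts), 1)
print(f"estimated box-counting dimension: {slope:.2f}")
```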

  6. Verification of land-atmosphere coupling in forecast models, reanalyses and land surface models using flux site observations.

    PubMed

    Dirmeyer, Paul A; Chen, Liang; Wu, Jiexia; Shin, Chul-Su; Huang, Bohua; Cash, Benjamin A; Bosilovich, Michael G; Mahanama, Sarith; Koster, Randal D; Santanello, Joseph A; Ek, Michael B; Balsamo, Gianpaolo; Dutra, Emanuel; Lawrence, D M

    2018-02-01

    We confront four model systems in three configurations (LSM, LSM+GCM, and reanalysis) with global flux tower observations to validate states, surface fluxes, and coupling indices between land and atmosphere. Models clearly under-represent the feedback of surface fluxes on boundary layer properties (the atmospheric leg of land-atmosphere coupling), and may over-represent the connection between soil moisture and surface fluxes (the terrestrial leg). Models generally under-represent spatial and temporal variability relative to observations, which is at least partially an artifact of the differences in spatial scale between model grid boxes and flux tower footprints. All models bias high in near-surface humidity and downward shortwave radiation, struggle to represent precipitation accurately, and show serious problems in reproducing surface albedos. These errors create challenges for models to partition surface energy properly and errors are traceable through the surface energy and water cycles. The spatial distribution of the amplitude and phase of annual cycles (first harmonic) are generally well reproduced, but the biases in means tend to reflect in these amplitudes. Interannual variability is also a challenge for models to reproduce. Our analysis illuminates targets for coupled land-atmosphere model development, as well as the value of long-term globally-distributed observational monitoring.

  7. The eGo grid model: An open source approach towards a model of German high and extra-high voltage power grids

    NASA Astrophysics Data System (ADS)

    Mueller, Ulf Philipp; Wienholt, Lukas; Kleinhans, David; Cussmann, Ilka; Bunke, Wolf-Dieter; Pleßmann, Guido; Wendiggensen, Jochen

    2018-02-01

    There are several power grid modelling approaches suitable for simulations in the field of power grid planning. The restrictive policies of grid operators, regulators and research institutes concerning their original data and models lead to an increased interest in open source approaches to grid models based on open data. By including all voltage levels between 60 kV (high voltage) and 380 kV (extra-high voltage), we dissolve the common distinction between transmission and distribution grid in energy system models and utilize a single, integrated model instead. An open data set, primarily for Germany, which can be used for non-linear, linear and linear-optimal power flow methods, was developed. This data set consists of an electrically parameterised grid topology as well as allocated generation and demand characteristics for present and future scenarios at high spatial and temporal resolution. The usability of the grid model was demonstrated by performing exemplary power flow optimizations. Based on a marginal-cost-driven power plant dispatch, subject to grid restrictions, congested power lines were identified. Continuous validation of the model is necessary in order to reliably model storage and grid expansion in ongoing research.

  8. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium to long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct order at the model identification stage and in finding the right parameter estimates. This paper presents the development of a Genetic Algorithm heuristic for solving the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecast results generated by the proposed model outperformed the single traditional Box-Jenkins model.
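
    A minimal sketch of the idea, under assumed settings rather than the paper's algorithm: evolve candidate ARIMA orders (p, d, q) with a tiny mutation-and-selection loop, scoring each candidate by the AIC of a statsmodels fit to a synthetic series.

```python
import warnings
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

warnings.filterwarnings("ignore")      # poor candidate orders trigger convergence warnings
rng = np.random.default_rng(5)

# Synthetic monthly series (trend + seasonality + noise) standing in for tourist arrivals.
t = np.arange(120)
y = 100 + 0.8 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 5, t.size)

def fitness(order):
    """AIC of an ARIMA(p, d, q) fit; failed fits get an infinite penalty."""
    try:
        return ARIMA(y, order=order).fit().aic
    except Exception:
        return np.inf

def mutate(order):
    """Nudge one of (p, d, q) up or down, staying within 0..3."""
    new = list(order)
    k = int(rng.integers(3))
    new[k] = int(np.clip(new[k] + rng.choice([-1, 1]), 0, 3))
    return tuple(new)

# Tiny (mu + lambda)-style evolutionary search over the model order.
population = [tuple(int(v) for v in rng.integers(0, 3, size=3)) for _ in range(6)]
for _ in range(5):
    offspring = [mutate(ind) for ind in population]
    population = sorted(set(population + offspring), key=fitness)[:6]

best = population[0]
print("best (p, d, q):", best, "with AIC", round(fitness(best), 1))
```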

  9. About the Need of Combining Power Market and Power Grid Model Results for Future Energy System Scenarios

    NASA Astrophysics Data System (ADS)

    Mende, Denis; Böttger, Diana; Löwer, Lothar; Becker, Holger; Akbulut, Alev; Stock, Sebastian

    2018-02-01

    The European power grid infrastructure faces various challenges due to the expansion of renewable energy sources (RES). To conduct investigations on interactions between power generation and the power grid, models for the power market as well as for the power grid are necessary. This paper describes the basic functionalities and working principles of both types of models as well as steps to couple power market results and the power grid model. The combination of these models is beneficial in terms of gaining realistic power flow scenarios in the grid model and of being able to pass back results of the power flow and restrictions to the market model. Focus is laid on the power grid model and possible application examples like algorithms in grid analysis, operation and dynamic equipment modelling.

  10. A Micro-Grid Simulator Tool (SGridSim) using Effective Node-to-Node Complex Impedance (EN2NCI) Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Udhay Ravishankar; Milos Manic

    2013-08-01

    This paper presents a micro-grid simulator tool useful for implementing and testing multi-agent controllers (SGridSim). As a common engineering practice, it is important to have a tool that simplifies the modeling of the salient features of a desired system. In electric micro-grids, these salient features are the voltage and power distributions within the micro-grid. Current simplified electric power grid simulator tools such as PowerWorld, PowerSim, Gridlab, etc., model only the power distribution features of a desired micro-grid. Other power grid simulators such as Simulink, Modelica, etc., use detailed modeling to accommodate the voltage distribution features. This paper presents the SGridSim micro-grid simulator tool that simplifies the modeling of both the voltage and power distribution features in a desired micro-grid. The SGridSim tool accomplishes this simplified modeling by using Effective Node-to-Node Complex Impedance (EN2NCI) models of components that typically make up a micro-grid. The term EN2NCI model means that the impedance-based components of a micro-grid are modeled as single impedances tied between their respective voltage nodes on the micro-grid. Hence the benefits of the presented SGridSim tool are: (1) simulation of a micro-grid is performed strictly in the complex domain; (2) faster simulation of a micro-grid by avoiding the simulation of detailed transients. An example micro-grid model was built using the SGridSim tool and tested to simulate both the voltage and power distribution features with a total absolute relative error of less than 6%.
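
    The EN2NCI notion of components as single complex impedances between voltage nodes can be illustrated with a plain nodal-admittance calculation: assemble Y from branch impedances, fix the slack-node voltage, and solve for the remaining node voltages entirely in the complex domain. The four-node topology, impedances, and injections below are made up for illustration and are not taken from SGridSim.

```python
import numpy as np

# Branches of a toy 4-node micro-grid: (from_node, to_node, complex impedance in ohms).
# Node 0 is the slack/utility connection; all values are illustrative only.
branches = [(0, 1, 0.10 + 0.25j),
            (1, 2, 0.15 + 0.30j),
            (1, 3, 0.12 + 0.28j),
            (2, 3, 0.20 + 0.40j)]
n = 4

# Assemble the nodal admittance matrix Y from the node-to-node impedances.
Y = np.zeros((n, n), dtype=complex)
for i, j, z in branches:
    y = 1.0 / z
    Y[i, i] += y
    Y[j, j] += y
    Y[i, j] -= y
    Y[j, i] -= y

# Complex current injections (A): generation positive, load negative (illustrative).
I = np.array([0.0, 0.0, -20.0 + 5.0j, -15.0 + 3.0j])

# Hold node 0 at the nominal voltage and solve the reduced system for the others.
V = np.zeros(n, dtype=complex)
V[0] = 230.0
rhs = I[1:] - Y[1:, 0] * V[0]
V[1:] = np.linalg.solve(Y[1:, 1:], rhs)

for k in range(n):
    print(f"node {k}: |V| = {abs(V[k]):7.2f} V, "
          f"angle = {np.degrees(np.angle(V[k])):6.2f} deg")
```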

  11. Opening Pandora's Box: The impact of open system modeling on interpretations of anoxia

    NASA Astrophysics Data System (ADS)

    Hotinski, Roberta M.; Kump, Lee R.; Najjar, Raymond G.

    2000-06-01

    The geologic record preserves evidence that vast regions of ancient oceans were once anoxic, with oxygen levels too low to sustain animal life. Because anoxic conditions have been postulated to foster deposition of petroleum source rocks and have been implicated as a kill mechanism in extinction events, the genesis of such anoxia has been an area of intense study. Most previous models of ocean oxygen cycling proposed, however, have either been qualitative or used closed-system approaches. We reexamine the question of anoxia in open-system box models in order to test the applicability of closed-system results over long timescales and find that open and closed-system modeling results may differ significantly on both short and long timescales. We also compare a scenario with basinwide diffuse upwelling (a three-box model) to a model with upwelling concentrated in the Southern Ocean (a four-box model). While a three-box modeling approach shows that only changes in high-latitude convective mixing rate and character of deepwater sources are likely to cause anoxia, four-box model experiments indicate that slowing of thermohaline circulation, a reduction in wind-driven upwelling, and changes in high-latitude export production may also cause dysoxia or anoxia in part of the deep ocean on long timescales. These results suggest that box models must capture the open-system and vertically stratified nature of the ocean to allow meaningful interpretations of long-lived episodes of anoxia.

  12. A method for deterministic statistical downscaling of daily precipitation at a monsoonal site in Eastern China

    NASA Astrophysics Data System (ADS)

    Liu, Yonghe; Feng, Jinming; Liu, Xiu; Zhao, Yadi

    2017-12-01

    Statistical downscaling (SD) is a method that acquires the local information required for hydrological impact assessment from large-scale atmospheric variables. Very few statistical, deterministic downscaling models for daily precipitation have been developed for local sites influenced by the East Asian monsoon. In this study, SD models were constructed by selecting the best predictors and using generalized linear models (GLMs) for Feixian, a site in the Yishu River Basin, Shandong Province. By calculating and mapping Spearman rank correlation coefficients between the gridded standardized values of five large-scale variables and daily observed precipitation, different cyclonic circulation patterns were found for monsoonal precipitation in summer (June-September) and winter (November-December and January-March); the values of the grid boxes with the highest absolute correlations with observed precipitation were selected as predictors. Data for predictors and predictands covered the period 1979-2015, and different calibration and validation periods were used when fitting and validating the models. The bootstrap method was also used to fit the GLM. These thorough validations indicated that the models were robust and not sensitive to different samples or different periods. Pearson's correlations between downscaled and observed precipitation (logarithmically transformed) on a daily scale reached 0.54-0.57 in summer and 0.56-0.61 in winter, and the Nash-Sutcliffe efficiency between downscaled and observed precipitation reached 0.1 in summer and 0.41 in winter. The downscaled precipitation partially reflected the exact variations in winter and the main trends in summer for total interannual precipitation. For the number of wet days, both the winter and summer models were able to reflect interannual variations. Other comparisons were also made in this study. These results demonstrate that, when downscaling, it is appropriate to combine correlation-based predictor selection across a spatial domain with GLM modeling.
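
    A common two-part GLM formulation for daily precipitation (one plausible reading of this kind of setup, not necessarily the authors' exact specification) combines a binomial GLM for wet-day occurrence with a Gamma GLM (log link) for wet-day amounts, both conditioned on a large-scale predictor. The statsmodels sketch below uses synthetic data and an invented predictor.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Synthetic large-scale predictor (e.g. a standardized circulation index) and daily rain.
n = 3000
circ = rng.standard_normal(n)
wet = rng.random(n) < 1 / (1 + np.exp(-(-0.5 + 0.8 * circ)))        # occurrence
amount = np.where(wet, rng.gamma(shape=2.0, scale=np.exp(0.5 + 0.4 * circ) / 2.0), 0.0)

X = sm.add_constant(circ)

# Part 1: wet-day occurrence with a binomial GLM (logit link).
occ = sm.GLM(wet.astype(float), X, family=sm.families.Binomial()).fit()

# Part 2: wet-day amounts with a Gamma GLM and log link.
amt = sm.GLM(amount[wet], X[wet],
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print("occurrence coefficients:", np.round(occ.params, 2))
print("amount coefficients:    ", np.round(amt.params, 2))

# Downscaled expectation for a new predictor value: wet probability x conditional mean.
x_new = sm.add_constant(np.array([1.5]), has_constant="add")
expected = occ.predict(x_new) * amt.predict(x_new)
print("expected precipitation for circ = 1.5:", np.round(expected, 2))
```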

  13. Conventional box model training improves laparoscopic skills during salpingectomy on LapSim: a randomized trial.

    PubMed

    Akdemir, Ali; Ergenoğlu, Ahmet Mete; Yeniel, Ahmet Özgür; Sendağ, Fatih

    2013-01-01

    Box model trainers have been used for many years to facilitate the improvement of laparoscopic skills. However, there are limited data available on box trainers and their impact on skill acquisition, assessed by virtual reality systems. Twenty-two Postgraduate Year 1 gynecology residents with no laparoscopic experience were randomly divided into one group that received structured box model training and a control group. All residents performed a salpingectomy on LapSim before and after the training. Performances before and after the training were assessed using LapSim and were recorded using objective parameters, registered by a computer system (time, damage, and economy of motion scores). There were initially no differences between the two groups. The box trainer group showed significantly greater improvement in time (p=0.01) and economy of motion scores (p=0.001) compared with the control group post-training. The present study confirmed the positive effect of low cost box model training on laparoscopic skill acquisition as assessed using LapSim. Novice surgeons should obtain practice on box trainers and teaching centers should make efforts to establish training laboratories.

  14. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. Contrary to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite difference and require grid orthogonality. The proposed grid generation algorithms can also be applied to the grid generation for regional ocean modeling where complex land-sea distribution is present.

  15. Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation

    NASA Astrophysics Data System (ADS)

    Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.

    2017-12-01

    When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR times series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data were found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
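
    The decorrelation length used above (the e-folding distance of R²) can be estimated from paired station records with a short fit; a minimal sketch on synthetic inputs, not the authors' processing chain:

      import numpy as np
      from scipy.optimize import curve_fit

      # Hypothetical inputs: pairwise distances (km) between station pairs and the squared
      # correlation of their deseasonalized monthly SSR anomaly time series.
      rng = np.random.default_rng(1)
      distance_km = rng.uniform(10.0, 800.0, 300)
      r2 = np.exp(-distance_km / 400.0) + rng.normal(0.0, 0.03, 300)

      # Fit R^2(d) = a * exp(-d / L); L is the e-folding (decorrelation) length.
      def decay(d, a, L):
          return a * np.exp(-d / L)

      (a_hat, L_hat), _ = curve_fit(decay, distance_km, r2, p0=(1.0, 300.0))
      print(f"decorrelation length ~ {L_hat:.0f} km")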

  16. Comparative hybrid and digital simulation studies of the behaviour of a wind generator equipped with a static frequency converter

    NASA Astrophysics Data System (ADS)

    Dube, B.; Lefebvre, S.; Perocheau, A.; Nakra, H. L.

    1988-01-01

    This paper describes the comparative results obtained from digital and hybrid simulation studies on a variable speed wind generator interconnected to the utility grid. The wind generator is a vertical-axis Darrieus type coupled to a synchronous machine by a gear-box; the synchronous machine is connected to the AC utility grid through a static frequency converter. Digital simulation results have been obtained using CSMP software; these results are compared with those obtained from a real-time hybrid simulator that in turn uses a part of the IREQ HVDC simulator. The agreement between hybrid and digital simulation results is generally good. The results demonstrate that the digital simulation reproduces the dynamic behavior of the system in a satisfactory manner and thus constitutes a valid tool for the design of the control systems of the wind generator.

  17. Rainfall statistics, stationarity, and climate change.

    PubMed

    Sun, Fubao; Roderick, Michael L; Farquhar, Graham D

    2018-03-06

    There is a growing research interest in the detection of changes in hydrologic and climatic time series. Stationarity can be assessed using the autocorrelation function, but this is not yet common practice in hydrology and climate. Here, we use a global land-based gridded annual precipitation (hereafter P ) database (1940-2009) and find that the lag 1 autocorrelation coefficient is statistically significant at around 14% of the global land surface, implying nonstationary behavior (90% confidence). In contrast, around 76% of the global land surface shows little or no change, implying stationary behavior. We use these results to assess change in the observed P over the most recent decade of the database. We find that the changes for most (84%) grid boxes are within the plausible bounds of no significant change at the 90% CI. The results emphasize the importance of adequately accounting for natural variability when assessing change. Copyright © 2018 the Author(s). Published by PNAS.
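
    The stationarity screening described above can be reproduced for a single grid box in a few lines: compute the lag-1 autocorrelation of annual P and compare it with the approximate white-noise bound at 90% confidence (a sketch with a made-up series and a simplified significance test, not the paper's gridded analysis):

      import numpy as np

      rng = np.random.default_rng(2)
      p_annual = rng.gamma(shape=4.0, scale=200.0, size=70)   # 70 years of annual P (mm)

      anom = p_annual - p_annual.mean()
      r1 = np.sum(anom[:-1] * anom[1:]) / np.sum(anom ** 2)   # lag-1 autocorrelation

      # Approximate 90% significance bound under a white-noise null: |r1| > 1.645 / sqrt(n).
      bound = 1.645 / np.sqrt(p_annual.size)
      verdict = "nonstationary behavior suggested" if abs(r1) > bound else "consistent with stationarity"
      print(f"r1 = {r1:.2f}, 90% bound = +/-{bound:.2f} -> {verdict}")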

  18. Rainfall statistics, stationarity, and climate change

    NASA Astrophysics Data System (ADS)

    Sun, Fubao; Roderick, Michael L.; Farquhar, Graham D.

    2018-03-01

    There is a growing research interest in the detection of changes in hydrologic and climatic time series. Stationarity can be assessed using the autocorrelation function, but this is not yet common practice in hydrology and climate. Here, we use a global land-based gridded annual precipitation (hereafter P) database (1940–2009) and find that the lag 1 autocorrelation coefficient is statistically significant at around 14% of the global land surface, implying nonstationary behavior (90% confidence). In contrast, around 76% of the global land surface shows little or no change, implying stationary behavior. We use these results to assess change in the observed P over the most recent decade of the database. We find that the changes for most (84%) grid boxes are within the plausible bounds of no significant change at the 90% CI. The results emphasize the importance of adequately accounting for natural variability when assessing change.

  19. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations.
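
    As a much simpler illustration of differencing on nonuniform grids (the paper derives implicit operators of arbitrary stencil width; the explicit 3-point second-derivative stencil below is only the standard textbook counterpart):

      import numpy as np

      def d2_nonuniform(f, x):
          """Explicit 3-point second derivative on a nonuniform 1D grid (interior points)."""
          h1 = x[1:-1] - x[:-2]          # spacing to the left neighbour
          h2 = x[2:] - x[1:-1]           # spacing to the right neighbour
          return 2.0 * (h2 * f[:-2] - (h1 + h2) * f[1:-1] + h1 * f[2:]) / (h1 * h2 * (h1 + h2))

      # Stretched grid: fine sampling near x = 0, coarse sampling further out.
      x = np.linspace(0.0, 1.0, 200) ** 2 * 4.0 * np.pi
      f = np.sin(x)
      err = np.max(np.abs(d2_nonuniform(f, x) + np.sin(x[1:-1])))   # exact d2 is -sin(x)
      print(f"max error of d2 sin(x) on the stretched grid: {err:.2e}")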

  20. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

    Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions were published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The vegetation compartments and the local scale, which added undesirably high model complexity, were removed to improve the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Comparison of postbuckling model and finite element model with compression strength of corrugated boxes

    Treesearch

    Thomas J. Urbanik; Edmond P. Saliklis

    2002-01-01

    Conventional compression strength formulas for corrugated fiberboard boxes are limited to geometry and material that produce an elastic postbuckling failure. Inelastic postbuckling can occur in squatty boxes and trays, but a mechanistic rationale for unifying observed strength data is lacking. This study employs a finite element model, instead of actual experiments, to...

  2. An adaptive mesh refinement-multiphase lattice Boltzmann flux solver for simulation of complex binary fluid flows

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Wang, Y.; Shu, C.

    2017-12-01

    This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.
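
    A generic sketch of the kind of per-block refinement decision used in block-structured AMR (a gradient-based indicator flags blocks that contain the phase interface; this is not the specific indicator or block-spawning logic of the AMR-MLBFS paper):

      import numpy as np

      def needs_refinement(phi_block, threshold=0.1):
          """Flag a block for refinement if the order parameter (e.g. a level-set field)
          varies strongly inside it, i.e. the interface crosses the block."""
          gy, gx = np.gradient(phi_block)
          return np.max(np.hypot(gx, gy)) > threshold

      # Toy field: a circular interface on a coarse 64x64 grid split into 8x8 blocks.
      n, nb = 64, 8
      y, x = np.mgrid[0:n, 0:n] / (n - 1.0)
      phi = np.tanh((np.hypot(x - 0.5, y - 0.5) - 0.3) / 0.05)   # smoothed level set

      flags = np.zeros((n // nb, n // nb), dtype=bool)
      for bi in range(n // nb):
          for bj in range(n // nb):
              block = phi[bi * nb:(bi + 1) * nb, bj * nb:(bj + 1) * nb]
              flags[bi, bj] = needs_refinement(block)

      print(f"{flags.sum()} of {flags.size} blocks flagged for refinement (near the interface)")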

  3. Software Surface Modeling and Grid Generation Steering Committee

    NASA Technical Reports Server (NTRS)

    Smith, Robert E. (Editor)

    1992-01-01

    It is a NASA objective to promote improvements in the capability and efficiency of computational fluid dynamics. Grid generation, the creation of a discrete representation of the solution domain, is an essential part of computational fluid dynamics. However, grid generation about complex boundaries requires sophisticated surface-model descriptions of the boundaries. The surface modeling and the associated computation of surface grids consume an extremely large percentage of the total time required for volume grid generation. Efficient and user friendly software systems for surface modeling and grid generation are critical for computational fluid dynamics to reach its potential. The papers presented here represent the state-of-the-art in software systems for surface modeling and grid generation. Several papers describe improved techniques for grid generation.

  4. The NASA POWER SSE: Deriving the Direct Normal Counterpart from the CERES SYN1deg Hourly Global Horizontal Irradiance during Early 2000 to Near Present

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Stackhouse, P. W., Jr.; Westberg, D. J.

    2017-12-01

    The NASA Prediction of Worldwide Energy Resource (POWER) Surface meteorology and Solar Energy (SSE) provides solar direct normal irradiance (DNI) data as well as a variety of other solar parameters. The currently available DNIs are monthly means on a quasi-equal-area grid system with grid boxes roughly equivalent to 1 degree longitude by 1 degree latitude around the equator from July 1983 to June 2005, and the data were derived from the GEWEX Surface Radiation Budget (SRB) monthly mean global horizontal irradiance (GHI, Release 3) and regression analysis of the Baseline Surface Radiation Network (BSRN) data. To improve the quality of the DNI data and push the temporal coverage of the data to near present, we have applied a modified version of the DIRINDEX global-to-beam model to the GEWEX SRB (Release 3) all-sky and clear-sky 3-hourly GHI data and derived their DNI counterparts for the period from July 1983 to December 2007. The results have been validated against the BSRN data. To further expand the data in time to near present, we are now applying the DIRINDEX model to the Clouds and the Earth's Radiant Energy System (CERES) data. The CERES SYN1deg (Edition 4A) offers hourly all-sky and clear-sky GHIs on a 1 degree longitude by 1 degree latitude grid system from March 2000 to October 2016 as of this writing. Comparisons of the GHIs with their BSRN counterparts show remarkable agreements. Besides the GHIs, the inputs will also include the atmospheric water vapor and surface pressure from the Modern Era Retrospective-Analysis for Research and Applications (MERRA) and the aerosol optical depth from the Max-Planck Institute Climatology (MAC-v1). Based on the performance of the DIRINDEX model with the GEWEX SRB GHI data, we expect at least equally good or even better results. In this paper, we will show the derived hourly, daily, and monthly mean DNIs from the CERES SYN1deg hourly GHIs from March 2000 to October 2016 and how they compare with the BSRN data.

  5. The Community Intercomparison Suite (CIS)

    NASA Astrophysics Data System (ADS)

    Watson-Parris, Duncan; Schutgens, Nick; Cook, Nick; Kipling, Zak; Kershaw, Phil; Gryspeerdt, Ed; Lawrence, Bryan; Stier, Philip

    2017-04-01

    Earth observations (both remote and in-situ) create vast amounts of data providing invaluable constraints for the climate science community. Efficient exploitation of these complex and highly heterogeneous datasets has been limited however by the lack of suitable software tools, particularly for comparison of gridded and ungridded data, thus reducing scientific productivity. CIS (http://cistools.net) is an open-source, command line tool and Python library which allows straightforward quantitative analysis, intercomparison and visualisation of remote sensing, in-situ and model data. CIS can read gridded and ungridded remote sensing, in-situ and model data from many sources 'out-of-the-box', such as the ESA Aerosol CCI and Cloud CCI products, MODIS, CloudSat and AERONET. Perhaps most importantly, however, CIS also employs a modular plugin architecture to allow for the reading of limitless different data types. Users are able to write their own plugins for reading the data sources which they are familiar with, and share them within the community, allowing all to benefit from their expertise. To enable the intercomparison of this data the CIS provides a number of operations including: the aggregation of ungridded and gridded datasets to coarser representations using a number of different built in averaging kernels; the subsetting of data to reduce its extent or dimensionality; the co-location of two distinct datasets onto a single set of co-ordinates; the visualisation of the input or output data through a number of different plots and graphs; the evaluation of arbitrary mathematical expressions against any number of datasets; and a number of other supporting functions such as a statistical comparison of two co-located datasets. These operations can be performed efficiently on local machines or large computing clusters, and CIS is already available on the JASMIN computing facility. A case-study using the GASSP collection of in-situ aerosol observations will demonstrate the power of using CIS to perform model evaluations. The use of an open-source, community developed tool in this way opens up a huge amount of data which would previously have been inaccessible to many users, while also providing replicable, repeatable analysis which scientists and policy-makers alike can trust and understand.

  6. Variation in aerosol nucleation and growth in coal-fired power plant plumes due to background aerosol, meteorology and emissions: sensitivity analysis and parameterization.

    NASA Astrophysics Data System (ADS)

    Stevens, R. G.; Lonsdale, C. L.; Brock, C. A.; Reed, M. K.; Crawford, J. H.; Holloway, J. S.; Ryerson, T. B.; Huey, L. G.; Nowak, J. B.; Pierce, J. R.

    2012-04-01

    New-particle formation in the plumes of coal-fired power plants and other anthropogenic sulphur sources may be an important source of particles in the atmosphere. It remains unclear, however, how best to reproduce this formation in global and regional aerosol models with grid-box lengths that are 10s of kilometres and larger. The predictive power of these models is thus limited by the resultant uncertainties in aerosol size distributions. In this presentation, we focus on sub-grid sulphate aerosol processes within coal-fired power plant plumes: the sub-grid oxidation of SO2 with condensation of H2SO4 onto newly-formed and pre-existing particles. Based on the results of the System for Atmospheric Modelling (SAM), a Large-Eddy Simulation/Cloud-Resolving Model (LES/CRM) with online TwO Moment Aerosol Sectional (TOMAS) microphysics, we develop a computationally efficient, but physically based, parameterization that predicts the characteristics of aerosol formed within coal-fired power plant plumes based on parameters commonly available in global and regional-scale models. Given large-scale mean meteorological parameters, emissions from the power plant, mean background condensation sink, and the desired distance from the source, the parameterization will predict the fraction of the emitted SO2 that is oxidized to H2SO4, the fraction of that H2SO4 that forms new particles instead of condensing onto preexisting particles, the median diameter of the newly-formed particles, and the number of newly-formed particles per kilogram SO2 emitted. We perform a sensitivity analysis of these characteristics of the aerosol size distribution to the meteorological parameters, the condensation sink, and the emissions. In general, new-particle formation and growth is greatly reduced during polluted conditions due to the large preexisting aerosol surface area for H2SO4 condensation and particle coagulation. The new-particle formation and growth rates are also a strong function of the amount of sunlight and NOx since both control OH concentrations. Decreases in NOx emissions without simultaneous decreases in SO2 emissions increase new-particle formation and growth due to increased oxidation of SO2. The parameterization we describe here should allow for more accurate predictions of aerosol size distributions and a greater confidence in the effects of aerosols in climate and health studies.

  7. Evaluation of numerical models by FerryBox and Fixed Platform in-situ data in the southern North Sea

    NASA Astrophysics Data System (ADS)

    Haller, M.; Janssen, F.; Siddorn, J.; Petersen, W.; Dick, S.

    2015-02-01

    FerryBoxes installed on ships of opportunity (SoO) provide high-frequency surface biogeochemical measurements along selected tracks on a regular basis. Within the European FerryBox Community, several FerryBoxes are operated by different institutions. Here we present a comparison of model simulations applied to the North Sea with FerryBox temperature and salinity data from a transect along the southern North Sea and a more detailed analysis at three different positions located off the English East coast, at the Oyster Ground and in the German Bight. In addition to the FerryBox data, data from a Fixed Platform of the MARNET network are also used. Two operational hydrodynamic models have been evaluated for different time periods: results of BSHcmod v4 are analysed for 2009-2012, while simulations of FOAM AMM7 NEMO have been available from the MyOcean data base for 2011 and 2012. The simulation of water temperature is satisfactory; however, limitations of the models exist, especially near the coast in the southern North Sea, where both models underestimate salinity. Statistical errors differ between the models and the measured parameters: for temperature, the root mean square error (RMSE) is 0.92 K for BSHcmod v4 but only 0.44 K for AMM7. For salinity, BSHcmod is slightly better than AMM7 (0.98 and 1.1 psu, respectively). The study results reveal weaknesses of both models in terms of variability, absolute levels and limited spatial resolution. In coastal areas, where the simulation of the transition zone between the coasts and the open ocean is still a demanding task for operational modelling, FerryBox data, combined with other observations with differing temporal and spatial scales, serve as an invaluable tool for model evaluation and optimization. The optimization of hydrodynamical models with high-frequency regional datasets, like the FerryBox data, is beneficial for their subsequent integration in ecosystem modelling.

  8. Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014

    NASA Astrophysics Data System (ADS)

    Junod, R.; Christy, J. R.

    2016-12-01

    Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively unused parameter in understanding the current state of climate, but is useful as an independent temperature metric and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (i.e. HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early-twentieth century through to the present era. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT (20°S-70°N). This time series was homogenized to correct for the many biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using the kriging interpolation technique. This study presents results that quantify the variability and trends and compares them with trends from other related datasets, including HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).

  9. Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models

    NASA Astrophysics Data System (ADS)

    Xu, Shiming

    2015-04-01

    We propose new grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when a complex land-ocean distribution is present.

  10. A Simulation of an Energy-Efficient Home.

    ERIC Educational Resources Information Center

    McLeod, Richard J.; And Others

    1981-01-01

    A shoe box is converted into a model home to demonstrate the energy efficiency of various insulation measures. Included are instructions for constructing the model home from a shoe box, insulating the shoe box, several activities involving different insulation measures, extensions of the experiment, and post-lab discussion topics. (DS)

  11. Allometric Scaling and Resource Limitations Model of Total Aboveground Biomass in Forest Stands: Site-scale Test of Model

    NASA Astrophysics Data System (ADS)

    CHOI, S.; Shi, Y.; Ni, X.; Simard, M.; Myneni, R. B.

    2013-12-01

    Sparseness in in-situ observations has precluded the spatially explicit and accurate mapping of forest biomass. The need for large-scale maps has prompted various approaches that link forest biomass to geospatial predictors such as climate, forest type, soil property, and topography. Despite the improved modeling techniques (e.g., machine learning and spatial statistics), a common limitation is that biophysical mechanisms governing tree growth are neglected in these black-box type models. The absence of a priori knowledge may lead to false interpretation of modeled results or unexplainable shifts in outputs due to inconsistent training samples or study sites. Here, we present a gray-box approach combining known biophysical processes and geospatial predictors through parametric optimizations (inversion of reference measures). Total aboveground biomass in forest stands is estimated by incorporating the Forest Inventory and Analysis (FIA) and Parameter-elevation Regressions on Independent Slopes Model (PRISM). Two main premises of this research are: (a) The Allometric Scaling and Resource Limitations (ASRL) theory can provide a relationship between tree geometry and local resource availability constrained by environmental conditions; and (b) The zeroth order theory (size-frequency distribution) can expand individual tree allometry into total aboveground biomass at the forest stand level. In addition to the FIA estimates, two reference maps from the National Biomass and Carbon Dataset (NBCD) and U.S. Forest Service (USFS) were produced to evaluate the model. This research focuses on a site-scale test of the biomass model to explore the robustness of predictors, and to potentially improve models using additional geospatial predictors such as climatic variables, vegetation indices, soil properties, and lidar-/radar-derived altimetry products (or existing forest canopy height maps). As a result, the optimized ASRL estimates satisfactorily resemble the FIA aboveground biomass in terms of data distribution, overall agreement, and spatial similarity across scales. Uncertainties are quantified (ranging from 0.2 to 0.4) by taking into account the spatial mismatch (FIA plot vs. PRISM grid), heterogeneity (species composition), and an example bias scenario (= 0.2).

  12. A Review of the Ginzburg-Syrovatskii's Galactic Cosmic-Ray Propagation Model and its Leaky-Box Limit

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.

    2012-01-01

    Phenomenological models of galactic cosmic-ray propagation are based on a diffusion equation known as the Ginzburg-Syrovatskii's equation, or variants (or limits) of this equation. Its one-dimensional limit in a homogeneous volume, known as the leaky-box limit or model, is sketched here. The justification, utility, limitations, and a typical numerical implementation of the leaky-box model are examined in some detail.
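
    For orientation, the steady-state leaky-box balance is commonly written in a form like the following (a generic textbook statement, not a quotation from the review); here N_i is the equilibrium density of species i, Q_i its source, tau_esc the escape time from the confinement volume, tau_i the destruction time, and the sum collects spallation feed-down from heavier species j:

      \frac{N_i}{\tau_{\mathrm{esc}}} + \frac{N_i}{\tau_i} = Q_i + \sum_{j>i} \frac{N_j}{\tau_{j \to i}}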

  13. Chemistry of Stream Sediments and Surface Waters in New England

    USGS Publications Warehouse

    Robinson, Gilpin R.; Kapo, Katherine E.; Grossman, Jeffrey N.

    2004-01-01

    Summary -- This online publication portrays regional data for pH, alkalinity, and specific conductance for stream waters and a multi-element geochemical dataset for stream sediments collected in the New England states of Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont. A series of interpolation grid maps portray the chemistry of the stream waters and sediments in relation to bedrock geology, lithology, drainage basins, and urban areas. A series of box plots portray the statistical variation of the chemical data grouped by lithology and other features.

  14. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    NASA Astrophysics Data System (ADS)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing the single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid-size, O(N), while the numerical cost is estimated as O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid-size of a unit cell, n = N / L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N³ log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of the canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks for the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, a scalar product with a function, integration or differentiation, which can be performed easily in tensor arithmetic on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
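
    The assembling trick can be seen on a toy case: for a separable (rank-1) generating function such as a Gaussian, the sum over all lattice translates factorizes into one assembled 1D vector per direction, so storage stays O(N) instead of O(N³). A NumPy sketch, with a single Gaussian standing in for the canonical approximation of the Newton kernel used in the paper, and periodic shifts (np.roll) standing in for the lattice translations:

      import numpy as np

      n, L = 32, 4                 # points per unit cell, lattice size per direction
      N = n * L                    # 1D grid size of the whole box
      x = np.arange(N, dtype=float)
      u = np.exp(-((x - N / 2.0) ** 2) / (2.0 * 8.0 ** 2))   # 1D factor of a Gaussian kernel

      # Assemble one 1D vector per direction: the sum of the factor shifted to every lattice site.
      assembled = sum(np.roll(u, (k - L // 2) * n) for k in range(L))

      # The lattice sum of the separable 3D kernel is the outer product of the assembled
      # vectors; it can be evaluated point-wise without ever forming the full N^3 tensor.
      def lattice_potential(i, j, k):
          return assembled[i] * assembled[j] * assembled[k]

      # Brute-force check at one grid point against explicit summation over all lattice sites.
      i, j, k = 10, 20, 5
      brute = sum(
          np.roll(u, (a - L // 2) * n)[i] * np.roll(u, (b - L // 2) * n)[j] * np.roll(u, (c - L // 2) * n)[k]
          for a in range(L) for b in range(L) for c in range(L)
      )
      print(np.isclose(lattice_potential(i, j, k), brute))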

  15. An objective decision model of power grid environmental protection based on environmental influence index and energy-saving and emission-reducing index

    NASA Astrophysics Data System (ADS)

    Feng, Jun-shu; Jin, Yan-ming; Hao, Wei-hua

    2017-01-01

    Based on modelling the environmental influence index of power transmission and transformation projects and the energy-saving and emission-reduction index of the source-grid-load power system, this paper establishes an objective decision model for power grid environmental protection, under the constraints that the power grid environmental protection objectives be legal and economical, and considering both the positive and negative influences of the grid on the environment over the whole grid life cycle. This model can be used to guide the planning of power grid environmental protection. A numerical simulation of the objective decision model for Jiangsu province's power grid environmental protection was carried out, and the results show that, as investment increases, the goal of maximum energy-saving and emission-reduction benefit is reached first, followed by the goal of minimum environmental influence.

  16. Fast Geostatistical Inversion using Randomized Matrix Decompositions and Sketchings for Heterogeneous Aquifer Characterization

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Le, E. B.; Vesselinov, V. V.

    2015-12-01

    We present a fast, scalable, and highly-implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast-Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, but provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
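
    A generic example of the randomized matrix sketching that underpins such solvers (the basic randomized range-finder plus SVD of Halko et al., written in NumPy; this is not the MADS/Julia implementation referenced above):

      import numpy as np

      def randomized_svd(A, rank, n_oversample=10, n_iter=2, seed=0):
          """Approximate truncated SVD of A using a random Gaussian sketch."""
          rng = np.random.default_rng(seed)
          omega = rng.standard_normal((A.shape[1], rank + n_oversample))
          Y = A @ omega
          for _ in range(n_iter):                 # power iterations sharpen the range estimate
              Y = A @ (A.T @ Y)
          Q, _ = np.linalg.qr(Y)                  # orthonormal basis for the sketched range
          B = Q.T @ A                             # small projected matrix
          Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
          return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

      # Low-rank test matrix (a stand-in for a smooth covariance-like operator).
      rng = np.random.default_rng(1)
      A = rng.standard_normal((2000, 15)) @ rng.standard_normal((15, 1500))
      U, s, Vt = randomized_svd(A, rank=15)
      print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))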

  17. Cryptonite: A Secure and Performant Data Repository on Public Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumbhare, Alok; Simmhan, Yogesh; Prasanna, Viktor

    2012-06-29

    Cloud storage has become immensely popular for maintaining synchronized copies of files and for sharing documents with collaborators. However, there is heightened concern about the security and privacy of Cloud-hosted data due to the shared infrastructure model and an implicit trust in the service providers. Emerging needs of secure data storage and sharing for domains like Smart Power Grids, which deal with sensitive consumer data, require the persistence and availability of Cloud storage but with client-controlled security and encryption, low key management overhead, and minimal performance costs. Cryptonite is a secure Cloud storage repository that addresses these requirements using a StrongBox model for shared key management. We describe the Cryptonite service and desktop client, discuss performance optimizations, and provide an empirical analysis of the improvements. Our experiments show that Cryptonite clients achieve a 40% improvement in file upload bandwidth over plaintext storage using the Azure Storage Client API despite the added security benefits, while our file download performance is 5 times faster than the baseline for files greater than 100MB.

  18. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  19. Evaluation of grid generation technologies from an applied perspective

    NASA Technical Reports Server (NTRS)

    Hufford, Gary S.; Harrand, Vincent J.; Patel, Bhavin C.; Mitchell, Curtis R.

    1995-01-01

    An analysis of the grid generation process from the point of view of an applied CFD engineer is given. Issues addressed include geometric modeling, structured grid generation, unstructured grid generation, hybrid grid generation and use of virtual parts libraries in large parametric analysis projects. The analysis is geared towards comparing the effective turn around time for specific grid generation and CFD projects. The conclusion was made that a single grid generation methodology is not universally suited for all CFD applications due to both limitations in grid generation and flow solver technology. A new geometric modeling and grid generation tool, CFD-GEOM, is introduced to effectively integrate the geometric modeling process to the various grid generation methodologies including structured, unstructured, and hybrid procedures. The full integration of the geometric modeling and grid generation allows implementation of extremely efficient updating procedures, a necessary requirement for large parametric analysis projects. The concept of using virtual parts libraries in conjunction with hybrid grids for large parametric analysis projects is also introduced to improve the efficiency of the applied CFD engineer.

  20. Surface reflectance drives nest box temperature profiles and thermal suitability for target wildlife.

    PubMed

    Griffiths, Stephen R; Rowland, Jessica A; Briscoe, Natalie J; Lentini, Pia E; Handasyde, Kathrine A; Lumsden, Linda F; Robert, Kylie A

    2017-01-01

    Thermal properties of tree hollows play a major role in survival and reproduction of hollow-dependent fauna. Artificial hollows (nest boxes) are increasingly being used to supplement the loss of natural hollows; however, the factors that drive nest box thermal profiles have received surprisingly little attention. We investigated how differences in surface reflectance influenced temperature profiles of nest boxes painted three different colors (dark-green, light-green, and white: total solar reflectance 5.9%, 64.4%, and 90.3% respectively) using boxes designed for three groups of mammals: insectivorous bats, marsupial gliders and brushtail possums. Across the three different box designs, dark-green (low reflectance) boxes experienced the highest average and maximum daytime temperatures, had the greatest magnitude of variation in daytime temperatures within the box, and were consistently substantially warmer than light-green boxes (medium reflectance), white boxes (high reflectance), and ambient air temperatures. Results from biophysical model simulations demonstrated that variation in diurnal temperature profiles generated by painting boxes either high or low reflectance colors could have significant ecophysiological consequences for animals occupying boxes, with animals in dark-green boxes at high risk of acute heat-stress and dehydration during extreme heat events. Conversely in cold weather, our modelling indicated that there are higher cumulative energy costs for mammals, particularly smaller animals, occupying light-green boxes. Given their widespread use as a conservation tool, we suggest that before boxes are installed, consideration should be given to the effect of color on nest box temperature profiles, and the resultant thermal suitability of boxes for wildlife, particularly during extremes in weather. Managers of nest box programs should consider using several different colors and installing boxes across a range of both orientations and shade profiles (i.e., levels of canopy cover), to ensure target animals have access to artificial hollows with a broad range of thermal profiles, and can therefore choose boxes with optimal thermal conditions across different seasons.

  1. Surface reflectance drives nest box temperature profiles and thermal suitability for target wildlife

    PubMed Central

    Rowland, Jessica A.; Briscoe, Natalie J.; Lentini, Pia E.; Handasyde, Kathrine A.; Lumsden, Linda F.; Robert, Kylie A.

    2017-01-01

    Thermal properties of tree hollows play a major role in survival and reproduction of hollow-dependent fauna. Artificial hollows (nest boxes) are increasingly being used to supplement the loss of natural hollows; however, the factors that drive nest box thermal profiles have received surprisingly little attention. We investigated how differences in surface reflectance influenced temperature profiles of nest boxes painted three different colors (dark-green, light-green, and white: total solar reflectance 5.9%, 64.4%, and 90.3% respectively) using boxes designed for three groups of mammals: insectivorous bats, marsupial gliders and brushtail possums. Across the three different box designs, dark-green (low reflectance) boxes experienced the highest average and maximum daytime temperatures, had the greatest magnitude of variation in daytime temperatures within the box, and were consistently substantially warmer than light-green boxes (medium reflectance), white boxes (high reflectance), and ambient air temperatures. Results from biophysical model simulations demonstrated that variation in diurnal temperature profiles generated by painting boxes either high or low reflectance colors could have significant ecophysiological consequences for animals occupying boxes, with animals in dark-green boxes at high risk of acute heat-stress and dehydration during extreme heat events. Conversely in cold weather, our modelling indicated that there are higher cumulative energy costs for mammals, particularly smaller animals, occupying light-green boxes. Given their widespread use as a conservation tool, we suggest that before boxes are installed, consideration should be given to the effect of color on nest box temperature profiles, and the resultant thermal suitability of boxes for wildlife, particularly during extremes in weather. Managers of nest box programs should consider using several different colors and installing boxes across a range of both orientations and shade profiles (i.e., levels of canopy cover), to ensure target animals have access to artificial hollows with a broad range of thermal profiles, and can therefore choose boxes with optimal thermal conditions across different seasons. PMID:28472147

  2. Determination of sample size for higher volatile data using new framework of Box-Jenkins model with GARCH: A case study on gold price

    NASA Astrophysics Data System (ADS)

    Roslindar Yaziz, Siti; Zakaria, Roslinazairimah; Hura Ahmad, Maizah

    2017-09-01

    The Box-Jenkins - GARCH model has been shown to be a promising tool for forecasting highly volatile time series. In this study, a framework for determining the optimal sample size using the Box-Jenkins model with GARCH is proposed for practical application in analysing and forecasting highly volatile data. The proposed framework is applied to the daily world gold price series from 1971 to 2013. The data are divided into 12 different sample sizes (from 30 to 10200 observations). Each sample is tested using different combinations of the hybrid Box-Jenkins - GARCH model. Our study shows that the optimal sample size for forecasting the gold price with the hybrid model framework is 1250 observations (a 5-year sample). Hence, the empirical results of the model selection criteria and 1-step-ahead forecasting evaluations suggest that the most recent 12.25% (the 5-year sample) of the 10200 observations is sufficient for the Box-Jenkins - GARCH model, with forecasting performance similar to that obtained using the full 41-year record.
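
    A minimal sketch of the kind of Box-Jenkins plus GARCH fit and hold-out evaluation involved in such a sample-size experiment, assuming the statsmodels and arch packages are available; the synthetic series, the ARIMA/GARCH orders, and the candidate sample sizes are illustrative, not the paper's specification:

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from arch import arch_model

      rng = np.random.default_rng(3)
      log_price = np.cumsum(rng.normal(0.0002, 0.01, 2000)) + 6.0   # stand-in for the log gold price

      for n in (250, 1250, 1750):                  # candidate sample sizes (observations)
          train, test = log_price[-(n + 50):-50], log_price[-50:]
          # Box-Jenkins part: ARIMA on the log-price level.
          arima = ARIMA(train, order=(1, 1, 1)).fit()
          # GARCH(1,1) on the (rescaled) ARIMA residuals to model the conditional variance.
          garch = arch_model(100 * arima.resid, mean="Zero", vol="GARCH", p=1, q=1).fit(disp="off")
          persistence = garch.params.get("alpha[1]", np.nan) + garch.params.get("beta[1]", np.nan)
          # Out-of-sample forecasts over the hold-out period (a rolling 1-step scheme
          # would refit at every step; omitted here for brevity).
          fc = np.asarray(arima.forecast(steps=50))
          rmse = np.sqrt(np.mean((fc - test) ** 2))
          print(f"n = {n:5d}: hold-out RMSE = {rmse:.4f}, GARCH persistence = {persistence:.2f}")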

  3. A photosynthesis-based two-leaf canopy stomatal ...

    EPA Pesticide Factsheets

    A coupled photosynthesis-stomatal conductance model with single-layer sunlit and shaded leaf canopy scaling is implemented and evaluated in a diagnostic box model with the Pleim-Xiu land surface model (PX LSM) and ozone deposition model components taken directly from the meteorology and air quality modeling system—WRF/CMAQ (Weather Research and Forecast model and Community Multiscale Air Quality model). The photosynthesis-based model for PX LSM (PX PSN) is evaluated at a FLUXNET site for implementation against different parameterizations and the current PX LSM approach with a simple Jarvis function (PX Jarvis). Latent heat flux (LH) from PX PSN is further evaluated at five FLUXNET sites with different vegetation types and landscape characteristics. Simulated ozone deposition and flux from PX PSN are evaluated at one of the sites with ozone flux measurements. Overall, the PX PSN simulates LH as well as the PX Jarvis approach. The PX PSN, however, shows distinct advantages over the PX Jarvis approach for grassland that likely result from its treatment of C3 and C4 plants for CO2 assimilation. Simulations using Moderate Resolution Imaging Spectroradiometer (MODIS) leaf area index (LAI) rather than LAI measured at each site assess how the model would perform with grid-averaged data used in WRF/CMAQ. MODIS LAI estimates degrade model performance at all sites except one with exceptionally old and tall trees. Ozone deposition velocity and ozone flux along with LH
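
    The core coupling in photosynthesis-based conductance schemes can be shown in a few lines: stomatal conductance follows a Ball-Berry-type relation and is solved jointly with net assimilation and the intercellular CO2 concentration. The sketch below uses a deliberately simplified assimilation function and made-up parameter values; it is an illustration of the coupling structure, not the PX PSN parameterization itself:

      # Toy coupled photosynthesis / Ball-Berry stomatal conductance solver.
      g0, g1 = 0.01, 9.0                    # Ball-Berry intercept (mol m-2 s-1) and slope (made-up)
      vcmax, km, rd = 60e-6, 400e-6, 1e-6   # simplified assimilation parameters (mol m-2 s-1, mol mol-1)
      ca, rh = 400e-6, 0.7                  # ambient CO2 (mol mol-1) and leaf-surface relative humidity

      def assimilation(ci):
          """Very simplified net assimilation as a function of intercellular CO2."""
          return vcmax * ci / (ci + km) - rd

      ci = 0.7 * ca                 # initial guess for intercellular CO2
      for _ in range(50):           # fixed-point iteration of the coupled system
          an = assimilation(ci)
          gs = g0 + g1 * an * rh / ca        # Ball-Berry relation (ca, rh approximate cs, hs)
          ci = ca - 1.6 * an / gs            # CO2 diffusion through stomata (1.6 = diffusivity ratio)

      print(f"A_n = {an * 1e6:.1f} umol m-2 s-1, g_s = {gs:.3f} mol m-2 s-1, c_i/c_a = {ci / ca:.2f}")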

  4. Computer simulations and experimental study on crash box of automobile in low speed collision

    NASA Astrophysics Data System (ADS)

    Liu, Yanjie; Ding, Lin; Yan, Shengyuan; Yang, Yongsheng

    2008-11-01

    Motivated by the behavior of energy-absorbing components in low-speed automobile collisions, and taking a low-speed frontal crash test of a crash box as the example, a simulation analysis of the crash box impact process was carried out with HyperMesh and LS-DYNA. The influence of each modeling parameter was analyzed through analytical solutions and comparison with tests, which ensured that the model was accurate. The combination of experimental and simulation results identified the weak parts of the crash box structure with respect to crashworthiness, and methods for improving crash box crashworthiness were discussed. The analysis results obtained from the numerical simulation of the crash box impact process were used to optimize the crash box design, which helps to improve the vehicle structure and reduce losses in collision accidents, and also provides a useful method for further research on automobile collisions.

  5. Dissecting children's observational learning of complex actions through selective video displays.

    PubMed

    Flynn, Emma; Whiten, Andrew

    2013-10-01

    Children can learn how to use complex objects by watching others, yet the relative importance of different elements they may observe, such as the interactions of the individual parts of the apparatus, a model's movements, and desirable outcomes, remains unclear. In total, 140 3-year-olds and 140 5-year-olds participated in a study where they observed a video showing tools being used to extract a reward item from a complex puzzle box. Conditions varied according to the elements that could be seen in the video: (a) the whole display, including the model's hands, the tools, and the box; (b) the tools and the box but not the model's hands; (c) the model's hands and the tools but not the box; (d) only the end state with the box opened; and (e) no demonstration. Children's later attempts at the task were coded to establish whether they imitated the hierarchically organized sequence of the model's actions, the action details, and/or the outcome. Children's successful retrieval of the reward from the box and the replication of hierarchical sequence information were reduced in all but the whole display condition. Only once children had attempted the task and witnessed a second demonstration did the display focused on the tools and box prove to be better for hierarchical sequence information than the display focused on the tools and hands only. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-02-01

    In this article we propose two conformal-mapping-based grid generation algorithms for global ocean general circulation models (OGCMs). Contrary to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the basic grid design problem of pole relocation, these new algorithms also address more advanced issues such as a smoothed scaling factor and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithms could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling where a complex land-ocean distribution is present.

  7. The CEO's role in business model reinvention.

    PubMed

    Govindarajan, Vijay; Trimble, Chris

    2011-01-01

    Fending off new competitors is a perennial struggle for established companies. Govindarajan and Trimble, of Dartmouth's Tuck School of Business, explain why: Many corporations become too comfortable with their existing business models and neglect the necessary work of radically reinventing them. The authors map out an alternative in their "three boxes" framework. They argue that while a CEO manages the present (box 1), he or she must also selectively forget the past (box 2) in order to create the future (box 3). Infosys chairman N.R. Narayana Murthy mastered the three boxes to reinvigorate his company and greatly increased its chances of enduring for generations.

  8. The power of structural modeling of sub-grid scales - application to astrophysical plasmas

    NASA Astrophysics Data System (ADS)

    Georgiev Vlaykov, Dimitar; Grete, Philipp

    2015-08-01

    In numerous astrophysical phenomena the dynamical range can span tens of orders of magnitude. This implies more than billions of degrees of freedom and precludes direct numerical simulations from ever being a realistic possibility. A physical model is necessary to capture the unresolved physics occurring at the sub-grid scales (SGS). Structural modeling is a powerful concept which renders itself applicable to various physical systems. It stems from the idea of capturing the structure of the SGS terms in the evolution equations based on the scale-separation mechanism and independently of the underlying physics. It originates in the hydrodynamics field of large-eddy simulations. We apply it to the study of astrophysical MHD. Here, we present a non-linear SGS model for compressible MHD turbulence. The model is validated a priori at the tensorial, vectorial and scalar levels against a set of high-resolution simulations of stochastically forced homogeneous isotropic turbulence in a periodic box. The parameter space spans two decades in sonic Mach number (0.2-20) and approximately one decade in magnetic Mach number (~1-8). This covers the super-Alfvenic sub-, trans-, and hyper-sonic regimes, with a range of plasma beta from 0.05 to 25. The Reynolds number is of the order of 10³. At the tensor level, the model components correlate well with the turbulence ones, at the level of 0.8 and above. Vectorially, the alignment with the true SGS terms is encouraging, with more than 50% of the model within 30° of the data. At the scalar level we look at the dynamics of the SGS energy and cross-helicity. The corresponding SGS flux terms have median correlations of ~0.8. Physically, the model represents well the two directions of the energy cascade. In comparison, traditional functional models exhibit poor local correlations with the data already at the scalar level. Vectorially, they are indifferent to the anisotropy of the SGS terms. They often struggle to represent the energy backscatter from small to large scales as well as the turbulent dynamo mechanism. Overall, the new model surpasses the traditional ones in all tests by a large margin.

  9. Photochemical grid model performance with varying horizontal grid resolution and sub-grid plume treatment for the Martins Creek near-field SO2 study

    NASA Astrophysics Data System (ADS)

    Baker, Kirk R.; Hawkins, Andy; Kelly, James T.

    2014-12-01

    Near source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increase. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.

  10. Grid2: A Program for Rapid Estimation of the Jovian Radiation Environment

    NASA Technical Reports Server (NTRS)

    Evans, R. W.; Brinza, D. E.

    2014-01-01

    Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) Jovian radiation model to compute fluences and doses for Jupiter missions. (Note: The successive versions of these two software packages have been GIRE and GIRE2, and likewise Grid and Grid2.) While GIRE2 is an important improvement over the original GIRE radiation model, the GIRE2 model can take as long as a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up the execution of the model: minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian diameters (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.
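
    The speed-up rests on replacing the full GIRE2 evaluation with interpolation in precomputed tables indexed by position and local time. The sketch below illustrates that table-lookup idea with a made-up 2-D table; the real Grid2 database, its dimensions and its FORTRAN interface are not reproduced here.

```python
# Table-lookup idea behind Grid2: precompute an expensive model on a grid,
# then answer mission queries by fast interpolation.
# The table below is synthetic; the real database is larger and FORTRAN-based.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

r = np.linspace(1.03, 30.0, 60)          # radial distance (Rj), assumed axis
lt = np.linspace(0.0, 24.0, 25)          # local time (hours), assumed axis
# synthetic "flux" table decaying with distance and modulated in local time
table = np.exp(-r[:, None] / 5.0) * (1.0 + 0.2 * np.cos(2 * np.pi * lt[None, :] / 24.0))

lookup = RegularGridInterpolator((r, lt), table)

# query a short trajectory segment: (distance, local time) pairs
traj = np.array([[5.9, 3.0], [9.4, 12.5], [15.0, 20.0]])
print(lookup(traj))                       # interpolated values, far cheaper than GIRE2
```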

  11. Coupled Particle Transport and Pattern Formation in a Nonlinear Leaky-Box Model

    NASA Technical Reports Server (NTRS)

    Barghouty, A. F.; El-Nemr, K. W.; Baird, J. K.

    2009-01-01

    Effects of particle-particle coupling on particle characteristics in nonlinear leaky-box type descriptions of the acceleration and transport of energetic particles in space plasmas are examined in the framework of a simple two-particle model based on the Fokker-Planck equation in momentum space. In this model, the two particles are assumed coupled via a common nonlinear source term. In analogy with a prototypical mathematical system of diffusion-driven instability, this work demonstrates that steady-state patterns with strong dependence on the magnetic turbulence but a rather weak one on the attributes of the coupled particles can emerge in solutions of a nonlinearly coupled leaky-box model. The insight gained from this simple model may be of wider use and significance to nonlinearly coupled leaky-box type descriptions in general.
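
    A minimal sketch of the coupling idea, reduced to two ordinary differential equations rather than the paper's Fokker-Planck description, with escape times, injection rates and the form of the shared nonlinear source all assumed for illustration:

```python
# Toy illustration of two "leaky-box" particle populations coupled through a
# common nonlinear source term. This is a drastic simplification of the
# Fokker-Planck description in the paper: the escape times, injection rates
# and the product form of the shared source are all assumed for illustration.
import numpy as np
from scipy.integrate import solve_ivp

tau1, tau2 = 2.0, 5.0      # escape ("leak") times
q1, q2 = 1.0, 0.5          # injection rates
alpha = 0.01               # shared nonlinear coupling; larger values can run away

def rhs(t, y):
    n1, n2 = y
    shared = alpha * n1 * n2          # common nonlinear source term (assumed form)
    return [q1 - n1 / tau1 + shared,
            q2 - n2 / tau2 + shared]

sol = solve_ivp(rhs, (0.0, 100.0), [0.0, 0.0], max_step=0.5)
print("steady-state populations:", np.round(sol.y[:, -1], 3))
```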

  12. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  13. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
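
    Independently of the sparse-PCE machinery, the object being propagated, an output p-box, can be built by a brute-force double loop: an outer loop over the epistemic (interval-valued) distribution parameters and an inner Monte Carlo loop over the aleatory variability. The toy model and parameter ranges below are assumptions; in the paper the expensive simulator is replaced by a sparse-PCE surrogate precisely to make this affordable.

```python
# Double-loop sketch of p-box propagation: outer loop over epistemic
# (interval-valued) distribution parameters, inner Monte Carlo loop over
# aleatory variability. The "computational model" here is a toy function;
# the paper replaces it with a sparse-PCE surrogate to cut the cost.
import numpy as np

rng = np.random.default_rng(1)

def model(x):                       # toy stand-in for the expensive simulator
    return x[:, 0] ** 2 + 2.0 * np.sin(x[:, 1])

# epistemic uncertainty: the mean of X1 is only known to lie in [0.8, 1.2]
mu1_interval = np.linspace(0.8, 1.2, 9)
n_mc = 2000
y_grid = np.linspace(-2.0, 6.0, 200)
cdf_lo = np.ones_like(y_grid)       # lower envelope of the output CDF
cdf_hi = np.zeros_like(y_grid)      # upper envelope of the output CDF

for mu1 in mu1_interval:            # outer (epistemic) loop
    x = np.column_stack([rng.normal(mu1, 0.2, n_mc),      # aleatory X1
                         rng.uniform(-1.0, 1.0, n_mc)])    # aleatory X2
    y = model(x)
    cdf = np.array([(y <= t).mean() for t in y_grid])      # empirical CDF
    cdf_lo = np.minimum(cdf_lo, cdf)
    cdf_hi = np.maximum(cdf_hi, cdf)

# cdf_lo and cdf_hi bound the response CDF: the output p-box
i = np.searchsorted(y_grid, 1.0)
print("p-box width at y=1:", cdf_hi[i] - cdf_lo[i])
```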

  14. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  15. Grid2: A Program for Rapid Estimation of the Jovian Radiation Environment: A Numeric Implementation of the GIRE2 Jovian Radiation Model for Estimating Trapped Radiation for Mission Concept Studies

    NASA Technical Reports Server (NTRS)

    Evans, R. W.; Brinza, D. E.

    2014-01-01

    Grid2 is a program that utilizes the Galileo Interim Radiation Electron model 2 (GIRE2) Jovian radiation model to compute fluences and doses for Jupiter missions. (Note: The successive versions of these two software packages have been GIRE and GIRE2, and likewise Grid and Grid2.) While GIRE2 is an important improvement over the original GIRE radiation model, the GIRE2 model can take as long as a day or more to compute these quantities for a complete mission. Grid2 fits the results of the detailed GIRE2 code with a set of grids in local time and position, thereby greatly speeding up the execution of the model: minutes as opposed to days. The Grid2 model covers the time period from 1971 to 2050 and distances of 1.03 to 30 Jovian diameters (Rj). It is available as a direct-access database through a FORTRAN interface program. The new database is only slightly larger than the original grid version: 1.5 gigabytes (GB) versus 1.2 GB.

  16. Grid Transmission Expansion Planning Model Based on Grid Vulnerability

    NASA Astrophysics Data System (ADS)

    Tang, Quan; Wang, Xi; Li, Ting; Zhang, Quanming; Zhang, Hongli; Li, Huaqiang

    2018-03-01

    Based on grid vulnerability and uniformity theory, a global network-structure and state vulnerability factor model is proposed to measure different grid configurations. A multi-objective power grid planning model is then established that considers global network vulnerability, economy and grid security constraints. An improved genetic algorithm with chaos-based crossover and mutation is used to search for the optimal plan. Because the objectives of the multi-objective optimization have different dimensions and their weights are not easily assigned, principal component analysis (PCA) is used to comprehensively assess the population in every generation, making the assessment more objective and credible. The feasibility and effectiveness of the proposed model are validated by simulation results for the Garver 6-bus and Garver 18-bus systems.
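
    Because the objectives carry different units and no obvious weights, the PCA-based assessment of each generation can be sketched as a composite scoring step; the objective matrix below is synthetic, and the orientation of the components would need fixing in a real implementation.

```python
# PCA-based composite scoring of one GA generation whose objectives are in
# different units (e.g. vulnerability index, cost, security margin).
# The objective matrix is synthetic.
import numpy as np

# rows = candidate plans, columns = objectives (assume "lower is better")
F = np.array([[0.42, 310.0, 1.8],
              [0.55, 250.0, 2.4],
              [0.38, 400.0, 1.2],
              [0.61, 220.0, 2.9]])

Z = (F - F.mean(axis=0)) / F.std(axis=0)        # standardize away units and scales
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)                  # variance share of each component
scores = Z @ Vt.T                                # principal-component scores

# variance-weighted composite index; in practice the sign/orientation of each
# component must be fixed so that "better" points the same way for all of them
composite = scores @ explained
print("explained variance:", np.round(explained, 2))
print("plan ranking (ascending composite):", np.argsort(composite))
```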

  17. EPA EcoBox

    EPA Pesticide Factsheets

    This toolbox for ecological risk assessment (EcoBox) includes more than 400 links to tools, models, and databases from EPA and its government partners that can aid risk assessors in performing exposure assessments.

  18. Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2010-01-01

    This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
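
    The grid dependence enters through the Cauchy-type leakage relation Q = C (h_stream - h_aquifer), with the conductance C traditionally assembled from streambed properties and cell geometry. The sketch below shows one common form of that formula and how per-cell values rescale with cell size; the paper's point is that, because the simulated aquifer head next to the stream also changes with resolution, conductances estimated on one grid cannot simply be carried to another. All numbers are illustrative.

```python
# Cauchy-boundary leakage sketch: Q = C * (h_stream - h_aquifer), with the
# conductance C assembled from streambed properties and cell geometry.
# Values are illustrative; they are not from the paper's test cases.
def conductance(k_bed, stream_len, stream_width, bed_thickness):
    """Traditional streambed conductance for the stream reach in one cell."""
    return k_bed * stream_len * stream_width / bed_thickness

k_bed = 0.1          # streambed hydraulic conductivity [m/d]
width = 5.0          # stream width [m]
thick = 0.5          # streambed thickness [m]
dh = 0.25            # head difference, stream minus aquifer [m]

for dx in (400.0, 200.0, 100.0, 50.0):       # horizontal cell size [m]
    c = conductance(k_bed, stream_len=dx, stream_width=width, bed_thickness=thick)
    q_cell = c * dh                           # leakage through this cell's reach
    n_cells = 400.0 / dx                      # cells covering the same 400 m reach
    print(f"dx={dx:5.0f} m  C={c:7.1f} m^2/d  Q_cell={q_cell:6.2f} m^3/d  "
          f"Q_total={q_cell * n_cells:6.2f} m^3/d")
```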

  19. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with local search metaheuristics. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.

  20. SIRTF Tools for DIRT

    NASA Astrophysics Data System (ADS)

    Pound, M. W.; Wolfire, M. G.; Amarnath, N. S.

    2004-07-01

    The Dust InfraRed ToolBox (DIRT - a part of the Web Infrared ToolShed, or WITS {http://dustem.astro.umd.edu}) is a Java applet for modeling astrophysical processes in circumstellar shells around young and evolved stars. DIRT has been used by the astrophysics community for about 5 years. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. We are adding new functionality to DIRT to support new missions like SIRTF and SOFIA. A new Instrument module allows for plotting of the model points convolved with the spatial and spectral responses of the selected instrument. This lets users better fit data from specific instruments. Currently, we have implemented modules for the Infrared Array Camera (IRAC) and Multiband Imaging Photometer (MIPS) on SIRTF. The models are based on the dust radiation transfer code of Wolfire & Cassinelli (1986) which accounts for multiple grain sizes and compositions. The model outputs are averaged over the instrument bands using the same weighting (νFν = constant) as the SIRTF data pipeline which allows the SIRTF data products to be compared directly with the model database. This work was supported in part by a NASA AISRP grant NAG 5-10751 and the SIRTF Legacy Science Program provided by NASA through an award issued by JPL under NASA contract 1407.
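
    The band-averaging step can be sketched as a response-weighted integral of the model spectrum; the exact νFν = constant normalization used by the SIRTF data pipeline (a color correction at the nominal band wavelength) is not reproduced here, and the band, response curve and model spectrum below are hypothetical.

```python
# Band-averaging a model spectrum over an instrument response so that model
# outputs can be compared with broadband photometry. This is a generic
# response-weighted average; the pipeline's nuFnu = const color-correction
# convention is not reproduced. Band, response and spectrum are hypothetical.
import numpy as np

nu = np.linspace(3.0e13, 5.0e13, 500)              # frequency grid [Hz], assumed band
f_model = 1.0e-26 * (nu / 4.0e13) ** -1.0           # hypothetical model F_nu

# hypothetical bell-shaped relative spectral response of the instrument band
response = np.exp(-0.5 * ((nu - 4.0e13) / 4.0e12) ** 2)

f_band = np.trapz(f_model * response, nu) / np.trapz(response, nu)
print(f"band-averaged model flux density: {f_band:.3e}")
```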

  1. Unconventional bearing capacity analysis and optimization of multicell box girders.

    PubMed

    Tepic, Jovan; Doroslovacki, Rade; Djelosevic, Mirko

    2014-01-01

    This study deals with unconventional bearing capacity analysis and the procedure of optimizing a two-cell box girder. The generalized model, which enables the local stress-strain analysis of multicell girders, was developed based on the principle of cross-sectional decomposition. The applied methodology is verified using the experimental data (Djelosevic et al., 2012) for traditionally formed box girders. The qualitative and quantitative evaluation of results obtained for the two-cell box girder is realized through comparative analysis using the finite element method (FEM) and the ANSYS v12 software. The deflection function obtained by analytical and numerical methods was found to be consistent, with a maximum deviation not exceeding 4%. Multicell box girders are rationally designed support structures characterized by much lower susceptibility of their cross-sectional elements to buckling and higher specific capacity than traditionally formed box girders. The developed local stress model is applied to optimizing the cross section of a two-cell box girder. The authors point to the advantages of implementing the model of local stresses in the optimization process and conclude that the technological reserve of bearing capacity amounts to 20% at the same girder weight and constant load conditions.

  2. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.

  3. No significant impact of Foxf1 siRNA treatment in acute and chronic CCl4 liver injury.

    PubMed

    Abshagen, Kerstin; Rotberg, Tobias; Genz, Berit; Vollmar, Brigitte

    2017-08-01

    Chronic liver injury of any etiology is the main trigger of fibrogenic responses and is thought to be mediated by hepatic stellate cells. In this context, activating transcription factors such as forkhead box f1 are described to stimulate pro-fibrogenic genes in hepatic stellate cells. Using a liver-specific siRNA delivery system (DBTC), we evaluated whether forkhead box f1 siRNA treatment exhibits beneficial effects in murine models of acute and chronic CCl4-induced liver injury. Systemic administration of DBTC-forkhead box f1 siRNA in mice was only sufficient to silence forkhead box f1 in the acute CCl4 model and was not able to attenuate liver injury as measured by liver enzymes and necrotic liver cell area. Therapeutic treatment of mice with DBTC-forkhead box f1 siRNA upon chronic CCl4 exposure failed to inhibit forkhead box f1 expression and hence failed to diminish hepatic stellate cell activation or fibrosis development. In conclusion, DBTC-forkhead box f1 siRNA reduced forkhead box f1 expression in a model of acute but not chronic toxic liver injury and showed no positive effects in either of these mouse models. Impact statement As liver fibrosis is a worldwide health problem, antifibrotic therapeutic strategies are urgently needed. Therefore, further development of new technologies, including validation in different experimental models of liver disease, is essential. Since activation of hepatic stellate cells is a key event upon liver injury, the activating transcription factor forkhead box f1 (Foxf1) represents a potential target gene. Previously, we evaluated Foxf1 silencing by a liver-specific siRNA delivery system (DBTC), which exerted beneficial effects in cholestasis. The present study was designed to confirm the therapeutic potential of Foxf1 siRNA in models of acute and chronic CCl4-induced liver injury. DBTC-Foxf1 siRNA was only sufficient to silence Foxf1 in the acute CCl4 model and did not ameliorate liver injury or fibrogenesis. This underlines the significance of the experimental model used. Each model displays specific characteristics in the pathogenic nature, time course and severity of fibrosis and the optimal time point for starting a therapy.

  4. Spiking Neurons in a Hierarchical Self-Organizing Map Model Can Learn to Develop Spatial and Temporal Properties of Entorhinal Grid Cells and Hippocampal Place Cells

    PubMed Central

    Pilly, Praveen K.; Grossberg, Stephen

    2013-01-01

    Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enables them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips composed of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous adaptive robots capable of spatial navigation. PMID:23577130

  5. What Is More Important for Fourth-Grade Primary School Students for Transforming Their Potential into Achievement: The Individual or the Environmental Box in Multidimensional Conceptions of Giftedness?

    ERIC Educational Resources Information Center

    Stoeger, Heidrun; Steinbach, Julia; Obergriesser, Stefanie; Matthes, Benjamin

    2014-01-01

    Multidimensional models of giftedness specify individual and environmental moderators or catalysts that help transform potential into achievement. However, these models do not state whether the importance of the "individual boxes" and the "environmental boxes" changes during this process. The present study examines whether,…

  6. Evaluation of the UnTRIM model for 3-D tidal circulation

    USGS Publications Warehouse

    Cheng, R.T.; Casulli, V.; ,

    2001-01-01

    A family of numerical models, known as the TRIM models, shares the same modeling philosophy for solving the shallow water equations. A characteristic analysis of the shallow water equations points out that the numerical instability is controlled by the gravity wave terms in the momentum equations and by the transport terms in the continuity equation. A semi-implicit finite-difference scheme has been formulated so that these terms and the vertical diffusion terms are treated implicitly and the remaining terms explicitly to control the numerical stability; the computations are carried out over a uniform finite-difference computational mesh without invoking horizontal or vertical coordinate transformations. An unstructured-grid version of the TRIM model, UnTRIM (pronounced "you trim"), is introduced; it preserves these basic numerical properties and the modeling philosophy, but the computations are carried out over an unstructured orthogonal grid. The unstructured grid offers flexibility in representing complex study areas so that fine grid resolution can be placed in regions of interest, and coarse grids are used to cover the remaining domain. Thus, the computational efforts are concentrated in areas of importance, and an overall computational saving can be achieved because the total number of grid points is dramatically reduced. To use this modeling approach, an unstructured grid mesh must be generated to properly reflect the properties of the domain of the investigation. The new modeling flexibility in grid structure is accompanied by new challenges associated with issues of grid generation. To take full advantage of this new model flexibility, the model grid generation should be guided by insights into the physics of the problems, and the insights needed may require a higher degree of modeling skill.

  7. Algebraic multigrid preconditioning within parallel finite-element solvers for 3-D electromagnetic modelling problems in geophysics

    NASA Astrophysics Data System (ADS)

    Koldan, Jelena; Puzyrev, Vladimir; de la Puente, Josep; Houzeaux, Guillaume; Cela, José María

    2014-06-01

    We present an elaborate preconditioning scheme for Krylov subspace methods which has been developed to improve the performance and reduce the execution time of parallel node-based finite-element (FE) solvers for 3-D electromagnetic (EM) numerical modelling in exploration geophysics. This new preconditioner is based on algebraic multigrid (AMG) that uses different basic relaxation methods, such as Jacobi, symmetric successive over-relaxation (SSOR) and Gauss-Seidel, as smoothers and the wave front algorithm to create groups, which are used for a coarse-level generation. We have implemented and tested this new preconditioner within our parallel nodal FE solver for 3-D forward problems in EM induction geophysics. We have performed a series of experiments for several models with different conductivity structures and characteristics to test the performance of our AMG preconditioning technique when combined with the biconjugate gradient stabilized method. The results have shown that the more challenging the problem is in terms of conductivity contrasts, ratio between the sizes of grid elements and/or frequency, the more benefit is obtained by using this preconditioner. Compared to other preconditioning schemes, such as diagonal, SSOR and truncated approximate inverse, the AMG preconditioner greatly improves the convergence of the iterative solver for all tested models. Also, in cases in which other preconditioners succeed in converging to a desired precision, AMG is able to considerably reduce the total execution time of the forward-problem code, by up to an order of magnitude. Furthermore, the tests have confirmed that our AMG scheme ensures a grid-independent rate of convergence, as well as improvement in convergence regardless of how large the local mesh refinements are. In addition, AMG is designed to be a black-box preconditioner, which makes it easy to use and combine with different iterative methods. Finally, it has proved to be very practical and efficient in the parallel context.
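
    As a small illustration of the black-box character of AMG preconditioning, the sketch below builds a smoothed-aggregation AMG hierarchy for a Poisson-type sparse matrix and hands it to BiCGStab as a preconditioner. The open-source pyamg and scipy packages are used purely as stand-ins; this is not the authors' parallel nodal FE solver, and the EM systems of the paper are far larger and complex-valued.

```python
# AMG-preconditioned BiCGStab on a sparse model problem, illustrating the
# "black-box" use of an AMG hierarchy as a preconditioner. pyamg/scipy are
# stand-ins for the authors' parallel FE solver; the matrix is a simple
# Poisson operator, not an EM induction system.
import numpy as np
import pyamg
from scipy.sparse.linalg import bicgstab

A = pyamg.gallery.poisson((200, 200), format="csr")   # sparse SPD test matrix
b = np.random.default_rng(0).standard_normal(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)             # build the AMG hierarchy
M = ml.aspreconditioner()                             # expose it as a preconditioner

iters = {"n": 0}
x, info = bicgstab(A, b, M=M, maxiter=200,
                   callback=lambda xk: iters.update(n=iters["n"] + 1))
print("converged" if info == 0 else "not converged", "after", iters["n"], "iterations")
print("residual norm:", np.linalg.norm(b - A @ x))
```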

  8. Metabolic Cages for a Space Flight Model in the Rat

    NASA Technical Reports Server (NTRS)

    Harper, Jennifer S.; Mulenburg, Gerald M.; Evans, Juli; Navidi, Meena; Wolinsky, Ira; Arnaud, Sara B.

    1994-01-01

    A variety of space flight models are available to mimic the physiologic changes seen in the rat during weightlessness. The model reported by Wronski and Morey-Holton has been widely used by many investigators, in musculoskeletal physiologic studies especially, resulting in accumulation of an extensive database that enables scientists to mimic space flight effects in the 1-g environment of Earth. However, information on nutrition or gastrointestinal and renal function in this space flight model is limited by the difficulty in acquiring uncontaminated metabolic specimens for analysis. In the Holton system, a traction tape harness is applied to the tail, and the rat's hindquarters are elevated by attaching the harness to a pulley system. Weight-bearing hind limbs are unloaded, and there is a headward fluid shift. The tail-suspended rats are able to move freely about their cages on their forelimbs and tolerate this procedure with minimal signs of stress. The cage used in Holton's model is basically a clear acrylic box set on a plastic grid floor with the pulley and tail harness system attached to the open top of the cage. Food is available from a square food cup recessed into a corner of the floor. In this system, urine, feces, and spilled food fall through the grid floor onto absorbent paper beneath the cage and cannot be separated and recovered quantitatively for analysis in metabolic balance studies. Commercially available metabolic cages are generally cylindrical and have been used with a centrally located suspension apparatus in other space flight models. The large living area, three times as large as most metabolic cages, and the free range of motion unique to Holton's model, essential for musculoskeletal investigations, were sacrificed. Holton's cages can accommodate animals ranging in weight from 70 to 600 g. Although an alternative construction of Holton's cage has been reported, it does not permit collection of separate urine and fecal samples. We describe the modifications to Holton's food delivery system, cage base, and the addition of a separator system for the collection of urine and fecal samples for metabolic and nutrition studies in the tail suspension model.

  9. A single-cell spiking model for the origin of grid-cell patterns

    PubMed Central

    Kempter, Richard

    2017-01-01

    Spatial cognition in mammals is thought to rely on the activity of grid cells in the entorhinal cortex, yet the fundamental principles underlying the origin of grid-cell firing are still debated. Grid-like patterns could emerge via Hebbian learning and neuronal adaptation, but current computational models remained too abstract to allow direct confrontation with experimental data. Here, we propose a single-cell spiking model that generates grid firing fields via spike-rate adaptation and spike-timing dependent plasticity. Through rigorous mathematical analysis applicable in the linear limit, we quantitatively predict the requirements for grid-pattern formation, and we establish a direct link to classical pattern-forming systems of the Turing type. Our study lays the groundwork for biophysically-realistic models of grid-cell activity. PMID:28968386

  10. New ghost-node method for linking different models with varied grid refinement

    USGS Publications Warehouse

    James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.

    2006-01-01

    A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined "child" model that is contained within a larger and coarser "parent" model that is based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.

  11. Using box models to quantify zonal distributions and emissions of halocarbons in the background atmosphere.

    NASA Astrophysics Data System (ADS)

    Elkins, J. W.; Nance, J. D.; Dutton, G. S.; Montzka, S. A.; Hall, B. D.; Miller, B.; Butler, J. H.; Mondeel, D. J.; Siso, C.; Moore, F. L.; Hintsa, E. J.; Wofsy, S. C.; Rigby, M. L.

    2015-12-01

    The Halocarbons and other Atmospheric Trace Species (HATS) group of NOAA's Global Monitoring Division started measurements of the major chlorofluorocarbons and nitrous oxide in 1977 from flask samples collected at five remote sites around the world. Our program has expanded to over 40 compounds at twelve sites, which include six in situ instruments and twelve flask sites. The Montreal Protocol on Substances that Deplete the Ozone Layer and its subsequent amendments have helped to decrease the concentrations of many of the ozone-depleting compounds in the atmosphere. In this presentation, our goal is to provide zonal emission estimates for these trace gases from multi-box models and their estimated atmospheric lifetimes, and to make the emission values available on our web site. We plan to use our airborne measurements to calibrate the exchange times between the boxes for 5-box and 12-box models using sulfur hexafluoride, whose emissions are better understood.
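
    In a hemispheric multi-box picture, each box obeys a burden balance with emission, loss at the atmospheric lifetime, and interhemispheric exchange, so zonal emissions can be backed out from observed mixing-ratio trends. A two-box sketch with made-up numbers (not HATS data):

```python
# Two-box (hemispheric) mass-balance sketch: infer emissions from observed
# mixing-ratio trends, a loss lifetime, and an interhemispheric exchange time.
# Numbers below are made up; they are not HATS measurements.
tau = 50.0        # atmospheric lifetime [yr], assumed
tau_ex = 1.3      # interhemispheric exchange time [yr], commonly used value
m_hemi = 2.6e18   # approximate mass of air in one hemisphere [kg]
mw_gas, mw_air = 137.4, 28.97   # molar masses [g/mol], e.g. a CFC-like gas

def emissions(c, c_other, dcdt):
    """Hemispheric emission [kg/yr] from mixing ratio c [ppt], its trend
    dcdt [ppt/yr], and the other hemisphere's mixing ratio c_other [ppt]."""
    ppt_to_kg = 1e-12 * m_hemi * mw_gas / mw_air     # ppt -> kg of gas per hemisphere
    burden, d_burden = c * ppt_to_kg, dcdt * ppt_to_kg
    exchange = (c_other - c) * ppt_to_kg / tau_ex    # net import from the other box
    return d_burden + burden / tau - exchange

# hypothetical annual-mean mixing ratios [ppt] and trends [ppt/yr]
cn, cs = 240.0, 232.0            # northern / southern hemisphere
dcn, dcs = 1.5, 1.2
print("NH emissions ~ %.1f Gg/yr" % (emissions(cn, cs, dcn) / 1e6))
print("SH emissions ~ %.1f Gg/yr" % (emissions(cs, cn, dcs) / 1e6))
```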

  12. Parameterized Finite Element Modeling and Buckling Analysis of Six Typical Composite Grid Cylindrical Shells

    NASA Astrophysics Data System (ADS)

    Lai, Changliang; Wang, Junbiao; Liu, Chuang

    2014-10-01

    Six typical composite grid cylindrical shells are constructed by superimposing three basic types of ribs. The buckling behavior and structural efficiency of these shells are then analyzed under axial compression, pure bending, torsion and transverse bending by finite element (FE) models. The FE models are created by a parametric FE modeling approach that defines FE models with the original naturally twisted geometry and orients the cross-sections of beam elements exactly; the approach is parameterized and coded in the Patran Command Language (PCL). The demonstrations of FE modeling indicate that the program enables efficient generation of FE models and facilitates parametric studies and design of grid shells. Using the program, the effects of helical angles on the buckling behavior of the six typical grid cylindrical shells are determined. The results of these studies indicate that the triangle grid and rotated triangle grid cylindrical shells are more efficient than the others under axial compression and pure bending, whereas under torsion and transverse bending, the hexagon grid cylindrical shell is most efficient. Additionally, buckling mode shapes are compared and provide an understanding of composite grid cylindrical shells that is useful in preliminary design of such structures.

  13. Power Grid Construction Project Portfolio Optimization Based on Bi-level programming model

    NASA Astrophysics Data System (ADS)

    Zhao, Erdong; Li, Shangqi

    2017-08-01

    As the main body of power grid operation, county-level power supply enterprises undertake an important mission: guaranteeing the security of power grid operation and safeguarding orderly social power use. The optimization of grid construction project portfolios has become a key issue for the power supply capacity and service level of grid enterprises. Starting from the actual situation of power grid construction project optimization in county-level power enterprises, and on the basis of a qualitative analysis of the projects, this paper builds a bi-level programming model grounded in quantitative analysis. The upper level of the model captures the objective constraints of the optimal portfolio; the lower level captures the enterprise's financial restrictions on the size of the project portfolio. Finally, a real example illustrates the operation and the optimization results of the model. Through the combination of qualitative and quantitative analysis, the bi-level programming model improves the accuracy and standardization of grid enterprises' project decisions.
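
    Stripped to its core, the bi-level structure is: the upper level picks the portfolio that best meets the planning objectives, while the lower level enforces the enterprise's financial limit on portfolio size. A toy enumeration with made-up projects, scores and budget:

```python
# Toy flavour of bi-level portfolio selection: the upper level maximizes a
# planning score, the lower level enforces the enterprise's financial limit.
# Projects, scores, costs and the budget are all made up.
from itertools import combinations

projects = {                 # name: (planning score, cost in M CNY)
    "P1": (8.0, 30.0),
    "P2": (6.5, 22.0),
    "P3": (9.2, 41.0),
    "P4": (4.8, 15.0),
    "P5": (7.1, 27.0),
}
budget = 80.0                # lower-level financial restriction

def feasible(portfolio):     # lower level: financial constraint on portfolio size
    return sum(projects[p][1] for p in portfolio) <= budget

best = max(
    (c for r in range(1, len(projects) + 1)
       for c in combinations(projects, r) if feasible(c)),
    key=lambda c: sum(projects[p][0] for p in c),
)
print("selected portfolio:", best,
      "score:", sum(projects[p][0] for p in best),
      "cost:", sum(projects[p][1] for p in best))
```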

  14. Energy Spectra of Higher Reynolds Number Turbulence by the DNS with up to 12288^3 Grid Points

    NASA Astrophysics Data System (ADS)

    Ishihara, Takashi; Kaneda, Yukio; Morishita, Koji; Yokokawa, Mitsuo; Uno, Atsuya

    2014-11-01

    Large-scale direct numerical simulations (DNS) of forced incompressible turbulence in a periodic box with up to 12288^3 grid points have been performed using the K computer. The maximum Taylor-microscale Reynolds number Rλ and the maximum Reynolds number Re based on the integral length scale are over 2000 and 10^5, respectively. Our previous DNS with Rλ up to 1100 showed that the energy spectrum has a slope steeper than -5/3 (the Kolmogorov scaling law) by about 0.1 in the wavenumber range kη < 0.03, where η is the Kolmogorov length scale. Our present DNS at higher resolutions show that the energy spectra at different Reynolds numbers (Rλ > 1000) are well normalized not by the integral length scale but by the Kolmogorov length scale in the wavenumber range of the steeper slope. This result indicates that the steeper slope is not an inherent characteristic of the inertial subrange but is affected by viscosity.
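
    The diagnostic in question is the shell-averaged kinetic energy spectrum E(k) of a periodic velocity field, obtained by binning |û(k)|² over spherical wavenumber shells. A minimal sketch on a tiny random field (nothing like a 12288^3 DNS):

```python
# Shell-averaged kinetic energy spectrum E(k) of a periodic velocity field,
# the quantity whose slope is discussed above. Tiny random field for brevity.
import numpy as np

n = 64
rng = np.random.default_rng(0)
u = rng.standard_normal((3, n, n, n))               # stand-in velocity components

k1 = np.fft.fftfreq(n, d=1.0 / n)                   # integer wavenumbers
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
shell = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)   # spherical shell index

# energy density in Fourier space: 0.5 * sum_i |u_i(k)|^2 (normalized FFT)
uhat = np.fft.fftn(u, axes=(1, 2, 3)) / n**3
e_k = 0.5 * np.sum(np.abs(uhat) ** 2, axis=0)

E = np.bincount(shell.ravel(), weights=e_k.ravel())  # E(k) summed over shells
for kk in range(1, 9):
    print(f"k={kk:2d}  E(k)={E[kk]:.3e}  compensated k^(5/3)E(k)={kk**(5/3) * E[kk]:.3e}")
```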

  15. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences

    NASA Astrophysics Data System (ADS)

    Bosman, Peter A. N.; Alderliesten, Tanja

    2016-03-01

    We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization, or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual expert-based and one automated image-feature-based. We consider a CT test case with large differences in bladder volume with and without a multi-resolution scheme and find a substantial benefit of using smart grid initialization.

  16. BeatBox-HPC simulation environment for biophysically and anatomically realistic cardiac electrophysiology.

    PubMed

    Antonioletti, Mario; Biktashev, Vadim N; Jackson, Adrian; Kharche, Sanjay R; Stary, Tomas; Biktasheva, Irina V

    2017-01-01

    The BeatBox simulation environment combines a flexible scripting-language user interface with robust computational tools in order to set up cardiac electrophysiology in-silico experiments without re-coding at low level, so that cell excitation, tissue/anatomy models and stimulation protocols may be included in a BeatBox script, and simulations run either sequentially or in parallel (MPI) without re-compilation. BeatBox is free software written in the C language to be run on a Unix-based platform. It provides the whole spectrum of multi-scale tissue modelling, from 0-dimensional individual-cell simulation, through 1-dimensional fibre, 2-dimensional sheet and 3-dimensional slab of tissue, up to anatomically realistic whole-heart simulations, with run-time measurements including cardiac re-entry tip/filament tracing, ECG, local/global samples of any variables, etc. The BeatBox solvers and the cell and tissue/anatomy model repositories are extended via robust and flexible interfaces, thus providing an open framework for new developments in the field. In this paper we give an overview of the current state of BeatBox, together with a description of the main computational methods and MPI parallelisation approaches.

  17. A comparison between skeleton and bounding box models for falling direction recognition

    NASA Astrophysics Data System (ADS)

    Narupiyakul, Lalita; Srisrisawang, Nitikorn

    2017-12-01

    Falling can lead to serious medical conditions in people of every age. In the case of the elderly, however, the risk of serious injury is much higher. Because one way of preventing serious injury is to treat the fallen person as soon as possible, several works have implemented different algorithms to recognize a fall. Our work compares the performance of two models based on feature extraction: (i) body joint data (skeleton data), i.e. the joints' positions along three axes, and (ii) a bounding box (box-size data) covering all body joints. The machine learning algorithms chosen are Decision Tree (DT), Naïve Bayes (NB), K-nearest neighbors (KNN), Linear discriminant analysis (LDA), Voting Classification (VC), and Gradient boosting (GB). The results illustrate that the models trained with skeleton data perform far better than those trained with box-size data (with average accuracies of 94-81% and 80-75%, respectively). KNN shows the best performance for both the body joint model and the bounding box model. In conclusion, KNN with the body joint model performs best among the others.
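
    The comparison boils down to training the same classifier on two feature sets of very different dimensionality. A sketch with one of the listed algorithms (KNN) and random placeholder features, which therefore yields only chance-level accuracy:

```python
# Training the same classifier on two feature representations (body-joint
# "skeleton" features vs. bounding-box size features), mirroring the comparison
# above with one of the listed algorithms (KNN). Random placeholder data, so
# the accuracies here are only chance level; real recordings separate the classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_samples = 200
y = rng.integers(0, 4, n_samples)                     # 4 hypothetical fall directions

skeleton = rng.standard_normal((n_samples, 25 * 3))   # 25 joints x (x, y, z)
box_size = rng.standard_normal((n_samples, 3))        # bounding-box extents

for name, X in [("skeleton", skeleton), ("bounding box", box_size)]:
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    print(f"{name:12s} accuracy: {clf.score(Xte, yte):.2f}")
```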

  18. A Gridded Climatology of Clouds over Land (1971-1996) and Ocean (1954-2008) from Surface Observations Worldwide (NDP-026E)*

    DOE Data Explorer

    Hahn, C. J. [University of Arizona; Warren, S. G. [University of Washington

    2007-01-01

    Surface synoptic weather reports from ships and land stations worldwide were processed to produce a global cloud climatology which includes: total cloud cover, the amount and frequency of occurrence of nine cloud types within three levels of the troposphere, the frequency of occurrence of clear sky and of precipitation, the base heights of low clouds, and the non-overlapped amounts of middle and high clouds. Synoptic weather reports are made every three hours; the cloud information in a report is obtained visually by human observers. The reports used here cover the period 1971-96 for land and 1954-2008 for ocean. This digital archive provides multi-year monthly, seasonal, and annual averages in 5x5-degree grid boxes (or 10x10-degree boxes for some quantities over the ocean). Daytime and nighttime averages, as well as the diurnal average (average of day and night), are given. Nighttime averages were computed using only those reports that met an "illuminance criterion" (i.e., made under adequate moonlight or twilight), thus minimizing the "night-detection bias" and making possible the determination of diurnal cycles and nighttime trends for cloud types. The phase and amplitude of the first harmonic of both the diurnal cycle and the annual cycle are given for the various cloud types. Cloud averages for individual years are also given for the ocean for each of 4 seasons, and for each of the 12 months (daytime-only averages for the months). [Individual years for land are not gridded, but are given for individual stations in a companion data set, CDIAC's NDP-026D.] This analysis used 185 million reports from 5388 weather stations on continents and islands, and 50 million reports from ships; these reports passed a series of quality-control checks. This analysis updates (and in most ways supersedes) the previous cloud climatology constructed by the authors in the 1980s. Many of the long-term averages described here are mapped on the University of Washington, Department of Atmospheric Sciences Web site. The Online Cloud Atlas containing NDP-026E data is available via the University of Washington.

  19. Assessing image quality of low-cost laparoscopic box trainers: options for residents training at home.

    PubMed

    Kiely, Daniel J; Stephanson, Kirk; Ross, Sue

    2011-10-01

    Low-cost laparoscopic box trainers built using home computers and webcams may provide residents with a useful tool for practice at home. This study set out to evaluate the image quality of low-cost laparoscopic box trainers compared with a commercially available model. Five low-cost laparoscopic box trainers including the components listed were compared in random order to one commercially available box trainer: A (high-definition USB 2.0 webcam, PC laptop), B (Firewire webcam, Mac laptop), C (high-definition USB 2.0 webcam, Mac laptop), D (standard USB webcam, PC desktop), E (Firewire webcam, PC desktop), and F (the TRLCD03 3-DMEd Standard Minimally Invasive Training System). Participants observed still image quality and performed a peg transfer task using each box trainer. Participants rated still image quality, image quality with motion, and whether the box trainer had sufficient image quality to be useful for training. Sixteen residents in obstetrics and gynecology took part in the study. The box trainers showing no statistically significant difference from the commercially available model were A, B, C, D, and E for still image quality; A for image quality with motion; and A and B for usefulness of the simulator based on image quality. The cost of the box trainers A-E is approximately $100 to $160 each, not including a computer or laparoscopic instruments. Laparoscopic box trainers built from a high-definition USB 2.0 webcam with a PC (box trainer A) or from a Firewire webcam with a Mac (box trainer B) provide image quality comparable with a commercial standard.

  20. Deciphering the molecular mechanisms underlying the binding of the TWIST1/E12 complex to regulatory E-box sequences

    PubMed Central

    Bouard, Charlotte; Terreux, Raphael; Honorat, Mylène; Manship, Brigitte; Ansieau, Stéphane; Vigneron, Arnaud M.; Puisieux, Alain; Payen, Léa

    2016-01-01

    The TWIST1 bHLH transcription factor controls embryonic development and cancer processes. Although molecular and genetic analyses have provided a wealth of data on the role of bHLH transcription factors, very little is known about the molecular mechanisms underlying their binding affinity to the E-box sequence of the promoter. Here, we used an in silico model of the TWIST1/E12 (TE) heterocomplex and performed molecular dynamics (MD) simulations of its binding to specific (TE-box) and modified E-box sequences. We focused on (i) active E-box and inactive E-box sequences, on (ii) modified active E-box sequences, as well as on (iii) two box sequences with modified adjacent bases, the AT- and TA-boxes. Our in silico models were supported by functional in vitro binding assays. This exploration highlighted the predominant role of protein side-chain residues, close to the heart of the complex, in anchoring the dimer to DNA sequences, and unveiled a shift towards adjacent ((-1) and (-1*)) bases and conserved bases of modified E-box sequences. In conclusion, our study provides proof of the predictive value of these MD simulations, which may contribute to the characterization of specific inhibitors by docking approaches, and their use in pharmacological therapies by blocking the tumoral TWIST1/E12 function in cancers. PMID:27151200

  1. A 24.5-Year Global Dataset of Direct Normal Irradiance: Result from the Application of a Global-to-Beam Model to the NASA GEWEX SRB Global Horizontal Irradiance

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Stackhouse, P. W.; Chandler, W.; Hoell, J. M., Jr.; Westberg, D. J.

    2015-12-01

    The DIRINDEX model has previously been applied to the NASA GEWEX SRB Release 3.0 global horizontal irradiances (GHIs) to derive 3-hourly, daily and monthly mean direct normal irradiances (DNIs) for the period from 2000 to 2005 (http://dx.doi.org/10.1016/j.solener.2014.09.006), though the model was originally designed to estimate hourly DNIs from hourly GHIs. Input to the DIRINDEX model comprised 1.) the 3-hourly all-sky and clear-sky GHIs from the GEWEX SRB dataset; 2.) the surface pressure and the atmospheric column water vapor from the GEOS4 dataset; and 3.) daily mean aerosol optical depth at 700 nm derived from the daily mean aerosol data from the Model of Atmospheric Transport and CHemistry (MATCH). The GEWEX SRB data is spatially available on a quasi-equal-area global grid system consisting of 44016 boxes ranging from 1 degree latitude by 1 degree longitude around the Equator to 1 degree latitude by 120 degree longitude next to the poles. The derived DNIs were on the same grid system. Due to the limited availability of the MATCH aerosol data, the model was applied to the years from 2000 to 2005 only. The results were compared with ground-based measurements from 39 sites of the Baseline Surface Radiation Network (BSRN). The comparison statistics show that the results were in better agreement with their BSRN counterparts than the current Surface meteorology and Solar Energy (SSE) Release 6.0 data (https://eosweb.larc.nasa.gov/sse/). In this paper, we present results from the model over the entire time span of the GEWEX SRB Release 3.0 data (July 1983 to December 2007) in which the MERRA atmospheric data were substituted for the GEOS4 data, and the Max-Planck Aerosol Climatology Version 1 (MAC-v1) data for the MATCH data. As a consequence, we derived a 24.5-year DNI dataset of global coverage continuous from July 1983 to December 2007. Comparisons with the BSRN data show that the results are comparable in quality with those from the earlier application.

  2. Occupancy modeling reveals territory-level effects of nest boxes on the presence, colonization, and persistence of a declining raptor in a fruit-growing region.

    PubMed

    Shave, Megan E; Lindell, Catherine A

    2017-01-01

    Nest boxes for predators in agricultural regions are an easily implemented tool to improve local habitat quality with potential benefits for both conservation and agriculture. The potential for nest boxes to increase raptor populations in agricultural regions is of particular interest given their positions as top predators. This study examined the effects of cherry orchard nest boxes on the local breeding population of a declining species, the American Kestrel (Falco sparverius), in a fruit-growing region of Michigan. During the 2013-2016 study, we added a total of 23 new nest boxes in addition to 24 intact boxes installed previously; kestrels used up to 100% of our new boxes each season. We conducted temporally-replicated surveys along four roadside transects divided into 1.6 km × 500 m sites. We developed a multi-season occupancy model under a Bayesian framework and found that nest boxes had strong positive effects on first-year site occupancy, site colonization, and site persistence probabilities. The estimated number of occupied sites increased between 2013 and 2016, which correlated with the increase in number of sites with boxes. Kestrel detections decreased with survey date but were not affected by time of day or activity at the boxes themselves. These results indicate that nest boxes determined the presence of kestrels at our study sites and support the conclusion that the local kestrel population is likely limited by nest site availability. Furthermore, our results are highly relevant to the farmers on whose properties the boxes were installed, for we can conclude that installing a nest box in an orchard resulted in a high probability of kestrels occupying that orchard or the areas adjacent to it.

  3. Towards the Irving-Kirkwood limit of the mechanical stress tensor

    NASA Astrophysics Data System (ADS)

    Smith, E. R.; Heyes, D. M.; Dini, D.

    2017-06-01

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ^3. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double-peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.
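
    Independently of the MOP/VA machinery, the bookkeeping behind a PDF of locally averaged pressure can be sketched by binning per-particle contributions into cubes of side ℓ and histogramming the per-cube averages; the per-particle values below are synthetic placeholders rather than LJ kinetic-plus-virial terms.

```python
# PDF of a locally volume-averaged quantity as a function of subvolume size l:
# bin per-particle contributions into cubes of side l and examine the spread
# of the per-cube values. Per-particle "pressures" here are synthetic
# placeholders for the kinetic + virial contributions of an LJ simulation.
import numpy as np

rng = np.random.default_rng(2)
L = 12.0                                        # box side (LJ reduced units)
n_part = 20000
pos = rng.uniform(0.0, L, (n_part, 3))
p_i = 1.0 + 0.3 * rng.standard_normal(n_part)   # per-particle pressure contribution

for ell in (4.0, 2.0, 1.0):
    nbin = int(L / ell)
    idx = np.floor(pos / ell).astype(int)
    flat = np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]), (nbin, nbin, nbin))
    sums = np.bincount(flat, weights=p_i, minlength=nbin**3)
    local_p = sums / ell**3                      # volume-averaged value per subvolume
    print(f"l={ell:3.1f}  mean={local_p.mean():.3f}  std={local_p.std():.3f}  "
          f"(histogram of local_p gives the PDF)")
```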

  4. Towards the Irving-Kirkwood limit of the mechanical stress tensor.

    PubMed

    Smith, E R; Heyes, D M; Dini, D

    2017-06-14

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ^3. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double-peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems.

  5. Towards the Irving-Kirkwood limit of the mechanical stress tensor

    PubMed Central

    Heyes, D. M.; Dini, D.

    2017-01-01

    The probability density functions (PDFs) of the local measure of pressure as a function of the sampling volume are computed for a model Lennard-Jones (LJ) fluid using the Method of Planes (MOP) and Volume Averaging (VA) techniques. This builds on the study of Heyes, Dini, and Smith [J. Chem. Phys. 145, 104504 (2016)] which only considered the VA method for larger subvolumes. The focus here is typically on much smaller subvolumes than considered previously, which tend to the Irving-Kirkwood limit where the pressure tensor is defined at a point. The PDFs from the MOP and VA routes are compared for cubic subvolumes, V = ℓ^3. Using very high grid-resolution and box-counting analysis, we also show that any measurement of pressure in a molecular system will fail to exactly capture the molecular configuration. This suggests that it is impossible to obtain the pressure in the Irving-Kirkwood limit using the commonly employed grid based averaging techniques. More importantly, below ℓ ≈ 3 in LJ reduced units, the PDFs depart from Gaussian statistics, and for ℓ = 1.0, a double-peaked PDF is observed in the MOP but not VA pressure distributions. This departure from a Gaussian shape means that the average pressure is not the most representative or common value to arise. In addition to contributing to our understanding of local pressure formulas, this work shows a clear lower limit on the validity of simply taking the average value when coarse graining pressure from molecular (and colloidal) systems. PMID:29166053

  6. Domain modeling and grid generation for multi-block structured grids with application to aerodynamic and hydrodynamic configurations

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Boerstoel, J. W.; Vitagliano, P. L.; Kuyvenhoven, J. L.

    1992-01-01

    About five years ago, a joint development was started of a flow simulation system for engine-airframe integration studies on propeller as well as jet aircraft. The initial system was based on the Euler equations and made operational for industrial aerodynamic design work. The system consists of three major components: a domain modeller, for the graphical interactive subdivision of flow domains into an unstructured collection of blocks; a grid generator, for the graphical interactive computation of structured grids in blocks; and a flow solver, for the computation of flows on multi-block grids. The industrial partners of the collaboration and NLR have demonstrated that the domain modeller, grid generator and flow solver can be applied to simulate Euler flows around complete aircraft, including propulsion system simulation. Extension to Navier-Stokes flows is in progress. Delft Hydraulics has shown that both the domain modeller and grid generator can also be applied successfully for hydrodynamic configurations. An overview is given about the main aspects of both domain modelling and grid generation.

  7. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, the underlying network model, the interactions and relationships between subsystems, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the cascading failure effect and provide a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse. We also determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
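    A minimal sketch of the percolation-style cascade described above, under stated assumptions: two one-to-one interdependent Erdős-Rényi layers with identity coupling stand in for the paper's actual smart grid topology, and the `cascade` helper and its parameters are illustrative only.

```python
# Minimal sketch (assumptions, not the paper's model): cascading failures in two
# one-to-one interdependent Erdos-Renyi networks. A node keeps functioning only
# if it lies in the giant component of its own layer and its partner in the
# other layer still functions.
import networkx as nx
import numpy as np

def giant_component(G):
    if G.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(G), key=len)

def cascade(n=2000, k=4, attack_fraction=0.3, seed=1):
    rng = np.random.default_rng(seed)
    A = nx.gnp_random_graph(n, k / n, seed=seed)        # e.g. power layer
    B = nx.gnp_random_graph(n, k / n, seed=seed + 1)    # e.g. control/communication layer
    alive = set(range(n))
    # initial random attack on layer A
    attacked = set(rng.choice(n, size=int(attack_fraction * n), replace=False).tolist())
    alive -= attacked
    while True:
        ga = giant_component(A.subgraph(alive))
        gb = giant_component(B.subgraph(alive))
        new_alive = alive & ga & gb          # must be functional in both layers
        if new_alive == alive:
            break
        alive = new_alive
    return len(alive) / n

for f in (0.1, 0.3, 0.5, 0.7):
    print(f"attack fraction {f:.1f} -> surviving giant fraction {cascade(attack_fraction=f):.3f}")
```

    Sweeping the attack fraction in this toy setup reproduces the qualitative threshold behaviour mentioned in the abstract: above a critical fraction of removed nodes, the surviving giant component collapses abruptly.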

  8. Thermodynamic modeling of small scale biomass gasifiers: Development and assessment of the 'Multi-Box' approach.

    PubMed

    Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco

    2016-04-01

    Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling of small-scale/fixed-bed biomass gasifiers is of particular interest because of their growing commercial deployment. Fixed bed gasifiers are characterized by a wide range of operational conditions and are multi-zoned processes. The reactants are distributed in different phases, and the products from each zone influence the following process steps and thus the composition of the final products. The present study aims to improve conventional 'Black-Box' thermodynamic modeling by developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small scale gasifiers. The model is therefore named 'Multi-Box'. Experimental data from a small scale gasifier have been used for the validation of the model. The returned results are significantly closer to the actual case-study measurements than those of single-stage thermodynamic modeling.

  9. Direct Comparisons of Ice Cloud Macro- and Microphysical Properties Simulated by the Community Atmosphere Model CAM5 with HIPPO Aircraft Observations

    NASA Astrophysics Data System (ADS)

    Wu, C.; Liu, X.; Diao, M.; Zhang, K.; Gettelman, A.

    2015-12-01

    A dominant source of uncertainty in climate system modeling lies in the representation of cloud processes, not only because of the great complexity of cloud microphysics, but also because of the large variations of cloud amount and macroscopic properties in time and space. In this study, the cloud properties simulated by the Community Atmosphere Model version 5.4 (CAM5.4) are evaluated using the HIAPER Pole-to-Pole Observations (HIPPO, 2009-2011). CAM5.4 is driven by the meteorology (U, V, and T) from GEOS5 analysis, while water vapor, hydrometeors and aerosols are calculated by the model itself. For direct comparison of CAM5.4 and HIPPO observations, model output is collocated with the HIPPO flights. Generally, the model is able to capture specific cloud systems at meso- to large scales. In total, the model reproduces 80% of observed cloud occurrences inside model grid boxes, and even more (93%) for ice clouds (T ≤ -40°C). However, the model also produces many clouds that are not present in the observations, and it simulates significantly larger cloud fractions than observed, including for ice clouds. Further analysis shows that the overestimation results from a bias in relative humidity (RH) in the model. The RH bias can be mostly attributed to discrepancies in water vapor, and to a lesser extent to those in temperature. At the micro-scale level of ice clouds, the model simulates reasonably well the magnitude of ice and snow number concentration (Ni, with diameter larger than 75 μm). However, the model simulates fewer occurrences of Ni > 50 L⁻¹. This can be partially ascribed to the low bias of aerosol number concentration (Naer, with diameter between 0.1 and 1 μm) simulated by the model. Moreover, the model significantly underestimates both the number mean diameter (Di,n) and the volume mean diameter (Di,v) of ice/snow. The results show that the underestimation may be related to a weaker positive relationship between Di,n and Naer and/or the underestimation of Naer. Finally, it is suggested that better representation of the sub-grid variability of meteorology (e.g., water vapor) is needed to improve the formation and evolution of ice clouds in the model.
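    The grid-box collocation step described above can be illustrated with a toy sketch: map each aircraft sample onto the model grid box containing it and compute the fraction of observed cloudy samples that the model also flags as cloudy. The grid spacing, the numbers, and the `collocate_hit_rate` helper are hypothetical; this is not the CAM5/HIPPO processing chain.

```python
# Minimal collocation sketch (illustrative only): bin aircraft samples into the
# model grid boxes containing them, then count how often the model also reports
# cloud where the aircraft observed cloud.
import numpy as np

def collocate_hit_rate(obs_lat, obs_lon, obs_cloud, model_cloud, lat_edges, lon_edges):
    """obs_cloud / model_cloud are boolean; model_cloud has shape (nlat, nlon)."""
    i = np.digitize(obs_lat, lat_edges) - 1          # grid-box row of each sample
    j = np.digitize(obs_lon % 360.0, lon_edges) - 1  # grid-box column
    valid = (i >= 0) & (i < model_cloud.shape[0]) & (j >= 0) & (j < model_cloud.shape[1])
    i, j, obs_cloud = i[valid], j[valid], obs_cloud[valid]
    hits = model_cloud[i, j] & obs_cloud
    return hits.sum() / max(obs_cloud.sum(), 1)      # fraction of observed cloud captured

# toy example on a 1.9 x 2.5 degree grid (hypothetical numbers)
lat_edges = np.arange(-90.0, 90.1, 1.9)
lon_edges = np.arange(0.0, 360.1, 2.5)
rng = np.random.default_rng(0)
nlat, nlon = len(lat_edges) - 1, len(lon_edges) - 1
model_cloud = rng.random((nlat, nlon)) < 0.4
obs_lat = rng.uniform(-70, 70, 5000)
obs_lon = rng.uniform(0, 360, 5000)
obs_cloud = rng.random(5000) < 0.3
print(f"fraction of observed cloudy samples reproduced: "
      f"{collocate_hit_rate(obs_lat, obs_lon, obs_cloud, model_cloud, lat_edges, lon_edges):.2f}")
```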

  10. FUN3D and CFL3D Computations for the First High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.

    2011-01-01

    Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.

  11. [A test of the focusing hypothesis for category judgment: an explanation using the mental-box model].

    PubMed

    Hatori, Tsuyoshi; Takemura, Kazuhisa; Fujii, Satoshi; Ideno, Takashi

    2011-06-01

    This paper presents a new model of category judgment. The model hypothesizes that, when more attention is focused on a category, the psychological range of the category gets narrower (category-focusing hypothesis). We explain this hypothesis by using the metaphor of a "mental-box" model: the more attention that is focused on a mental box (i.e., a category set), the smaller the size of the box becomes (i.e., a cardinal number of the category set). The hypothesis was tested in an experiment (N = 40), where the focus of attention on prescribed verbal categories was manipulated. The obtained data gave support to the hypothesis: category-focusing effects were found in three experimental tasks (regarding the category of "food", "height", and "income"). The validity of the hypothesis was discussed based on the results.

  12. A fast dynamic grid adaption scheme for meteorological flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiedler, B.H.; Trapp, R.J.

    1993-10-01

    The continuous dynamic grid adaption (CDGA) technique is applied to a compressible, three-dimensional model of a rising thermal. The computational cost, per grid point per time step, of using CDGA instead of a fixed, uniform Cartesian grid is about 53% of the total cost of the model with CDGA. The use of general curvilinear coordinates contributes 11.7% to this total, calculating and moving the grid 6.1%, and continually updating the transformation relations 20.7%. Costs due to calculations that involve the gridpoint velocities (as well as some substantial unexplained costs) contribute the remaining 14.5%. A simple way to limit the cost of calculating the grid is presented. The grid is adapted by solving an elliptic equation for gridpoint coordinates on a coarse grid and then interpolating the full finite-difference grid. In this application, the additional costs per grid point of CDGA are shown to be easily offset by the savings resulting from the reduction in the required number of grid points. In the simulation of the thermal, costs are reduced by a factor of 3 as compared with those of a companion model with a fixed, uniform Cartesian grid. 8 refs., 8 figs.

  13. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    NASA Astrophysics Data System (ADS)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodological approach of the simulation model is presented, and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures, and how the depreciation of economic costs can be determined when construction costs are taken into account. The developed simulations show that conventional grid expansion is more efficient and has a greater grid-relieving effect than the evaluated grid optimisation measures.

  14. Summertime Coincident Observations of Ice Water Path in the Visible/Near-IR, Radar, and Microwave Frequencies

    NASA Technical Reports Server (NTRS)

    Pittman, Jasna V.; Robertson, Franklin R.; Atkinson, Robert J.

    2008-01-01

    Accurate representation of the physical and radiative properties of clouds in climate models continues to be a challenge. At present, both remote sensing observations and modeling of the microphysical properties of clouds rely heavily on parameterizations or assumptions about particle size distribution (PSD) and cloud phase. In this study, we compare Ice Water Path (IWP), an important physical and radiative property that gives the amount of ice present in a cloud column, using measurements obtained via three different retrieval strategies. The datasets used in this study are Visible/Near-IR IWP from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument flying aboard the Aqua satellite, Radar-only IWP from the CloudSat instrument operating at 94 GHz, and NOAA/NESDIS operational IWP from the 89 and 157 GHz channels of the Microwave Humidity Sounder (MHS) instrument flying aboard the NOAA-18 satellite. In the Visible/Near-IR, IWP is derived from observations of optical thickness and effective radius. CloudSat IWP is determined from measurements of cloud backscatter and an assumed PSD. MHS IWP retrievals depend on scattering measurements at two different, non-water-absorbing channels, 89 and 157 GHz. In order to compare IWP obtained from these different techniques and collected at different vertical and horizontal resolutions, we examine summertime cases in the tropics (30°S-30°N) when all three satellites are within 4 minutes of each other (approximately 1500 km). All measurements are then gridded to a common 15 km x 15 km box determined by MHS. In a grid-box comparison, we find CloudSat to report the highest IWP, followed by MODIS, followed by MHS. In a statistical comparison, probability density distributions show MHS with the highest frequencies at IWP of 100-1000 g/m² and CloudSat with the longest tail, reporting IWP of several thousand g/m². For IWP greater than 30 g/m², MODIS is consistently higher than CloudSat; relative to MHS, it is higher at the lower IWPs but lower at the higher IWPs where the two overlap. Some of these differences can be attributed to limitations of the measuring techniques themselves, but some can result from the assumptions made in the algorithms that generate the IWP products. We investigate this issue by creating categories based on conditions such as cloud type, precipitation presence, underlying liquid water content, and surface type (land vs. ocean), and by comparing the performance of the IWP products under each condition.
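    A toy sketch of the common-grid averaging step (not the actual MODIS/CloudSat/MHS processing): point retrievals from two hypothetical sensors are box-averaged onto a 15 km x 15 km grid and then compared box by box. All values and the `grid_average` helper are illustrative.

```python
# Minimal regridding sketch (illustrative only): average point IWP retrievals from
# two sensors onto a common 15 km x 15 km grid and compare them box by box.
import numpy as np

def grid_average(x_km, y_km, values, box_km=15.0, nx=100, ny=100):
    """Mean of `values` in each box of an nx-by-ny grid of box_km-sized cells."""
    i = (x_km // box_km).astype(int)
    j = (y_km // box_km).astype(int)
    ok = (i >= 0) & (i < nx) & (j >= 0) & (j < ny)
    flat = i[ok] * ny + j[ok]
    total = np.bincount(flat, weights=values[ok], minlength=nx * ny)
    count = np.bincount(flat, minlength=nx * ny)
    with np.errstate(invalid="ignore"):
        return (total / count).reshape(nx, ny)       # NaN where a box has no samples

rng = np.random.default_rng(0)
x = rng.uniform(0, 1500, 20000)                      # along-track distance, km
y = rng.uniform(0, 1500, 20000)
iwp_a = rng.lognormal(mean=4.5, sigma=1.0, size=20000)            # hypothetical IWP, g/m^2
iwp_b = iwp_a * rng.lognormal(mean=0.2, sigma=0.3, size=20000)    # a biased second sensor

grid_a = grid_average(x, y, iwp_a)
grid_b = grid_average(x, y, iwp_b)
both = ~np.isnan(grid_a) & ~np.isnan(grid_b)
print(f"boxes compared: {both.sum()}, mean ratio B/A: {np.nanmean(grid_b[both] / grid_a[both]):.2f}")
```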

  15. Membrane potential dynamics of grid cells

    PubMed Central

    Domnisoru, Cristina; Kinkhabwala, Amina A.; Tank, David W.

    2014-01-01

    During navigation, grid cells increase their spike rates in firing fields arranged on a strikingly regular triangular lattice, while their spike timing is often modulated by theta oscillations. Oscillatory interference models of grid cells predict theta-amplitude modulations of membrane potential during firing field traversals, while competing attractor network models predict slow depolarizing ramps. Here, using in-vivo whole-cell recordings, we tested these models by directly measuring grid cell intracellular potentials in mice running along linear tracks in virtual reality. Grid cells had large and reproducible ramps of membrane potential depolarization that were tightly correlated with firing fields and constituted their characteristic signature. Grid cells also exhibited intracellular theta oscillations that influenced their spike timing. However, the properties of theta amplitude modulations were not consistent with the view that they determine firing field locations. Our results support cellular and network mechanisms in which grid fields are produced by slow ramps, as in attractor models, while theta oscillations control spike timing. PMID:23395984

  16. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    PubMed

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the day:night temperature contrast observed on the extrasolar planet HD 189733b.
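    The two-box MEP calculation itself is small enough to sketch directly. The example below uses illustrative absorbed-flux values (not the published numbers for any particular planet) and selects the meridional heat flux F that maximizes the entropy production rate F(1/T_pole − 1/T_equator), with each box balancing absorbed shortwave against σT⁴ emission.

```python
# Minimal two-box MEP sketch (illustrative parameters): each box balances absorbed
# shortwave against sigma*T^4 emission plus a meridional heat flux F; the MEP
# hypothesis selects the F that maximizes F*(1/T_pole - 1/T_equator).
import numpy as np

SIGMA = 5.67e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
S_eq, S_pole = 300.0, 160.0   # absorbed solar flux per unit area (hypothetical), W m^-2

def temperatures(F):
    """Equilibrium box temperatures for a given meridional heat flux F (W m^-2)."""
    T_eq = ((S_eq - F) / SIGMA) ** 0.25
    T_pole = ((S_pole + F) / SIGMA) ** 0.25
    return T_eq, T_pole

F_grid = np.linspace(0.0, 0.99 * (S_eq - S_pole) / 2.0, 2000)
T_eq, T_pole = temperatures(F_grid)
entropy_prod = F_grid * (1.0 / T_pole - 1.0 / T_eq)   # W m^-2 K^-1

best = np.argmax(entropy_prod)
print(f"MEP heat flux : {F_grid[best]:6.1f} W m^-2")
print(f"T_equator     : {T_eq[best]:6.1f} K")
print(f"T_pole        : {T_pole[best]:6.1f} K")
print(f"contrast      : {T_eq[best] - T_pole[best]:6.1f} K")
```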

  17. Models for nearly every occasion: Part I - One box models.

    PubMed

    Hewett, Paul; Ganser, Gary H

    2017-01-01

    The standard "well mixed room," "one box" model cannot be used to predict occupational exposures whenever the scenario involves the use of local controls. New "constant emission" one box models are proposed that permit either local exhaust or local exhaust with filtered return, coupled with general room ventilation or the recirculation of a portion of the general room exhaust. New "two box" models are presented in Part II of this series. Both steady state and transient models were developed. The steady state equation for each model, including the standard one box steady state model, is augmented with an additional factor reflecting the fraction of time the substance was generated during each task. This addition allows the easy calculation of the average exposure for cyclic and irregular emission patterns, provided the starting and ending concentrations are zero or near zero, or the cumulative time across all tasks is long (e.g., several tasks to a full shift). The new models introduce additional variables, such as the efficiency of the local exhaust in immediately capturing freshly generated contaminant and the filtration efficiency whenever filtered exhaust is returned to the workspace. Many of the model variables are knowable (e.g., room volume and ventilation rate). A structured procedure for calibrating a model to a work scenario is introduced that can be applied to both continuous and cyclic processes. The calibration procedure generates estimates of the generation rate and all of the remaining unknown model variables.
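    For orientation, here is a sketch of the standard well-mixed one-box model that the paper extends (the new local-exhaust and filtered-return variants are not reproduced here): V dC/dt = G − QC, with the task duty-cycle factor applied to the steady-state average as described above. All numbers are hypothetical.

```python
# Minimal sketch of the standard "well mixed room" one-box model (not the new
# local-exhaust variants proposed in the paper): V dC/dt = G - Q C, plus the
# duty-cycle adjustment of the steady-state average.
import numpy as np

def one_box_concentration(t_min, G_mg_min, Q_m3_min, V_m3, C0=0.0):
    """Transient concentration (mg/m^3) of the well-mixed one-box model."""
    C_ss = G_mg_min / Q_m3_min
    return C_ss + (C0 - C_ss) * np.exp(-Q_m3_min * t_min / V_m3)

G = 50.0        # generation rate, mg/min (hypothetical)
Q = 10.0        # general ventilation rate, m^3/min
V = 100.0       # room volume, m^3
f_task = 0.25   # fraction of the shift during which the substance is generated

t = np.linspace(0.0, 120.0, 7)
print("C(t), mg/m^3:", np.round(one_box_concentration(t, G, Q, V), 2))
print(f"steady state          : {G / Q:.2f} mg/m^3")
print(f"shift average (f=0.25): {f_task * G / Q:.2f} mg/m^3")
```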

  18. ATLAS - A new Lagrangian transport and mixing model with detailed stratospheric chemistry

    NASA Astrophysics Data System (ADS)

    Wohltmann, I.; Rex, M.; Lehmann, R.

    2009-04-01

    We present a new global Chemical Transport Model (CTM) with full stratospheric chemistry and Lagrangian transport and mixing called ATLAS. Lagrangian models have some crucial advantages over Eulerian grid-box based models, such as no numerical diffusion, no limitation of the model time step by the CFL criterion, conservation of mixing ratios by design, and easy parallelization of code. The transport module is based on a trajectory code developed at the Alfred Wegener Institute. The horizontal and vertical resolution, the vertical coordinate system (pressure, potential temperature, hybrid coordinate) and the time step of the model are flexible, so that the model can be used both for process studies and for long runs over several decades. Mixing of the Lagrangian air parcels is parameterized based on the local shear and strain of the flow, with a method similar to that used in the CLaMS model but with some modifications, such as a triangulation that introduces no vertical layers. The stratospheric chemistry module was developed at the Institute and includes 49 species and 170 reactions and a detailed treatment of heterogeneous chemistry on polar stratospheric clouds. We present an overview of the model architecture, the transport and mixing concept, and some validation results. Comparisons of model results with tracer data from ER-2 aircraft flights in the stratospheric polar vortex in 1999/2000, which are able to resolve fine tracer filaments, show that excellent agreement with observed tracer structures can be achieved with a suitable mixing parameterization.
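    The Lagrangian transport step can be illustrated with a toy trajectory integration (this is not the ATLAS/AWI trajectory code): parcels are advected with a fourth-order Runge-Kutta step through a prescribed, analytic solid-body-rotation wind that stands in for analysed winds.

```python
# Minimal Lagrangian-transport sketch: advect air parcels with an RK4 step in an
# analytic solid-body vortex, the kind of kinematics a trajectory model
# integrates from analysed wind fields.
import numpy as np

OMEGA = 2.0 * np.pi / (10.0 * 86400.0)   # one revolution per 10 days, s^-1

def wind(xy):
    """Horizontal wind (m/s) of a solid-body vortex centred at the origin."""
    x, y = xy[..., 0], xy[..., 1]
    return np.stack([-OMEGA * y, OMEGA * x], axis=-1)

def rk4_step(xy, dt):
    k1 = wind(xy)
    k2 = wind(xy + 0.5 * dt * k1)
    k3 = wind(xy + 0.5 * dt * k2)
    k4 = wind(xy + dt * k3)
    return xy + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
parcels = rng.uniform(-2.0e6, 2.0e6, size=(1000, 2))   # initial positions, m
dt = 1800.0                                            # 30-minute time step
for _ in range(int(5 * 86400 / dt)):                   # integrate for 5 days
    parcels = rk4_step(parcels, dt)

radii = np.hypot(parcels[:, 0], parcels[:, 1])
print(f"mean radius after 5 days: {radii.mean() / 1e6:.3f} x 10^6 m (conserved for solid-body flow)")
```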

  19. Evaluation of load flow and grid expansion in a unit-commitment and expansion optimization model (SciGRID International Conference on Power Grid Modelling)

    NASA Astrophysics Data System (ADS)

    Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven

    2018-02-01

    Energy system models serve as a basis for long-term system planning. Joint optimization of electricity generating technologies, storage systems and the electricity grid leads to lower total system cost compared to an approach in which the grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good trade-off between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load flow model to evaluate the validity, in terms of system reliability, of using a simple but faster transport model within the system optimization model. The main findings are that a higher regional resolution of a system leads to better results compared to an approach in which regions are clustered, as more overloads can be detected. An aggregation of lines between two model regions, compared to a line-sharp representation, has little influence on grid expansion within a system optimizer. In a DC load flow model, overloads can be detected in the line-sharp case, which is therefore preferred. Overall, the regions that need to reinforce the grid are identified within the system optimizer. Finally, the paper recommends using a load flow model to test the validity of the model results.
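    The difference between the two grid representations comes down to whether line flows are constrained by network physics. The sketch below solves a DC load flow B′θ = P on a hypothetical four-bus system and flags overloaded lines, the check a pure transport model cannot perform; per-unit base details are glossed over for brevity.

```python
# Minimal DC load-flow sketch (illustrative 4-bus system, not the paper's model):
# solve B'theta = P with a slack bus, then compute line flows
# f_ij = (theta_i - theta_j) / x_ij and flag overloads.
import numpy as np

# lines: (from bus, to bus, reactance x in p.u., thermal limit in MW)
lines = [(0, 1, 0.1, 120.0), (0, 2, 0.2, 80.0), (1, 2, 0.1, 100.0),
         (1, 3, 0.2, 80.0), (2, 3, 0.1, 120.0)]
P = np.array([150.0, -30.0, -40.0, -80.0])   # net injections per bus, MW (sum = 0)
n = len(P)

B = np.zeros((n, n))
for i, j, x, _ in lines:
    B[i, i] += 1.0 / x
    B[j, j] += 1.0 / x
    B[i, j] -= 1.0 / x
    B[j, i] -= 1.0 / x

slack = 0
keep = [k for k in range(n) if k != slack]
theta = np.zeros(n)
theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)], P[keep])   # angles relative to slack

for i, j, x, limit in lines:
    flow = (theta[i] - theta[j]) / x
    flag = "OVERLOAD" if abs(flow) > limit else "ok"
    print(f"line {i}-{j}: {flow:8.1f} MW (limit {limit:5.0f})  {flag}")
```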

  20. Grid-based mapping: A method for rapidly determining the spatial distributions of small features over very large areas

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason D.; Balme, Matthew R.; Conway, Susan J.; Gallagher, Colman; van Gasselt, Stephan A.; Hauber, Ernst; Orgel, Csilla; Séjourné, Antoine; Skinner, James A.; Costard, Francois; Johnsson, Andreas; Losiak, Anna; Reiss, Dennis; Swirad, Zuzanna M.; Kereszturi, Akos; Smith, Isaac B.; Platz, Thomas

    2017-06-01

    The increased volume, spatial resolution, and areal coverage of high-resolution images of Mars over the past 15 years have led to an increased quantity and variety of small-scale landform identifications. Though many such landforms are too small to represent individually on regional-scale maps, determining their presence or absence across large areas helps form the observational basis for developing hypotheses on the geological nature and environmental history of a study area. The combination of improved spatial resolution and near-continuous coverage significantly increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre and decametre-scale landforms. Here, we describe an approach for mapping small features (from decimetre to kilometre scale) across large areas, formulated for a project to study the northern plains of Mars, and provide context on how this method was developed and how it can be implemented. Rather than 'mapping' with points and polygons, grid-based mapping uses a 'tick box' approach to efficiently record the locations of specific landforms (we use an example suite of glacial landforms, including viscous flow features, the latitude-dependent mantle and polygonised ground). A grid of squares (e.g. 20 km by 20 km) is created over the mapping area. The basemap data are then systematically examined, grid-square by grid-square at full resolution, in order to identify the landforms, and the presence or absence of the selected landforms is recorded in each grid-square to determine spatial distributions. The result is a series of grids recording the distribution of all the mapped landforms across the study area. In some ways, these are equivalent to raster images, as they show a continuous distribution field of the various landforms across a defined (rectangular, in most cases) area. When overlain on context maps, these form a coarse, digital landform map. We find that grid-based mapping provides an efficient solution to the problems of mapping small landforms over large areas, by providing a consistent and standardised approach to spatial data collection. The simplicity of the grid-based mapping approach makes it extremely scalable and workable for group efforts, requiring minimal user experience and producing consistent and repeatable results. The discrete nature of the datasets, the simplicity of the approach, and the divisibility of tasks open up the possibility for citizen science, in which crowdsourcing large grid-based mapping areas could be applied.
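    The bookkeeping behind the tick-box approach is simple enough to sketch (illustrative data structure only, not the authors' GIS workflow): one boolean presence/absence layer per landform, indexed by grid square, which can then be rendered as a coarse raster of the study area.

```python
# Minimal sketch of grid-based "tick box" bookkeeping: record presence/absence
# of selected landforms per 20 km grid square, one boolean layer per landform.
import numpy as np

LANDFORMS = ["viscous_flow_features", "latitude_dependent_mantle", "polygonised_ground"]
NX, NY = 40, 25                     # 20 km squares covering an 800 km x 500 km strip

grids = {name: np.zeros((NY, NX), dtype=bool) for name in LANDFORMS}

def tick(landform, col, row):
    """Mark a landform as present in grid square (col, row)."""
    grids[landform][row, col] = True

# hypothetical observations made while panning through the basemap square by square
tick("polygonised_ground", 3, 10)
tick("polygonised_ground", 4, 10)
tick("viscous_flow_features", 12, 7)
tick("latitude_dependent_mantle", 12, 7)

for name, grid in grids.items():
    print(f"{name:28s}: present in {grid.sum():3d} of {grid.size} squares ({100 * grid.mean():.1f}%)")
```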

  1. JIGSAW-GEO (1.0): Locally Orthogonal Staggered Unstructured Grid Generation for General Circulation Modelling on the Sphere

    NASA Technical Reports Server (NTRS)

    Engwirda, Darren

    2017-01-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.

  2. JIGSAW-GEO (1.0): locally orthogonal staggered unstructured grid generation for general circulation modelling on the sphere

    NASA Astrophysics Data System (ADS)

    Engwirda, Darren

    2017-06-01

    An algorithm for the generation of non-uniform, locally orthogonal staggered unstructured spheroidal grids is described. This technique is designed to generate very high-quality staggered Voronoi-Delaunay meshes appropriate for general circulation modelling on the sphere, including applications to atmospheric simulation, ocean-modelling and numerical weather prediction. Using a recently developed Frontal-Delaunay refinement technique, a method for the construction of high-quality unstructured spheroidal Delaunay triangulations is introduced. A locally orthogonal polygonal grid, derived from the associated Voronoi diagram, is computed as the staggered dual. It is shown that use of the Frontal-Delaunay refinement technique allows for the generation of very high-quality unstructured triangulations, satisfying a priori bounds on element size and shape. Grid quality is further improved through the application of hill-climbing-type optimisation techniques. Overall, the algorithm is shown to produce grids with very high element quality and smooth grading characteristics, while imposing relatively low computational expense. A selection of uniform and non-uniform spheroidal grids appropriate for high-resolution, multi-scale general circulation modelling are presented. These grids are shown to satisfy the geometric constraints associated with contemporary unstructured C-grid-type finite-volume models, including the Model for Prediction Across Scales (MPAS-O). The use of user-defined mesh-spacing functions to generate smoothly graded, non-uniform grids for multi-resolution-type studies is discussed in detail.
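    The Delaunay-Voronoi duality that underlies this staggering can be illustrated with SciPy's spherical Voronoi routine. This is not the JIGSAW-GEO Frontal-Delaunay refinement; the generating points here are random rather than optimized, so the resulting grid is neither graded nor high quality, but it shows the dual-grid construction.

```python
# Minimal sketch of the Delaunay/Voronoi dual grid on the sphere (illustrative,
# not the JIGSAW-GEO algorithm): random generating points and SciPy's spherical
# Voronoi construction.
import numpy as np
from scipy.spatial import SphericalVoronoi

rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)     # generating points on the unit sphere

sv = SphericalVoronoi(pts, radius=1.0)
sv.sort_vertices_of_regions()

cell_sides = np.array([len(region) for region in sv.regions])
print(f"generating points (Delaunay vertices)    : {len(pts)}")
print(f"Voronoi vertices (triangle circumcentres): {len(sv.vertices)}")
print(f"mean polygon sides of the dual cells     : {cell_sides.mean():.2f}")  # ~6 on average
```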

  3. An Evaluation of Recently Developed RANS-Based Turbulence Models for Flow Over a Two-Dimensional Block Subjected to Different Mesh Structures and Grid Resolutions

    NASA Astrophysics Data System (ADS)

    Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando

    2016-04-01

    Understanding, analyzing and predicting meteorological phenomena related to urban planning and the built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents an idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence model, in order to assess grid independence. Three different grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain, and nx × nz = 26 × 32, 13 × 16 and 6 × 8 for the number of grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependence at the fine and medium grid resolutions, in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ɛ model at the fine resolution and on regular structured grids, a disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution. These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.

  4. NREL: International Activities - Country Programs

    Science.gov Websites

    NREL supports partner countries in the use of mini-grid quality assurance and design standards, advises on mini-grid business models, facilitates communities of practice and technical collaboration across countries on mini-grid development, modeling, and interconnection standards and procedures, and assists with strengthening mini-grids and energy access programs.

  5. Application Note: Power Grid Modeling With Xyce.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sholander, Peter E.

    This application note describes how to model steady-state power flows and transient events in electric power grids with the SPICE-compatible Xyce™ Parallel Electronic Simulator developed at Sandia National Labs. It provides a brief tutorial on the basic devices (branches, bus shunts, transformers and generators) found in power grids. The focus is on the features supported and assumptions made by the Xyce models for power grid elements. It then provides a detailed explanation, including working Xyce netlists, for simulating some simple power grid examples such as the IEEE 14-bus test case.

  6. PARADIGM USING JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY STOCHASTIC DESCRIPTION AS A TEMPLATE FOR MODEL EVALUATION

    EPA Science Inventory

    The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...

  7. Large Eddy Simulation of Wall-Bounded Turbulent Flows with the Lattice Boltzmann Method: Effect of Collision Model, SGS Model and Grid Resolution

    NASA Astrophysics Data System (ADS)

    Pradhan, Aniruddhe; Akhavan, Rayhaneh

    2017-11-01

    The effect of the collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to the Δ+ <= 2 required in DNS. At coarser grid resolutions SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall-Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no model, DSM and WALE were 1.7%, 2.6% and 3.1% with SRT, and 8.3%, 7.5% and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.

  8. PECHCV, PECHFV, PEFHCV and PEFHFV: A set of atmospheric, primitive equation forecast models for the Northern Hemisphere, volume 3

    NASA Technical Reports Server (NTRS)

    Wellck, R. E.; Pearce, M. L.

    1976-01-01

    As part of the SEASAT program of NASA, a set of four hemispheric, atmospheric prediction models were developed. The models, which use a polar stereographic grid in the horizontal and a sigma coordinate in the vertical, are: (1) PECHCV - five sigma layers and a 63 x 63 horizontal grid, (2) PECHFV - ten sigma layers and a 63 x 63 horizontal grid, (3) PEFHCV - five sigma layers and a 187 x 187 horizontal grid, and (4) PEFHFV - ten sigma layers and a 187 x 187 horizontal grid. The models and associated computer programs are described.

  9. Brane boxes, anomalies, bending, and tadpoles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leigh, R.G.; Rozali, M.

    1999-01-01

    Certain classes of chiral four-dimensional gauge theories may be obtained as the world-volume theories of D5-branes suspended between networks of NS5-branes, the so-called brane box models. In this paper, we derive the stringy consistency conditions placed on these models, and show that they are equivalent to anomaly cancellation in the gauge theories. We derive these conditions in the orbifold theories which are T-dual to the elliptic brane box models. Specifically, we show that the tadpoles for unphysical twisted Ramond-Ramond 4-form fields in the orbifold theory are proportional to the gauge anomalies of the brane box theory. Thus string consistency is equivalent to world-volume gauge anomaly cancellation. Furthermore, we find additional cylinder amplitudes which give the β functions of the gauge theory. We show how these correspond to bending of the NS-branes in the brane box theory.

  10. Quantifying parameter uncertainty in stochastic models using the Box Cox transformation

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Kuczera, George; Wang, Q. J.

    2002-08-01

    The Box-Cox transformation is widely used to transform hydrological data to make it approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm. The use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies with the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
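    A minimal sketch of Metropolis sampling with a Box-Cox-transformed likelihood, under stated simplifications: an i.i.d. Gaussian model in the transformed space replaces the paper's AR(1) process, the priors are ad hoc, and the data are synthetic. The Jacobian term (λ − 1) Σ log y is included, which is the ingredient that ties the transformation parameter to the likelihood.

```python
# Minimal sketch (not the paper's implementation): random-walk Metropolis over
# the Box-Cox lambda, a Gaussian mean and log-sigma, with the Box-Cox Jacobian.
import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def log_posterior(theta, y):
    lam, mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    z = boxcox(y, lam)
    loglik = -0.5 * np.sum(((z - mu) / sigma) ** 2) - len(y) * np.log(sigma)
    jacobian = (lam - 1.0) * np.sum(np.log(y))          # d z / d y contribution
    logprior = -0.5 * (lam**2) / 4.0                    # weak N(0, 2^2) prior on lambda
    return loglik + jacobian + logprior

def metropolis(y, n_iter=20000, step=(0.05, 0.05, 0.05), seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, np.mean(boxcox(y, 1.0)), np.log(np.std(y))])
    lp = log_posterior(theta, y)
    samples = np.empty((n_iter, 3))
    for k in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_posterior(prop, y)
        if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
            theta, lp = prop, lp_prop
        samples[k] = theta
    return samples[n_iter // 2:]                        # discard burn-in

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.5, size=500)        # skewed synthetic "rainfall"
post = metropolis(y)
print("posterior mean lambda:", round(post[:, 0].mean(), 3))   # near 0 for lognormal data
```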

  11. Stability assessment of a multi-port power electronic interface for hybrid micro-grid applications

    NASA Astrophysics Data System (ADS)

    Shamsi, Pourya

    Migration to an industrial society increases the demand for electrical energy. Meanwhile, societal pressure to preserve the environment and reduce pollution favors cleaner forms of energy. Therefore, there has been a growth in distributed generation from renewable sources in the past decade. Existing regulations and power system coordination do not allow for massive integration of distributed generation throughout the grid. Moreover, the current infrastructure is not designed for interfacing distributed and deregulated generation. In order to remedy this problem, a hybrid micro-grid based on nano-grids is introduced. This system consists of a reliable micro-grid structure that provides a smooth transition from the current distribution networks to smart micro-grid systems. Multi-port power electronic interfaces are introduced to manage the local generation, storage, and consumption. Afterwards, a model for this micro-grid is derived. Using this model, the stability of the system under a variety of source- and load-induced disturbances is studied. Moreover, a pole-zero study of the micro-grid is performed under various loading conditions. An experimental setup of this micro-grid is developed, and the validity of the model in emulating the dynamic behavior of the system is verified. This study provides a theory for a novel hybrid micro-grid as well as models for stability assessment of the proposed micro-grid.

  12. Demographic consequences of nest box use for Red-footed Falcons Falco vespertinus in Central Asia

    USGS Publications Warehouse

    Bragin, Evgeny A.; Bragin, Alexander E.; Katzner, Todd

    2017-01-01

    Nest box programs are frequently implemented for the conservation of cavity-nesting birds, but their effectiveness is rarely evaluated in comparison to birds not using nest boxes. In the European Palearctic, Red-footed Falcon Falco vespertinus populations are both of high conservation concern and strongly associated with nest box programs in heavily managed landscapes. We used a 21-year monitoring dataset collected on 753 nesting attempts by Red-footed Falcons in unmanaged natural or semi-natural habitats to provide basic information on this poorly known species; to evaluate long-term demographic trends; and to evaluate the response of demographic parameters of Red-footed Falcons to environmental factors including use of nest boxes. We observed significant differences among years in laying date, offspring loss, and numbers of fledglings produced, but not in egg production. Of these four parameters, offspring loss and, to a lesser extent, number of fledglings exhibited directional trends over time. Variation in laying date and in numbers of eggs was not well explained by any one model, but instead by combinations of models, each with informative terms for nest type. Nevertheless, laying in nest boxes occurred 2.10 ± 0.70 days earlier than in natural nests. In contrast, variation in both offspring loss and numbers of fledglings produced was fairly well explained by a single model including terms for nest type, nest location, and an interaction between the two parameters (65% and 81% model weights respectively), with highest offspring loss in nest boxes on forest edges. Because, for other species, earlier laying dates are associated with fitter individuals, this interaction highlighted a possible ecological trap, whereby birds using nest boxes on forest edges lay eggs earlier but suffer greater offspring loss and produce lower numbers of fledglings than do those in other nesting settings. If nest boxes increase offspring loss for Red-footed Falcons in heavily managed landscapes where populations are at greater risk, or for the many other species of rare or endangered birds supported by nest box programs, these processes could have important demographic and conservation consequences.

  13. Structure of an E3:E2~Ub Complex Reveals an Allosteric Mechanism Shared among RING/U-box Ligases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pruneda, Jonathan N.; Littlefield, Peter J.; Soss, Sarah E.

    2012-09-28

    Despite the widespread importance of RING/U-box E3 ubiquitin ligases in ubiquitin (Ub) signaling, the mechanism by which this class of enzymes facilitates Ub transfer remains enigmatic. Here, we present a structural model for a RING/U-box E3:E2~Ub complex poised for Ub transfer. The model and additional analyses reveal that E3 binding biases dynamic E2~Ub ensembles toward closed conformations with enhanced reactivity for substrate lysines. We identify a key hydrogen bond between a highly conserved E3 side chain and an E2 backbone carbonyl, observed in all structures of active RING/U-box E3/E2 pairs, as the linchpin for allosteric activation of E2~Ub. The conformational biasing mechanism is generalizable across diverse E2s and RING/U-box E3s, but is not shared by HECT-type E3s. The results provide a structural model for a RING/U-box E3:E2~Ub ligase complex and identify the long sought-after source of allostery for RING/U-box activation of E2~Ub conjugates.

  14. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derive optimal staggered-grid finite-difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with large grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of the FD schemes are derived based on plane-wave theory, and the FD coefficients are then obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrate that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
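    As a reminder of how Taylor-expansion FD coefficients arise, the sketch below solves the standard moment conditions for uniform staggered-grid first-derivative stencils of order 2M. These are the textbook weights, not the paper's optimized variable grid-spacing and time-step operators.

```python
# Minimal sketch: standard staggered-grid FD coefficients from Taylor-expansion
# moment conditions, i.e. f'(x) ~ (1/h) * sum_m c_m [f(x+(2m-1)h/2) - f(x-(2m-1)h/2)].
import numpy as np

def staggered_fd_coefficients(M):
    a = (2 * np.arange(1, M + 1) - 1) / 2.0                     # half-offsets (2m-1)/2
    A = np.array([a ** (2 * k - 1) for k in range(1, M + 1)])   # odd-power moment matrix
    b = np.zeros(M)
    b[0] = 0.5                               # match f'; cancel higher odd-order error terms
    return np.linalg.solve(A, b)

for M in (1, 2, 3):
    c = staggered_fd_coefficients(M)
    print(f"M = {M} (order {2 * M}):", np.round(c, 6))
# M = 2 recovers the familiar 4th-order weights 9/8 and -1/24.
```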

  15. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Gujarat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Gujarat is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  16. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Tamil Nadu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Tamil Nadu is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  17. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Rajasthan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Rajasthan is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  18. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Andhra Pradesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Andhra Pradesh is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  19. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Karnataka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Karnataka is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  20. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Maharashtra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Maharashtra is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. The higher spatial resolution of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration than 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  1. The construction of a Central Netherlands temperature

    NASA Astrophysics Data System (ADS)

    van der Schrier, G.; van Ulden, A.; van Oldenborgh, G. J.

    2011-05-01

    The Central Netherlands Temperature (CNT) is a monthly series of daily mean temperatures constructed from homogenized time series from the centre of the Netherlands. The purpose of this series is to offer a homogeneous time series representative of a larger area in order to study large-scale temperature changes. It will also facilitate comparison with climate models, which resolve similar scales. From 1906 onwards, temperature measurements in the Netherlands have been sufficiently standardized to construct a high-quality series. Long time series have been constructed by merging nearby stations and using the overlap to calibrate the differences. These long time series, and a few time series of only a few decades in length, have been subjected to a homogeneity analysis in which significant breaks and artificial trends have been corrected. Many of the detected breaks correspond to changes in the observations that are documented in the station metadata. This version of the CNT, to which we attach the version number 1.1, is constructed as the unweighted average of four stations (De Bilt, Winterswijk/Hupsel, Oudenbosch/Gilze-Rijen and Gemert/Volkel), with the stations Eindhoven and Deelen added from 1951 and 1958 onwards, respectively. The global gridded datasets used for detecting and attributing climate change are based on raw observational data. Although some homogeneity adjustments are made, these are not based on knowledge of local circumstances but only on statistical evidence. Despite this handicap, and the fact that these datasets use grid boxes that are far larger than the area associated with the Central Netherlands Temperature, the temperature interpolated to the CNT region shows a warming trend that is broadly consistent with the CNT trend in all of these datasets. The actual trends differ from the CNT trend by up to 30%, which highlights the need to base future global gridded temperature datasets on homogenized time series.

  2. Grid computing enhances standards-compatible geospatial catalogue service

    NASA Astrophysics Data System (ADS)

    Chen, Aijun; Di, Liping; Bai, Yuqi; Wei, Yaxing; Liu, Yang

    2010-04-01

    A catalogue service facilitates sharing, discovery, retrieval, management of, and access to large volumes of distributed geospatial resources, for example data, services, applications, and their replicas on the Internet. Grid computing provides an infrastructure for effective use of computing, storage, and other resources available online. The Open Geospatial Consortium has proposed a catalogue service specification and a series of profiles for promoting the interoperability of geospatial resources. By referring to the profile of the catalogue service for Web, an innovative information model of a catalogue service is proposed to offer Grid-enabled registry, management, retrieval of and access to geospatial resources and their replicas. This information model extends the e-business registry information model by adopting several geospatial data and service metadata standards—the International Organization for Standardization (ISO)'s 19115/19119 standards and the US Federal Geographic Data Committee (FGDC) and US National Aeronautics and Space Administration (NASA) metadata standards for describing and indexing geospatial resources. In order to select the optimal geospatial resources and their replicas managed by the Grid, the Grid data management service and information service from the Globus Toolkits are closely integrated with the extended catalogue information model. Based on this new model, a catalogue service is implemented first as a Web service. Then, the catalogue service is further developed as a Grid service conforming to Grid service specifications. The catalogue service can be deployed in both the Web and Grid environments and accessed by standard Web services or authorized Grid services, respectively. The catalogue service has been implemented at the George Mason University/Center for Spatial Information Science and Systems (GMU/CSISS), managing more than 17 TB of geospatial data and geospatial Grid services. This service makes it easy to share and interoperate geospatial resources by using Grid technology and extends Grid technology into the geoscience communities.

  3. Preliminary analysis on hybrid Box-Jenkins - GARCH modeling in forecasting gold price

    NASA Astrophysics Data System (ADS)

    Yaziz, Siti Roslindar; Azizan, Noor Azlinna; Ahmad, Maizah Hura; Zakaria, Roslinazairimah; Agrawal, Manju; Boland, John

    2015-02-01

    Gold has long been regarded as a valuable precious metal and one of the most popular commodities for investments seeking healthy returns. Hence, the analysis and prediction of the gold price are of considerable significance to investors. This study is a preliminary analysis of the gold price and its volatility that focuses on the performance of hybrid Box-Jenkins models combined with GARCH in analyzing and forecasting the gold price. The Box-Cox formula is used as the data transformation method because of its potential for normalizing data, stabilizing variance and reducing heteroscedasticity, applied to a 41-year daily gold price series starting on 2 January 1973. Our study indicates that the proposed hybrid ARIMA-GARCH model with t-innovations can be a new and potentially useful approach to forecasting the gold price. This finding demonstrates the strength of GARCH in handling volatility in the gold price and overcomes the non-linearity limitation of Box-Jenkins modeling.
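
    A minimal sketch of the hybrid workflow described above, assuming the scipy, statsmodels and arch packages are available; the synthetic price series, ARIMA order and GARCH order are illustrative placeholders, not the study's settings:

      import numpy as np
      from scipy import stats
      from statsmodels.tsa.arima.model import ARIMA
      from arch import arch_model

      # Synthetic daily price series standing in for the 41-year gold price data
      rng = np.random.default_rng(1)
      price = 100.0 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 5000)))

      # 1. Box-Cox transformation to stabilize variance / reduce heteroscedasticity
      y, lam = stats.boxcox(price)

      # 2. Box-Jenkins (ARIMA) model for the conditional mean of the transformed series
      mean_fit = ARIMA(y, order=(1, 1, 1)).fit()

      # 3. GARCH(1,1) with Student-t innovations fitted to the (rescaled) ARIMA residuals
      resid = 100.0 * mean_fit.resid        # rescaled for optimizer stability
      vol_fit = arch_model(resid, vol="GARCH", p=1, q=1, dist="t").fit(disp="off")

      print(f"Box-Cox lambda = {lam:.3f}")
      print(vol_fit.params)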

  4. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is carried to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy similar to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered-grid solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
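
    A minimal sketch of the grid-layout idea (not the authors' code): the multi-resolution grid is represented as a vertical stack of Cartesian sub-grids whose horizontal spacing doubles in each deeper block; all sizes below are illustrative assumptions:

      from dataclasses import dataclass, field

      @dataclass
      class SubGrid:
          nx: int            # horizontal cells in x
          ny: int            # horizontal cells in y
          dx: float          # horizontal spacing (m)
          dz: list = field(default_factory=list)   # vertical cell thicknesses (m)

      def build_multires_grid(nx0=128, ny0=128, dx0=250.0, dz0=50.0,
                              layers_per_block=5, n_blocks=4, stretch=1.5):
          """Each deeper block halves the horizontal resolution and stretches dz."""
          grids, dz = [], dz0
          for b in range(n_blocks):
              factor = 2 ** b
              g = SubGrid(nx0 // factor, ny0 // factor, dx0 * factor)
              for _ in range(layers_per_block):
                  g.dz.append(dz)
                  dz *= stretch
              grids.append(g)
          return grids

      for g in build_multires_grid():
          print(f"{g.nx} x {g.ny} cells, dx = {g.dx:.0f} m, {len(g.dz)} layers")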

  5. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  6. HPC Aspects of Variable-Resolution Global Climate Modeling using a Multi-scale Convection Parameterization

    EPA Science Inventory

    High performance computing (HPC) requirements for the new generation of variable grid resolution (VGR) global climate models differ from those of traditional global models. A VGR global model with 15 km grids over the CONUS stretching to 60 km grids elsewhere will have about ~2.5 tim...

  7. Control and protection of outdoor embedded camera for astronomy

    NASA Astrophysics Data System (ADS)

    Rigaud, F.; Jegouzo, I.; Gaudemard, J.; Vaubaillon, J.

    2012-09-01

    The purpose of the CABERNET - PODET-Met (CAmera BEtter Resolution NETwork, Pole sur la Dynamique de l'Environnement Terrestre - Meteor) project is the automated observation, by triangulation with three cameras, of meteor showers in order to compute meteoroid trajectories and velocities. The scientific goal is to identify the parent body, comet or asteroid, of each observed meteor. Installing outdoor cameras that perform astronomical measurements for several years with high reliability requires a very specific design for the camera box. For these cameras, this contribution shows how we fulfilled the various functions of their boxes, such as cooling of the CCD, heating to melt snow and ice, and protection against moisture, lightning and sunlight. We present the principal and secondary functions, the product breakdown structure, the evaluation grid of criteria for the technical solutions, the adopted technology products and their implementation in multifunction subsets for miniaturization purposes. To manage this project, we aimed for the lowest manpower and development time for every part. In the appendix, we present measurements of the image quality evolution during CCD cooling, and some pictures of the prototype.

  8. Parametrisation of initial conditions for seasonal stream flow forecasting in the Swiss Rhine basin

    NASA Astrophysics Data System (ADS)

    Schick, Simon; Rössler, Ole; Weingartner, Rolf

    2016-04-01

    Current climate forecast models show - to the best of our knowledge - low skill in forecasting climate variability in Central Europe at seasonal lead times. When it comes to seasonal stream flow forecasting, initial conditions thus play an important role. Here, initial conditions refer to the catchment's moisture state at the date of forecast, i.e. snow depth, stream flow and lake level, soil moisture content, and groundwater level. The parametrisation of these initial conditions can take place at various spatial and temporal scales. Examples are the grid size of a distributed model or the time aggregation of predictors in statistical models. Therefore, the present study aims to investigate the extent to which the parametrisation of initial conditions at different spatial scales leads to differences in forecast errors. To do so, we conduct a forecast experiment for the Swiss Rhine at Basel, which covers parts of Germany, Austria, and Switzerland and is bounded to the south by the Alps. Seasonal mean stream flow is defined for time aggregations of 30, 60, and 90 days and forecasted at 24 dates within the calendar year, i.e. at the 1st and 16th day of each month. A regression model is employed because of the various anthropogenic influences on the basin's hydrology, which often are not quantifiable but might be captured by a simple black-box model. Furthermore, the pool of candidate predictors consists of antecedent temperature, precipitation, and stream flow only. This pragmatic approach reflects the fact that observations of variables relevant for hydrological storages are either scarce in space or time (soil moisture, groundwater level), restricted to certain seasons (snow depth), or restricted to certain regions (lake levels, snow depth). For a systematic evaluation, we therefore focus on the comprehensive archives of meteorological observations and reanalyses to estimate the initial conditions via climate variability prior to the date of forecast. The experiment itself is based on four different approaches, whose differences in model skill were estimated within a rigorous cross-validation framework for the period 1982-2013: (1) The predictands are regressed on antecedent temperature, precipitation, and stream flow; here, temperature and precipitation are basin averages derived from the E-OBS gridded data set. (2) As in (1), but temperature and precipitation are used at the E-OBS grid scale (0.25 degree in longitude and latitude) without spatial averaging. (3) As in (1), but the regression model is applied to 66 gauged subcatchments of the Rhine basin; forecasts for these subcatchments are then summed and upscaled to the area of the Rhine basin. (4) As in (3), but the forecasts at the subcatchment scale are additionally weighted in terms of the hydrological representativeness of the corresponding subcatchment.
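
    A minimal sketch of approach (1) under stated assumptions (synthetic data, scikit-learn for the regression and cross-validation; the real study's predictor construction and skill metrics differ):

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      # Synthetic stand-in data: one row per forecast date over 1982-2013 (24 dates/year);
      # columns = antecedent basin-average temperature, precipitation and stream flow.
      rng = np.random.default_rng(42)
      n = 32 * 24
      X = rng.normal(size=(n, 3))
      y = 1.5 * X[:, 2] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)   # 30-day mean flow

      scores = cross_val_score(LinearRegression(), X, y, cv=10,
                               scoring="neg_mean_squared_error")
      print("cross-validated RMSE:", np.sqrt(-scores.mean()))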

  9. Grid convergence errors in hemodynamic solution of patient-specific cerebral aneurysms.

    PubMed

    Hodis, Simona; Uthamaraj, Susheil; Smith, Andrea L; Dennis, Kendall D; Kallmes, David F; Dragomir-Daescu, Dan

    2012-11-15

    Computational fluid dynamics (CFD) has become a cutting-edge tool for investigating hemodynamic dysfunctions in the body. It has the potential to help physicians quantify in more detail phenomena that are difficult to capture with in vivo imaging techniques. CFD simulations in anatomically realistic geometries pose challenges in generating accurate solutions due to the grid distortion that may occur when the grid is aligned with complex geometries. In addition, results obtained with computational methods should be trusted only after the solution has been verified on multiple high-quality grids. The objective of this study was to present a comprehensive solution verification of the intra-aneurysmal flow results obtained on different morphologies of patient-specific cerebral aneurysms. We chose five patient-specific brain aneurysm models with different dome morphologies and estimated the grid convergence errors for each model. The grid convergence errors were estimated with respect to an extrapolated solution based on the Richardson extrapolation method, which accounts for the degree of grid refinement. For four of the five models, calculated velocity, pressure, and wall shear stress values at six different spatial locations converged monotonically, with maximum uncertainty magnitudes ranging from 12% to 16% on the finest grids. Due to the geometric complexity of the fifth model, the grid convergence errors showed oscillatory behavior; therefore, each patient-specific model required its own grid convergence study to establish the accuracy of the analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
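
    A minimal sketch of Richardson extrapolation and a grid convergence index (GCI) on three systematically refined grids; the sample values and the safety factor are illustrative and not taken from the study:

      import numpy as np

      def richardson(f_coarse, f_medium, f_fine, r):
          """Estimate the observed order of convergence p and the extrapolated value
          from three grids refined by a constant ratio r (monotone convergence assumed)."""
          p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
          f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
          # Grid convergence index on the fine grid, with a safety factor of 1.25
          gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0)
          return p, f_exact, gci_fine

      # Hypothetical wall-shear-stress values (Pa) at one location on three grids
      p, f_exact, gci = richardson(f_coarse=4.10, f_medium=3.65, f_fine=3.48, r=2.0)
      print(f"order p = {p:.2f}, extrapolated value = {f_exact:.2f} Pa, GCI = {100*gci:.1f}%")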

  10. Examples of grid generation with implicitly specified surfaces using GridPro (TM)/az3000. 1: Filleted multi-tube configurations

    NASA Technical Reports Server (NTRS)

    Cheng, Zheming; Eiseman, Peter R.

    1995-01-01

    With examples, we illustrate how implicitly specified surfaces can be used for grid generation with GridPro/az3000. The particular examples address two questions: (1) How do you model intersecting tubes with fillets? and (2) How do you generate grids inside the intersected tubes? The implication is much more general. With the results in a forthcoming paper which develops an easy-to-follow procedure for implicit surface modeling, we provide a powerful means for rapid prototyping in grid generation.

  11. A comparative study of turbulence models for overset grids

    NASA Technical Reports Server (NTRS)

    Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.

    1992-01-01

    The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.

  12. Impact of Considering 110 kV Grid Structures on the Congestion Management in the German Transmission Grid

    NASA Astrophysics Data System (ADS)

    Hoffrichter, André; Barrios, Hans; Massmann, Janek; Venkataramanachar, Bhavasagar; Schnettler, Armin

    2018-02-01

    The structural changes in the European energy system lead to an increase of renewable energy sources that are primarily connected to the distribution grid. Hence the stationary analysis of the transmission grid and the regionalization of generation capacities are strongly influenced by subordinate grid structures. To quantify the impact on the congestion management in the German transmission grid, a 110 kV grid model is derived using publicly available data delivered by Open Street Map and integrated into an existing model of the European transmission grid. Power flow and redispatch simulations are performed for three different regionalization methods and grid configurations. The results show a significant impact of the 110 kV system and prove an overestimation of power flows in the transmission grid when neglecting subordinate grids. Thus, the redispatch volume in Germany to dissolve bottlenecks in case of N-1 contingencies decreases by 38 % when considering the 110 kV grid.

  13. Improved method for calibration of exchange flows for a physical transport box model of Tampa Bay, FL USA

    EPA Science Inventory

    Results for both sequential and simultaneous calibration of exchange flows between segments of a 10-box, one-dimensional, well-mixed, bifurcated tidal mixing model for Tampa Bay are reported. Calibrations were conducted for three model options with different mathematical expressi...

  14. Time Series ARIMA Models of Undergraduate Grade Point Average.

    ERIC Educational Resources Information Center

    Rogers, Bruce G.

    The Auto-Regressive Integrated Moving Average (ARIMA) Models, often referred to as Box-Jenkins models, are regression methods for analyzing sequential dependent observations with large amounts of data. The Box-Jenkins approach, a three-stage procedure consisting of identification, estimation and diagnosis, was used to select the most appropriate…

  15. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wei-Cheng; Yang, Zhaoqing; Wang, Taiping

    This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization using the unstructured-grid SWAN model coupled with a nested-grid WWIII model. The flexibility of models of various spatial resolutions and the effects of open-boundary conditions simulated by a nested-grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured-grid modeling approach for flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the model skill of the ST2 physics package for predicting wave power density for large waves, which is important for wave resource assessment, device load calculation, and risk management. In addition, bivariate distributions show the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than that with the ST2 physics package. This study demonstrated that the unstructured-grid wave modeling approach, driven by the nested-grid regional WWIII outputs with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10^2 km).

  16. LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods

    NASA Technical Reports Server (NTRS)

    Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.

    2017-01-01

    One of the earliest approaches in gain-scheduling control is the gridding-based approach, in which a set of local linear time-invariant models are obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since the flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of the s-shifted H2- and H-infinity norms are introduced and used as a metric to measure the model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric for aligning all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. Therefore, a reduced-order model is constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purposes, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.

  17. MODFLOW-LGR-Modifications to the streamflow-routing package (SFR2) to route streamflow through locally refined grids

    USGS Publications Warehouse

    Mehl, Steffen W.; Hill, Mary C.

    2011-01-01

    This report documents modifications to the Streamflow-Routing Package (SFR2) to route streamflow through grids constructed using the multiple-refined-areas capability of shared node Local Grid Refinement (LGR) of MODFLOW-2005. MODFLOW-2005 is the U.S. Geological Survey modular, three-dimensional, finite-difference groundwater-flow model. LGR provides the capability to simulate groundwater flow by using one or more block-shaped, higher resolution local grids (child model) within a coarser grid (parent model). LGR accomplishes this by iteratively coupling separate MODFLOW-2005 models such that heads and fluxes are balanced across the shared interfacing boundaries. Compatibility with SFR2 allows for streamflow routing across grids. LGR can be used in two- and three-dimensional, steady-state and transient simulations and for simulations of confined and unconfined groundwater systems.

  18. RCS of fundamental scatterers in the HF band by wire-grid modelling

    NASA Astrophysics Data System (ADS)

    Trueman, C. W.; Kubina, S. J.

    To extract the maximum information from the return of a radar target such as an aircraft, the target's scattering properties must be well known. Wire grid modeling allows a detailed representation of the surface of a complex scatterer such as an aircraft, in the frequency range where the aircraft size is comparable to a wavelength. A moment method analysis determines the currents on the wires of the grid including the interactions between all parts of the structure. Wire grid models of fundamental scatterers (plates, strips, cubes, and spheres) of sizes comparable to the wavelength in the 2-30 MHz range are analyzed. The study of the radar cross section (RCS) of wire grids in comparison with measured RCS data helps to establish guidelines for building wire grid models, specifying such parameters as where to locate wires, how short the segments must be, and what radius to use. The guidelines so developed can then be applied to build wire grid models of much more complex bodies such as aircraft with much greater confidence.

  19. Semantic web data warehousing for caGrid.

    PubMed

    McCusker, James P; Phillips, Joshua A; González Beltrán, Alejandra; Finkelstein, Anthony; Krauthammer, Michael

    2009-10-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges.

  20. HAM2D: 2D Shearing Box Model

    NASA Astrophysics Data System (ADS)

    Gammie, Charles F.; Guan, Xiaoyue

    2012-10-01

    HAM solves non-relativistic hyperbolic partial differential equations in conservative form using high-resolution shock-capturing techniques. This version of HAM has been configured to solve the magnetohydrodynamic equations of motion in axisymmetry to evolve a shearing box model.

  1. An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping

    NASA Astrophysics Data System (ADS)

    Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare

    2017-04-01

    Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in the areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid was tested by comparing it to uniform 1 km and 5 km grids. These represent an accurate and computationally efficient grid respectively. For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August, which represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine Systems, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.
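
    A minimal sketch of the adaptive-grid idea (not the authors' implementation): the source-cell size is allowed to grow with distance from the receiver, subject to a tolerance on how much propagation loss may vary across one cell; the spreading law, tolerance and candidate cell sizes below are illustrative assumptions:

      import numpy as np

      def cell_size(distance_km, sizes_km=(1, 2, 5), max_loss_change_db=0.5):
          """Largest cell across which spherical-spreading loss (20 log10 r) changes
          by less than max_loss_change_db."""
          for s in sorted(sizes_km, reverse=True):
              near = max(distance_km - s / 2.0, 0.05)
              far = distance_km + s / 2.0
              if 20.0 * np.log10(far / near) <= max_loss_change_db:
                  return s
          return min(sizes_km)

      # Count model executions on a 160 km x 160 km domain with the receiver at the centre
      x = np.arange(-80.0, 80.0, 1.0) + 0.5
      xx, yy = np.meshgrid(x, x)
      sizes = np.vectorize(cell_size)(np.hypot(xx, yy))
      adaptive_runs = np.sum(1.0 / sizes**2)        # each s-km cell spans s*s 1-km cells
      print(f"adaptive grid: ~{adaptive_runs:.0f} runs; uniform 1 km grid: {160*160} runs")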

  2. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol lowering drugs

    PubMed Central

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin

    2013-01-01

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436

  3. Bayesian inference for multivariate meta-analysis Box-Cox transformation models for individual patient data with applications to evaluation of cholesterol-lowering drugs.

    PubMed

    Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G; Shah, Arvind K; Lin, Jianxin

    2013-10-15

    In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the deviance information criterion is used to select the best transformation model. Because the model is quite complex, we develop a novel Monte Carlo Markov chain sampling scheme to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol-lowering drugs where the goal is to jointly model the three-dimensional response consisting of low density lipoprotein cholesterol (LDL-C), high density lipoprotein cholesterol (HDL-C), and triglycerides (TG) (LDL-C, HDL-C, TG). Because the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately; however, a multivariate approach would be more appropriate because these variables are correlated with each other. We carry out a detailed analysis of these data by using the proposed methodology. Copyright © 2013 John Wiley & Sons, Ltd.

  4. The Mass-loss Return from Evolved Stars to the Large Magellanic Cloud. IV. Construction and Validation of a Grid of Models for Oxygen-rich AGB Stars, Red Supergiants, and Extreme AGB Stars

    NASA Astrophysics Data System (ADS)

    Sargent, Benjamin A.; Srinivasan, S.; Meixner, M.

    2011-02-01

    To measure the mass loss from dusty oxygen-rich (O-rich) evolved stars in the Large Magellanic Cloud (LMC), we have constructed a grid of models of spherically symmetric dust shells around stars with constant mass-loss rates using 2Dust. These models will constitute the O-rich model part of the "Grid of Red supergiant and Asymptotic giant branch star ModelS" (GRAMS). This model grid explores four parameters: stellar effective temperature from 2100 K to 4700 K; luminosity from 10^3 to 10^6 L_sun; dust shell inner radii of 3, 7, 11, and 15 R_star; and 10.0 μm optical depth from 10^-4 to 26. From an initial grid of ~1200 2Dust models, we create a larger grid of ~69,000 models by scaling to cover the luminosity range required by the data. These models are available online to the public. The matching in color-magnitude diagrams and color-color diagrams to observed O-rich asymptotic giant branch (AGB) and red supergiant (RSG) candidate stars from the SAGE and SAGE-Spec LMC samples and a small sample of OH/IR stars is generally very good. The extreme AGB star candidates from SAGE are more consistent with carbon-rich (C-rich) than O-rich dust composition. Our model grid suggests lower limits to the mid-infrared colors of the dustiest AGB stars for which the chemistry could be O-rich. Finally, the fitting of GRAMS models to spectral energy distributions of sources fit by other studies provides additional verification of our grid and anticipates future, more expansive efforts.

  5. Rosin-Rammler Distributions in ANSYS Fluent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunham, Ryan Q.

    In Health Physics monitoring, particles need to be collected and tracked. One method is to predict the motion of potential health hazards with computer models. Particles released from various sources within a glove box can become a respirable health hazard if released into the area surrounding a glove box. The goal of modeling the aerosols in a glove box is to reduce the hazards associated with a leak in the glove box system. ANSYS Fluent provides a number of tools for modeling this type of environment. Particles can be released using injections into the flow path with turbulent properties. The models of particle tracks can then be used to predict paths and concentrations of particles within the flow. An attempt to understand and predict the handling of data by Fluent was made, and results were iteratively tracked. Trends in the data were studied to comprehend the final results. The purpose of the study was to allow a better understanding of the operation of Fluent for aerosol modeling for future application in many fields.

  6. Molecular envelope and atomic model of an anti-terminated glyQS T-box regulator in complex with tRNA(Gly)

    PubMed Central

    Chetnani, Bhaskar

    2017-01-01

    A T-box regulator or riboswitch actively monitors the levels of charged/uncharged tRNA and participates in amino acid homeostasis by regulating genes involved in their utilization or biosynthesis. It has an aptamer domain for cognate tRNA recognition and an expression platform to sense the charge state and modulate gene expression. These two conserved domains are connected by a variable linker that harbors additional secondary structural elements, such as Stem III. The structural basis for specific tRNA binding is known, but the structural basis for charge sensing and the role of other elements remain elusive. To gain new structural insights on the T-box mechanism, a molecular envelope was calculated from small angle X-ray scattering data for the Bacillus subtilis glyQS T-box riboswitch in complex with an uncharged tRNA(Gly). A structural model of an anti-terminated glyQS T-box in complex with its cognate tRNA(Gly) was derived based on the molecular envelope. It shows the location and relative orientation of various secondary structural elements. The model was validated by comparing the envelopes of the wild-type complex and two variants. The structural model suggests that in addition to a possible regulatory role, Stem III could aid in preferential stabilization of the T-box anti-terminated state, allowing read-through of regulated genes. PMID:28531275

  7. System load forecasts for an electric utility. [Hourly loads using Box-Jenkins method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uri, N.D.

    This paper discusses forecasting hourly system load for an electric utility using Box-Jenkins time-series analysis. The results indicate that a model based on the method of Box and Jenkins, given its simplicity, gives excellent results over the forecast horizon.

  8. Probabilistic Learning by Rodent Grid Cells

    PubMed Central

    Cheung, Allen

    2016-01-01

    Mounting evidence shows mammalian brains are probabilistic computers, but the specific cells involved remain elusive. Parallel research suggests that grid cells of the mammalian hippocampal formation are fundamental to spatial cognition but their diverse response properties still defy explanation. No plausible model exists that explains stable grids in darkness for twenty minutes or longer, even though this was one of the first results ever published on grid cells. Similarly, no current explanation can tie together grid fragmentation and grid rescaling, which show very different forms of flexibility in grid responses when the environment is varied. Other properties such as attractor dynamics and grid anisotropy seem to be at odds with one another unless additional properties are assumed such as a varying velocity gain. Modelling efforts have largely ignored the breadth of response patterns, while also failing to account for the disastrous effects of sensory noise during spatial learning and recall, especially in darkness. Here, published electrophysiological evidence from a range of experiments is reinterpreted using a novel probabilistic learning model, which shows that grid cell responses are accurately predicted by a probabilistic learning process. Diverse response properties of probabilistic grid cells are statistically indistinguishable from rat grid cells across key manipulations. A simple coherent set of probabilistic computations explains stable grid fields in darkness, partial grid rescaling in resized arenas, low-dimensional attractor grid cell dynamics, and grid fragmentation in hairpin mazes. The same computations also reconcile oscillatory dynamics at the single cell level with attractor dynamics at the cell ensemble level. Additionally, a clear functional role for boundary cells is proposed for spatial learning. These findings provide a parsimonious and unified explanation of grid cell function, and implicate grid cells as an accessible neuronal population readout of a set of probabilistic spatial computations. PMID:27792723

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arakawa, Akio; Konor, C.S.

    Two types of vertical grids are used for atmospheric models: the Lorenz grid (L grid) and the Charney-Phillips grid (CP grid). In this paper, problems with the L grid are pointed out that are due to the existence of an extra degree of freedom in the vertical distribution of the temperature (and the potential temperature). Then a vertical differencing of the primitive equations based on the CP grid is presented, while most of the advantages of the L grid in a hybrid σ-p vertical coordinate are maintained. The discrete hydrostatic equation is constructed in such a way that it is free from the vertical computational mode in the thermal field. Also, the vertical advection of the potential temperature in the discrete thermodynamic equation is constructed in such a way that it reduces to the standard (and most straightforward) vertical differencing of the quasigeostrophic equations based on the CP grid. Simulations of standing oscillations superposed on a resting atmosphere are presented using two vertically discrete models, one based on the L grid and the other on the CP grid. The comparison of the simulations shows that with the L grid a stationary vertically zigzag pattern dominates in the thermal field, while with the CP grid no such pattern is evident. Simulations of the growth of an extratropical cyclone in a cyclic channel on a β plane are also presented using two different σ-coordinate models, again one with the L grid and the other with the CP grid, starting from random disturbances. 17 refs., 8 figs.

  10. Grid-Independent Large-Eddy Simulation in Turbulent Channel Flow using Three-Dimensional Explicit Filtering

    NASA Technical Reports Server (NTRS)

    Gullbrand, Jessica

    2003-01-01

    In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations. The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length-scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.

  11. Gene transfer of high-mobility group box 1 box-A domain in a rat acute liver failure model.

    PubMed

    Tanaka, Masayuki; Shinoda, Masahiro; Takayanagi, Atsushi; Oshima, Go; Nishiyama, Ryo; Fukuda, Kazumasa; Yagi, Hiroshi; Hayashida, Tetsu; Masugi, Yohei; Suda, Koichi; Yamada, Shingo; Miyasho, Taku; Hibi, Taizo; Abe, Yuta; Kitago, Minoru; Obara, Hideaki; Itano, Osamu; Takeuchi, Hiroya; Sakamoto, Michiie; Tanabe, Minoru; Maruyama, Ikuro; Kitagawa, Yuko

    2015-04-01

    High-mobility group box 1 (HMGB1) has recently been identified as an important mediator of various kinds of acute and chronic inflammation. The protein encoded by the box-A domain of the HMGB1 gene is known to act as a competitive inhibitor of HMGB1. In this study, we investigated whether box-A gene transfer results in box-A protein production in rats and assessed therapeutic efficacy in vivo using an acute liver failure (ALF) model. Three types of adenovirus vectors were constructed (a wild type and two mutants), and a mutant vector was then selected based on the secretion from HeLa cells. The secreted protein was subjected to a tumor necrosis factor (TNF) production inhibition test in vitro. The vector was injected via the portal vein in healthy Wistar rats to confirm box-A protein production in the liver. The vector was then injected via the portal vein in rats with ALF. Western blot analysis showed enhanced expression of box-A protein in HeLa cells transfected with one of the mutant vectors. The culture supernatant from HeLa cells transfected with the vector inhibited TNF-α production from macrophages. Expression of box-A protein was confirmed in the transfected liver at 72 h after transfection. Transfected rats showed decreased hepatic enzymes, plasma HMGB1, and hepatic TNF-α messenger RNA levels, and histologic findings and survival were significantly improved. HMGB1 box-A gene transfer results in box-A protein production in the liver and appears to have a beneficial effect on ALF in rats. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Service engineering for grid services in medicine and life science.

    PubMed

    Weisbecker, Anette; Falkner, Jürgen

    2009-01-01

    Clearly defined services with appropriate business models are necessary in order to exploit the benefit of grid computing for industrial and academic users in medicine and life sciences. In the project Services@MediGRID the service engineering approach is used to develop those clearly defined grid services and to provide sustainable business models for their usage.

  13. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    NASA Astrophysics Data System (ADS)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.
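
    A minimal sketch of a conservation-preserving downscaling step under stated assumptions (not the iLOVECLIM scheme): a coarse-cell value is spread over its fine sub-cells using arbitrary positive weights, then each block is rescaled so its mean equals the parent coarse value, which is what conserving energy and moisture requires of the remapping:

      import numpy as np

      def downscale_conservative(coarse, weights, block=8):
          """coarse: (NY, NX) field; weights: (NY*block, NX*block) positive weights."""
          ny, nx = coarse.shape
          fine = np.repeat(np.repeat(coarse, block, axis=0), block, axis=1) * weights
          # Rescale each block so its mean matches the parent coarse cell (conservation)
          for j in range(ny):
              for i in range(nx):
                  sl = np.s_[j*block:(j+1)*block, i*block:(i+1)*block]
                  fine[sl] *= coarse[j, i] / fine[sl].mean()
          return fine

      coarse = np.full((4, 4), 2.0)                           # e.g. precipitation, mm/day
      weights = 0.5 + np.random.default_rng(0).random((32, 32))
      fine = downscale_conservative(coarse, weights)
      # Block means equal the coarse values, so the coarse-grid budget is preserved
      print(np.allclose(fine.reshape(4, 8, 4, 8).mean(axis=(1, 3)), coarse))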

  14. Cscibox: A Software System for Age-Model Construction and Evaluation

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; White, J. W. C.; Anderson, D. M.

    2014-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives, both directly dated and cross-dated. The time has come to encourage cross-pollination between earth science and computer science in dating paleorecords. This project addresses that need. The CSciBox code, which is being developed by a team of computer scientists and geoscientists, is open source and freely available on github. The system employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form. This makes it possible to do analysis on the whole core at once, in an interactive fashion, or to tailor the analysis to a subset of the core without loading the entire data file. CSciBox provides a number of 'components' that perform the common steps in age-model construction and evaluation: calibrations, reservoir-age correction, interpolations, statistics, and so on. The user employs these components via a graphical user interface (GUI) to go from raw data to finished age model in a single tool: e.g., an IntCal09 calibration of 14C data from a marine sediment core, followed by a piecewise-linear interpolation. CSciBox's GUI supports plotting of any measurement in the core against any other measurement, or against any of the variables in the calculation of the age model, with or without explicit error representations. Using the GUI, CSciBox's user can import a new calibration curve or other background data set and define a new module that employs that information. Users can also incorporate other software (e.g., Calib, BACON) as 'plug-ins.' In the case of truly large data or significant computational effort, CSciBox is parallelizable across modern multicore processors, or clusters, or even the cloud. The next generation of the CSciBox code, currently in the testing stages, includes an automated reasoning engine that supports a more thorough exploration of plausible age models and cross-dating scenarios.
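
    A minimal sketch of the last step in that example pipeline (hypothetical depths and calibrated ages, not CSciBox code): piecewise-linear interpolation of an age-depth model between dated horizons:

      import numpy as np

      dated_depth_cm = np.array([12.0, 55.0, 130.0, 240.0])        # depths with 14C dates
      calibrated_age = np.array([850.0, 2300.0, 5100.0, 9800.0])   # cal yr BP after calibration

      sample_depths = np.arange(0, 250, 10.0)
      # np.interp performs the piecewise-linear age-depth interpolation; ends are clamped
      ages = np.interp(sample_depths, dated_depth_cm, calibrated_age)

      for d, a in zip(sample_depths[:5], ages[:5]):
          print(f"depth {d:5.1f} cm -> {a:7.1f} cal yr BP")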

  15. A new finite element and finite difference hybrid method for computing electrostatics of ionic solvated biomolecule

    NASA Astrophysics Data System (ADS)

    Ying, Jinyong; Xie, Dexuan

    2015-10-01

    The Poisson-Boltzmann equation (PBE) is one widely used implicit solvent continuum model for calculating the electrostatics of an ionic solvated biomolecule. In this paper, a new finite element and finite difference hybrid method is presented to solve the PBE efficiently based on a special seven-overlapped box partition with one central box containing the solute region and surrounded by six neighboring boxes. In particular, an efficient finite element solver is applied to the central box while a fast preconditioned conjugate gradient method using a multigrid V-cycle preconditioning is constructed for solving a system of finite difference equations defined on a uniform mesh of each neighboring box. Moreover, the PBE domain, the box partition, and an interface-fitted tetrahedral mesh of the central box can be generated adaptively for a given PQR file of a biomolecule. This new hybrid PBE solver is programmed in C, Fortran, and Python as a software tool for predicting the electrostatics of a biomolecule in a symmetric 1:1 ionic solvent. Numerical results on two test models with analytical solutions and 12 proteins validate this new software tool, and demonstrate its high performance in terms of CPU time and memory usage.
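
    A minimal sketch of the neighboring-box step under stated assumptions (not the authors' solver): a finite-difference Poisson-type system on a uniform mesh solved with preconditioned conjugate gradients via SciPy, where a simple Jacobi preconditioner stands in for the multigrid V-cycle used in the paper:

      import numpy as np
      from scipy.sparse import diags, eye, kron
      from scipy.sparse.linalg import LinearOperator, cg

      # Assemble the standard 5-point 2-D Laplacian on an n x n uniform mesh (Dirichlet BCs)
      n = 40
      T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      A = (kron(eye(n), T) + kron(T, eye(n))).tocsr()
      b = np.ones(n * n)                     # stand-in right-hand side (source term)

      # Jacobi (diagonal) preconditioner as a placeholder for the multigrid V-cycle
      d = A.diagonal()
      M = LinearOperator(A.shape, matvec=lambda v: v / d)

      x, info = cg(A, b, M=M)
      print("cg info:", info, " residual norm:", np.linalg.norm(A @ x - b))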

  16. Morphometric differences and fluctuating asymmetry in Melipona subnitida Ducke 1910 (Hymenoptera: Apidae) in different types of housing.

    PubMed

    Lima, C B S; Nunes, L A; Carvalho, C A L; Ribeiro, M F; Souza, B A; Silva, C S B

    2016-01-01

    A geometric morphometrics approach was applied to evaluate differences in forewing patterns of the Jandaira bee (Melipona subnitida Ducke). For this, we studied the presence of fluctuating asymmetry (FA) in forewing shape and size of colonies kept in either rational hive boxes or natural tree trunks. We detected significant FA for wing size as well as wing shape independent of the type of housing (rational box or tree trunks), indicating the overall presence of stress during the development of the studied specimens. FA was also significant (p < 0.01) between rational boxes, possibly related to the use of various models of rational boxes used for keeping stingless bees. In addition, a Principal Component Analysis indicated morphometric variation between bee colonies kept in either rational hive boxes or in tree trunks, that may be related to the different origins of the bees: tree trunk colonies were relocated natural colonies while rational box colonies originated from multiplying other colonies. We conclude that adequate measures should be taken to reduce the amount of stress during bee handling by using standard models of rational boxes that cause the least disruption.

  17. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  18. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of the sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  19. Relationships between High Impact Tropical Rainfall Events and Environmental Conditions

    NASA Astrophysics Data System (ADS)

    Painter, C.; Varble, A.; Zipser, E. J.

    2017-12-01

    While rainfall increases as moisture and vertical motion increase, relationships between regional environmental conditions and rainfall event characteristics remain more uncertain. Of particular importance are long duration, heavy rain rate, and significant accumulation events that contribute sizable fractions of overall precipitation over short time periods. This study seeks to establish relationships between observed rainfall event properties and environmental conditions. Event duration, rain rate, and rainfall accumulation are derived using the Tropical Rainfall Measuring Mission (TRMM) 3B42 3-hourly, 0.25° resolution rainfall retrieval from 2002-2013 between 10°N and 10°S. Events are accumulated into 2.5° grid boxes and matched to monthly mean total column water vapor (TCWV) and 500-hPa vertical motion (omega) in each 2.5° grid box, retrieved from ERA-interim reanalysis. Only months with greater than 3 mm/day rainfall are included to ensure sufficient sampling. 90th and 99th percentile oceanic events last more than 20% longer and have rain rates more than 20% lower than those over land for a given TCWV-omega condition. Event duration and accumulation are more sensitive to omega than TCWV over oceans, but more sensitive to TCWV than omega over land, suggesting system size, propagation speed, and/or forcing mechanism differences for land and ocean regions. Sensitivities of duration, rain rate, and accumulation to TCWV and omega increase with increasing event extremity. For 3B42 and ERA-Interim relationships, the 90th percentile oceanic event accumulation increases by 0.93 mm for every 1 Pa/min change in rising motion, but this increases to 3.7 mm for every 1 Pa/min for the 99th percentile. Over land, the 90th percentile event accumulation increases by 0.55 mm for every 1 mm increase in TCWV, whereas the 99th percentile increases by 0.90 mm for every 1 mm increase in TCWV. These changes in event accumulation are highly correlated with changes in event duration. Relationships between 3B42 event properties and ERA-Interim environmental conditions are currently being evaluated using the MERRA-2 reanalysis and two years of 30-minute, 0.1° Integrated Multi-satellitE Retrievals for GPM (IMERG) data. If results remain consistent, they may be valuable for evaluating weather and climate models.

  20. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies

    PubMed Central

    Lorenz, Ralph D.

    2010-01-01

    The ‘two-box model’ of planetary climate is discussed. This model has been used to demonstrate consistency of the equator–pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day:night temperature contrast observed on the extrasolar planet HD 189733b. PMID:20368253
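
    A minimal sketch of the two-box MEP argument with illustrative numbers (the absorbed fluxes, equal box areas and grey-body assumptions below are not from the paper): choose the inter-box heat flux that maximizes the entropy produced by moving heat from the warm box to the cold box:

      import numpy as np

      SIGMA = 5.67e-8                       # Stefan-Boltzmann constant, W m^-2 K^-4
      S_warm, S_cold = 300.0, 170.0         # absorbed shortwave in each box, W m^-2

      def temperatures(F):
          """Box temperatures from radiative balance: emitted = absorbed minus/plus transport."""
          T_warm = ((S_warm - F) / SIGMA) ** 0.25
          T_cold = ((S_cold + F) / SIGMA) ** 0.25
          return T_warm, T_cold

      F_grid = np.linspace(1.0, S_warm - 1.0, 2000)
      Tw, Tc = temperatures(F_grid)
      entropy_prod = F_grid * (1.0 / Tc - 1.0 / Tw)   # W m^-2 K^-1
      best = np.argmax(entropy_prod)
      print(f"MEP flux = {F_grid[best]:.0f} W m^-2, "
            f"T_warm = {Tw[best]:.0f} K, T_cold = {Tc[best]:.0f} K")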

  1. Three-dimensional time dependent computation of turbulent flow

    NASA Technical Reports Server (NTRS)

    Kwak, D.; Reynolds, W. C.; Ferziger, J. H.

    1975-01-01

    The three-dimensional, primitive equations of motion are solved numerically for the case of isotropic box turbulence and the distortion of homogeneous turbulence by irrotational plane strain at large Reynolds numbers. A Gaussian filter is applied to the governing equations to define the large-scale field. This gives rise to additional second-order computed-scale stresses (Leonard stresses). The residual stresses are simulated through an eddy viscosity. Uniform grids are used, with a fourth-order differencing scheme in space and a second-order Adams-Bashforth predictor for explicit time stepping. The results are compared with experiments, and statistical information is extracted from the computer-generated data.
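    The time stepping mentioned above can be illustrated with a generic second-order Adams-Bashforth step; the sketch below applies it to a scalar ODE and is not the LES solver itself.

        # Second-order Adams-Bashforth step: u_{n+1} = u_n + dt*(3/2 f_n - 1/2 f_{n-1}).
        def ab2_step(u, f_now, f_prev, dt):
            return u + dt * (1.5 * f_now - 0.5 * f_prev)

        # Example on du/dt = -u, bootstrapping the first step with forward Euler.
        f = lambda u: -u
        dt, u_prev = 0.1, 1.0
        u = u_prev + dt * f(u_prev)
        for _ in range(10):
            u_prev, u = u, ab2_step(u, f(u), f(u_prev), dt)
        print(u)   # decays roughly like exp(-t)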

  2. Numerical simulation of steady three-dimensional flows in axial turbomachinery bladerows

    NASA Astrophysics Data System (ADS)

    Basson, Anton Herman

    The formulation for and application of a numerical model for low Mach number steady three-dimensional flows in axial turbomachinery blade rows is presented. The formulation considered here includes an efficient grid generation scheme (particularly suited to computational grids for the analysis of turbulent turbomachinery flows) and a semi-implicit, pressure-based computational fluid dynamics scheme that directly includes artificial dissipation, applicable to viscous and inviscid flows. The grid generation technique uses a combination of algebraic and elliptic methods, in conjunction with the Minimal Residual Method, to economically generate smooth structured grids. For typical H-grids in turbomachinery blade rows, when compared to a purely elliptic grid generation scheme, the presented grid generation scheme produces grids with much improved smoothness near the leading and trailing edges, allows the use of the small near-wall grid spacing required by low Reynolds number turbulence models, and maintains orthogonality of the grid near the solid boundaries even for high flow angle cascades. A specialized embedded H-grid for application particularly to tip clearance flows is presented. This topology smoothly discretizes the domain without modifying the tip shape, while requiring only minor modifications to H-grid flow solvers. Better quantitative modeling of the tip clearance vortex structure than that obtained with a pinched tip approximation is demonstrated. The formulation of artificial dissipation terms for a semi-implicit, pressure-based (SIMPLE-type) flow solver is presented. It is applied to both the Euler and the Navier-Stokes equations, expressed in generalized coordinates using a non-staggered grid. This formulation is compared to some SIMPLE and time-marching formulations, revealing the artificial dissipation inherent in some commonly used semi-implicit formulations. The effect of the amount of dissipation on the accuracy of the solution and the convergence rate is quantitatively demonstrated for a number of flow cases. The ability of the formulation to model complex steady turbomachinery flows is demonstrated, e.g., for pressure-driven secondary flows, turbine nozzle wakes, and turbulent boundary layers. The formulation's modeling of blade surface heat transfer is assessed. The numerical model is used to investigate the structure of phenomena associated with tip clearance flows in a turbine nozzle.

  3. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set is obtained by solving the multi-objective optimization configuration model of the micro-grid with an improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grids.

  4. Coordinated learning of grid cell and place cell spatial and temporal properties: multiple scales, attention and oscillations.

    PubMed

    Grossberg, Stephen; Pilly, Praveen K

    2014-02-05

    A neural model proposes how entorhinal grid cells and hippocampal place cells may develop as spatial categories in a hierarchy of self-organizing maps (SOMs). The model responds to realistic rat navigational trajectories by learning both grid cells with hexagonal grid firing fields of multiple spatial scales, and place cells with one or more firing fields, that match neurophysiological data about their development in juvenile rats. Both grid and place cells can develop by detecting, learning and remembering the most frequent and energetic co-occurrences of their inputs. The model's parsimonious properties include: similar ring attractor mechanisms process linear and angular path integration inputs that drive map learning; the same SOM mechanisms can learn grid cell and place cell receptive fields; and the learning of the dorsoventral organization of multiple spatial scale modules through medial entorhinal cortex to hippocampus (HC) may use mechanisms homologous to those for temporal learning through lateral entorhinal cortex to HC ('neural relativity'). The model clarifies how top-down HC-to-entorhinal attentional mechanisms may stabilize map learning, simulates how hippocampal inactivation may disrupt grid cells, and explains data about theta, beta and gamma oscillations. The article also compares the three main types of grid cell models in the light of recent data.

  5. Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
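    The iteration-based feedback can be pictured as alternating coarse- and fine-grid solves that exchange interface heads and fluxes until the shared-node values stop changing. The sketch below uses trivial algebraic stand-ins for the two solvers; it shows only the coupling loop, not a finite-difference groundwater code.

        # Conceptual coupling loop between a regional (coarse) and a locally
        # refined (fine) model that share interface nodes.
        def couple_grids(solve_coarse, solve_fine, tol=1e-8, max_iter=50):
            flux = 0.0                              # initial interface-flux guess
            heads = solve_coarse(flux)              # regional solve -> shared-node heads
            for _ in range(max_iter):
                flux = solve_fine(heads)            # refined solve -> interface fluxes
                new_heads = solve_coarse(flux)      # feed fluxes back to the coarse grid
                if abs(new_heads - heads) < tol:    # interface heads have converged
                    break
                heads = new_heads
            return heads, flux

        # Toy stand-in solvers that converge to a consistent head/flux pair.
        print(couple_grids(lambda q: 10.0 - 0.5 * q, lambda h: 0.2 * h))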

  6. GSRP/David Marshall: Fully Automated Cartesian Grid CFD Application for MDO in High Speed Flows

    NASA Technical Reports Server (NTRS)

    2003-01-01

    With the renewed interest in Cartesian gridding methodologies for the ease and speed of gridding complex geometries in addition to the simplicity of the control volumes used in the computations, it has become important to investigate ways of extending the existing Cartesian grid solver functionalities. This includes developing methods of modeling the viscous effects in order to utilize Cartesian grids solvers for accurate drag predictions and addressing the issues related to the distributed memory parallelization of Cartesian solvers. This research presents advances in two areas of interest in Cartesian grid solvers, viscous effects modeling and MPI parallelization. The development of viscous effects modeling using solely Cartesian grids has been hampered by the widely varying control volume sizes associated with the mesh refinement and the cut cells associated with the solid surface. This problem is being addressed by using physically based modeling techniques to update the state vectors of the cut cells and removing them from the finite volume integration scheme. This work is performed on a new Cartesian grid solver, NASCART-GT, with modifications to its cut cell functionality. The development of MPI parallelization addresses issues associated with utilizing Cartesian solvers on distributed memory parallel environments. This work is performed on an existing Cartesian grid solver, CART3D, with modifications to its parallelization methodology.

  7. Reduction of peak energy demand based on smart appliances energy consumption adjustment

    NASA Astrophysics Data System (ADS)

    Powroźnik, P.; Szulim, R.

    2017-08-01

    The paper presents the concept of an elastic model of energy management for smart grids and micro smart grids. For the proposed model, a method for reducing peak demand in a micro smart grid is defined. The idea of peak demand reduction in the elastic model of energy management is to balance demand against the supply of power available to the given micro smart grid at a given moment. Simulation studies were carried out on real household data available in the UCI Machine Learning Repository, and their results are presented. The results may have practical application in smart grid networks where the energy consumption of smart appliances needs to be adjusted. The article also proposes implementing the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction may be particularly applicable in large smart grids.

  8. Global Precipitation Measurement (GPM) Mission: Precipitation Processing System (PPS) GPM Mission Gridded Text Products Provide Surface Precipitation Retrievals

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz; Kelley, O.; Kummerow, C.; Huffman, G.; Olson, W.; Kwiatkowski, J.

    2015-01-01

    In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and at a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars including a dual-frequency product, and from a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very creditable retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear intimidating, and they certainly contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content. Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments GMI, DPR, and combined GMI/DPR, and (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a 0.25 degree x 0.25 degree hourly grid and are packaged into daily ASCII (American Standard Code for Information Interchange) files that can be downloaded from the PPS FTP (File Transfer Protocol) site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for the radiometer.

  9. GPM Mission Gridded Text Products Providing Surface Precipitation Retrievals

    NASA Astrophysics Data System (ADS)

    Stocker, Erich Franz; Kelley, Owen; Huffman, George; Kummerow, Christian

    2015-04-01

    In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and at a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars including a dual-frequency product, and from a combined GMI/DPR precipitation retrieval. These level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very creditable retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear intimidating, and they certainly contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content. Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments - GMI, DPR, and combined GMI/DPR - and (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a 0.25 degree x 0.25 degree hourly grid and are packaged into daily ASCII files that can be downloaded from the PPS FTP site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for the radiometer.
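    A reader for such a daily gzipped ASCII product might look like the sketch below; the column layout (latitude, longitude, rain rate, fraction liquid, fraction convective) and the file name are assumptions for illustration, not the documented PPS format.

        import gzip

        # Parse one daily gridded text file into a {(lat, lon): values} dictionary.
        def read_gridded_text(path):
            grid = {}
            with gzip.open(path, "rt") as f:
                for line in f:
                    if not line.strip() or line.startswith("#"):
                        continue                          # skip blanks and header lines
                    lat, lon, rate, f_liq, f_conv = map(float, line.split()[:5])
                    grid[(lat, lon)] = (rate, f_liq, f_conv)
            return grid

        # Hypothetical usage:
        # grid = read_gridded_text("gpm_gmi_gridded_20150201.txt.gz")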

  10. CFD Script for Rapid TPS Damage Assessment

    NASA Technical Reports Server (NTRS)

    McCloud, Peter

    2013-01-01

    This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual procedure is cumbersome, error-prone, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. It speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. This process was successfully utilized during STS-134.

  11. Accurate path integration in continuous attractor network models of grid cells.

    PubMed

    Burak, Yoram; Fiete, Ila R

    2009-02-01

    Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.

  12. To Grid or Not to Grid… Precipitation Data and Hydrological Modeling in the Khangai Mountain Region of Mongolia

    NASA Astrophysics Data System (ADS)

    Venable, N. B. H.; Fassnacht, S. R.; Adyabadam, G.

    2014-12-01

    Precipitation data in semi-arid and mountainous regions is often spatially and temporally sparse, yet it is a key variable needed to drive hydrological models. Gridded precipitation datasets provide a spatially and temporally coherent alternative to the use of point-based station data, but in the case of Mongolia, may not be constructed from all data available from government data sources, or may only be available at coarse resolutions. To examine the uncertainty associated with the use of gridded and/or point precipitation data, monthly water balance models of three river basins across forest steppe (the Khoid Tamir River at Ikhtamir), steppe (the Baidrag River at Bayanburd), and desert steppe (the Tuin River at Bogd) ecozones in the Khangai Mountain Region of Mongolia were compared. The models were forced over a 10-year period from 2001-2010, with gridded temperature and precipitation data at a 0.5 x 0.5 degree resolution. These results were compared to modeling using an interpolated hybrid of the gridded data and additional point data recently gathered from government sources; and with point data from the nearest meteorological station to the streamflow gage of choice. Goodness-of-fit measures including the Nash-Sutcliffe Efficiency statistic, the percent bias, and the RMSE-observations standard deviation ratio were used to assess model performance. The results were mixed with smaller differences between the two gridded products as compared to the differences between gridded products and station data. The largest differences in precipitation inputs and modeled runoff amounts occurred between the two gridded datasets and station data in the desert steppe (Tuin), and the smallest differences occurred in the forest steppe (Khoid Tamir) and steppe (Baidrag). Mean differences between water balance model results are generally smaller than mean differences in the initial input data over the period of record. Seasonally, larger differences in gridded versus station-based precipitation products and modeled outputs occur in summer in the desert-steppe, and in spring in the forest steppe. Choice of precipitation data source in terms of gridded or point-based data directly affects model outcomes with greater uncertainty noted on a seasonal basis across ecozones of the Khangai.
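    The three goodness-of-fit measures named above can be computed as in the sketch below (one common set of definitions; the study's exact conventions may differ).

        import numpy as np

        def nse(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def pbias(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 100.0 * np.sum(obs - sim) / np.sum(obs)     # percent bias

        def rsr(obs, sim):
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

        # Example with made-up monthly runoff (mm):
        obs, sim = [12, 30, 55, 40, 8], [10, 28, 60, 35, 9]
        print(nse(obs, sim), pbias(obs, sim), rsr(obs, sim))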

  13. Greening the Grid: Advances in Production Cost Modeling for India Renewable Energy Grid Integration Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin; Palchak, David

    The Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid study uses advanced weather and power system modeling to explore the operational impacts of meeting India's 2022 renewable energy targets and identify actions that may be favorable for integrating high levels of renewable energy into the Indian grid. The study relies primarily on a production cost model that simulates optimal scheduling and dispatch of available generation in a future year (2022) by minimizing total production costs subject to physical, operational, and market constraints. This fact sheet provides a detailed look at each of these models, including their common assumptions and the insights provided by each.

  14. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

    Large point sources in the Chungnam area have received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevalent summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during the KORUS-AQ 2016 campaign, including aircraft measurements. In general, the horizontal grid resolution of Eulerian photochemical models has a profound effect on estimated air pollutant concentrations. This is due to the formulation of grid models; that is, emissions in a grid cell are assumed to be well mixed within the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.

  15. Models for the modern power grid

    NASA Astrophysics Data System (ADS)

    Nardelli, Pedro H. J.; Rubido, Nicolas; Wang, Chengwei; Baptista, Murilo S.; Pomalaza-Raez, Carlos; Cardieri, Paulo; Latva-aho, Matti

    2014-10-01

    This article reviews different kinds of models for the electric power grid that can be used to understand the modern power system, the smart grid. From the physical network to abstract energy markets, we identify in the literature different aspects that co-determine the spatio-temporal multilayer dynamics of the power system. We start our review by showing how the generation, transmission and distribution characteristics of traditional power grids are already subject to complex behaviour that appears as a result of the interplay between the dynamics of the nodes and the topology, namely synchronisation and cascade effects. When dealing with smart grids, the system complexity increases even more: on top of the physical network of power lines and controllable sources of electricity, the modernisation brings information networks, renewable intermittent generation, market liberalisation, prosumers, among other aspects. In this case, we forecast a dynamical co-evolution of the smart grid and other kinds of networked systems that cannot be understood in isolation. This review compiles recent results that model electric power grids as complex systems, going beyond purely technological aspects. From this perspective, we then indicate possible ways to incorporate the diverse co-evolving systems into the smart grid model using, for example, network theory and multi-agent simulation.

  16. Smart Grid Maturity Model: SGMM Model Definition. Version 1.2

    DTIC Science & Technology

    2011-09-01

    electricity (e.g., solar power and wind) to be connected to the grid. If this were the case, any excess generated electricity would flow onto the grid, and... solar panels to the grid or electric vehicles to the grid. CUST-4.7 A common residential customer experience has been integrated. This experience is... individual devices (e.g., appliances) has been deployed. CUST-5.3 Plug-and-play customer-based generation (e.g., wind and solar) is supported. This

  17. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching of micro grid operation modes directly affects the safety and stability of micro grid operation. For the switching process from island mode to grid-connected mode, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectory of the inverters.

  18. A review of presented mathematical models in Parkinson's disease: black- and gray-box models.

    PubMed

    Sarbaz, Yashar; Pourakbari, Hakimeh

    2016-06-01

    Parkinson's disease (PD), one of the most common movement disorders, is caused by damage to the central nervous system. Despite all of the studies on PD, the mechanism by which its symptoms form remains unknown. It is still not obvious why damage only to the substantia nigra pars compacta, a small part of the brain, causes such a wide range of symptoms. Moreover, the causes of the brain damage remain to be fully elucidated. Exact understanding of brain function seems to be impossible; on the other hand, some engineering tools attempt to describe the behavior and performance of complex systems, and modeling is one of the most important tools in this regard. Development of quantitative models for this disease began in recent decades. Such models are effective not only for better understanding, prediction and control of the disease and for offering new therapies, but also for its early diagnosis. Modeling studies fall into two main groups: black-box models and gray-box models. Generally, in black-box modeling, the symptom is considered only as the output, regardless of information about the underlying system. Such models, besides supporting quantitative analysis studies, increase our knowledge of the disorder's behavior and the disease symptoms. Gray-box models consider the structures involved in the appearance of the symptoms as well as the final disease symptoms. These models can save researchers time and cost and help them select appropriate treatment mechanisms among all possible options. In this review paper, efforts are first made to investigate some studies on quantitative analysis of PD. Then, quantitative models of PD are reviewed. Finally, the results of using such models are presented to some extent.

  19. A new approach for the construction of gridded emission inventories from satellite data

    NASA Astrophysics Data System (ADS)

    Kourtidis, Konstantinos; Georgoulias, Aristeidis; Mijling, Bas; van der A, Ronald; Zhang, Qiang; Ding, Jieying

    2017-04-01

    We present a new method for the derivation of anthropogenic emission estimates for SO2. The method, which we term the Enhancement Ratio Method (ERM), uses observed relationships between measured OMI satellite tropospheric column levels of SO2 and NOx in each 0.25 deg x 0.25 deg grid box at low wind speeds, together with the Daily Emission estimates Constrained by Satellite Observations (DECSO) v1 and v3a NOx emission estimates, to scale the SO2 emissions. The method is applied over China, and emission estimates for SO2 are derived for different seasons and years (2007-2011), thus allowing an insight into the interannual evolution of the emissions. The inventory shows a large decrease of emissions during 2007-2009 and a modest increase between 2010 and 2011. The evolution in emission strength over time calculated here is in general agreement with bottom-up inventories, although differences exist, not only between the current inventory and other inventories but also among the bottom-up inventories themselves. The gridded emission estimates derived appear to be consistent, both in their spatial distribution and their magnitude, with the Multi-resolution Emission Inventory for China (MEIC). The total emissions correlate very well with most existing inventories. This research has been financed under the FP7 Programme MarcoPolo (Grant Number 606953, Theme SPA.2013.3.2-01).
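    The scaling step of the ERM can be sketched as below for a single grid box: regress coincident low-wind SO2 and NOx columns and multiply the DECSO NOx emission by the fitted enhancement ratio. The wind threshold, units, and sample values are assumptions for illustration only.

        import numpy as np

        # Hedged sketch: an enhancement ratio from low-wind scenes scales the
        # DECSO NOx emission into an SO2 emission estimate for one grid box.
        def erm_so2_emission(so2_columns, nox_columns, wind_speed, decso_nox_emission, wind_max=3.0):
            calm = wind_speed < wind_max                              # keep low-wind scenes only
            slope = np.polyfit(nox_columns[calm], so2_columns[calm], 1)[0]
            return slope * decso_nox_emission                         # SO2 emission estimate

        # Example with made-up columns and winds (m/s):
        so2 = np.array([1.0, 2.1, 3.2, 0.8, 2.9])
        nox = np.array([0.5, 1.0, 1.6, 0.4, 1.5])
        wind = np.array([2.0, 1.5, 2.5, 4.0, 1.0])
        print(erm_so2_emission(so2, nox, wind, decso_nox_emission=10.0))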

  20. A photosynthesis-based two-leaf canopy stomatal conductance model for meteorology and air quality modeling with WRF/CMAQ PX LSM

    NASA Astrophysics Data System (ADS)

    Ran, Limei; Pleim, Jonathan; Song, Conghe; Band, Larry; Walker, John T.; Binkowski, Francis S.

    2017-02-01

    A coupled photosynthesis-stomatal conductance model with single-layer sunlit and shaded leaf canopy scaling is implemented and evaluated in a diagnostic box model with the Pleim-Xiu land surface model (PX LSM) and ozone deposition model components taken directly from the meteorology and air quality modeling system WRF/CMAQ (Weather Research and Forecast model and Community Multiscale Air Quality model). The photosynthesis-based model for the PX LSM (PX PSN) is evaluated at a FLUXNET site against different parameterizations and against the current PX LSM approach with a simple Jarvis function (PX Jarvis). Latent heat flux (LH) from PX PSN is further evaluated at five FLUXNET sites with different vegetation types and landscape characteristics. Simulated ozone deposition and flux from PX PSN are evaluated at one of the sites with ozone flux measurements. Overall, the PX PSN simulates LH as well as the PX Jarvis approach. The PX PSN, however, shows distinct advantages over the PX Jarvis approach for grassland that likely result from its treatment of C3 and C4 plants for CO2 assimilation. Simulations using Moderate Resolution Imaging Spectroradiometer (MODIS) leaf area index (LAI), rather than LAI measured at each site, assess how the model would perform with the grid-averaged data used in WRF/CMAQ. MODIS LAI estimates degrade model performance at all sites except one with exceptionally old and tall trees. Ozone deposition velocity and ozone flux, along with LH, are simulated especially well by the PX PSN, compared with significant overestimation by the PX Jarvis approach for a grassland site.

  1. Grid of Supergiant B[e] Models from HDUST Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Carciofi, A. C.

    2012-12-01

    Using the Monte Carlo radiative transfer code HDUST (developed by A. C. Carciofi and J. E. Bjorkman), we have built a grid of models for stars presenting the B[e] phenomenon and a bimodal outflowing envelope. The models are particularly adapted to the study of B[e] supergiants and FS CMa type stars. The adopted physical parameters of the calculated models make the grid well suited to interpreting high angular and high spectral resolution observations, in particular spectro-interferometric data from the ESO-VLTI instruments AMBER (near-IR at low and medium spectral resolution) and MIDI (mid-IR at low spectral resolution). The grid models include, for example, a central B star with different effective temperatures and a gas (hydrogen) and silicate dust circumstellar envelope with a bimodal mass loss, with dust present in the denser equatorial regions. The HDUST grid models were pre-calculated using the high performance parallel computing facility Mésocentre SIGAMM, located at OCA, France.

  2. Develop and Test Coupled Physical Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM

    DTIC Science & Technology

    2013-09-30

    W. Erick Rogers, Naval Research Laboratory, Code 7322, Stennis Space Center, MS 39529.

  3. DPW-VI Results Using FUN3D with Focus on k-kL-MEAH2015 (k-kL) Turbulence Model

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, K. S.; Carlson, Jan-Renee; Rumsey, Christopher L.; Lee-Rausch, Elizabeth M.; Park, Michael A.

    2017-01-01

    The Common Research Model wing-body configuration is investigated with the k-kL-MEAH2015 turbulence model implemented in FUN3D. This includes results presented at the Sixth Drag Prediction Workshop and additional results generated after the workshop with a nonlinear Quadratic Constitutive Relation (QCR) variant of the same turbulence model. The workshop-provided grids are used, and a uniform grid refinement study is performed at the design condition. A large variation between results with and without a reconstruction limiter is exhibited on "medium" grid sizes, indicating that the medium grid size is too coarse for drawing conclusions in comparison with experiment. This variation is reduced with grid refinement. At a fixed angle of attack near design conditions, the QCR variant yielded decreased lift and drag compared with the linear eddy-viscosity model by an amount that was approximately constant with grid refinement. The k-kL-MEAH2015 turbulence model produced wing-root junction flow behavior consistent with wind tunnel observations.

  4. Three-dimensional elliptic grid generation for an F-16

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.

    1988-01-01

    A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds-averaged Navier-Stokes equations near the surface of the aircraft and the Euler equations in regions removed from the aircraft. A body-conforming global grid, suitable for the Euler equations, is first generated using 3-D Poisson equations having inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. The grid generation project is described, with particular emphasis on the global coarse grid.

  5. Semantic web data warehousing for caGrid

    PubMed Central

    McCusker, James P; Phillips, Joshua A; Beltrán, Alejandra González; Finkelstein, Anthony; Krauthammer, Michael

    2009-01-01

    The National Cancer Institute (NCI) is developing caGrid as a means for sharing cancer-related data and services. As more data sets become available on caGrid, we need effective ways of accessing and integrating this information. Although the data models exposed on caGrid are semantically well annotated, it is currently up to the caGrid client to infer relationships between the different models and their classes. In this paper, we present a Semantic Web-based data warehouse (Corvus) for creating relationships among caGrid models. This is accomplished through the transformation of semantically-annotated caBIG® Unified Modeling Language (UML) information models into Web Ontology Language (OWL) ontologies that preserve those semantics. We demonstrate the validity of the approach by Semantic Extraction, Transformation and Loading (SETL) of data from two caGrid data sources, caTissue and caArray, as well as alignment and query of those sources in Corvus. We argue that semantic integration is necessary for integration of data from distributed web services and that Corvus is a useful way of accomplishing this. Our approach is generalizable and of broad utility to researchers facing similar integration challenges. PMID:19796399

  6. Finite element corroboration of buckling phenomena observed in corrugated boxes

    Treesearch

    Thomas J. Urbanik; Edmond P. Saliklis

    2003-01-01

    Conventional compression strength formulas for corrugated fiberboard boxes are limited to geometry and material that produce an elastic postbuckling failure. Inelastic postbuckling can occur in squatty boxes and trays, but a mechanistic rationale for unifying observed strength data is lacking. This study combines a finite element model with a parametric design of the...

  7. Cloning, Characterization, Regulation, and Function of Dormancy-Associated MADS-Box Genes from Leafy Spurge

    USDA-ARS?s Scientific Manuscript database

    DORMANCY-ASSOCIATED MADS-BOX (DAM) genes are SHORT VEGETATIVE PHASE–Like MADS box transcription factors linked to endodormancy induction. We have cloned and characterized several cDNA and genomic clones of DAM genes from the model perennial weed leafy spurge (Euphorbia esula). We present evidence fo...

  8. Gray-box reservoir routing to compute flow propagation in operational forecasting and decision support systems

    NASA Astrophysics Data System (ADS)

    Russano, Euan; Schwanenberg, Dirk; Alvarado Montero, Rodolfo

    2017-04-01

    Operational forecasting and decision support systems for flood mitigation and the daily management of water resources require computationally efficient flow routing models. If backwater effects do not play an important role, a hydrological routing approach is often a pragmatic choice. It offers reasonable accuracy at low computational cost in comparison to a more detailed hydraulic model. This work presents a nonlinear reservoir routing scheme and its implementation for the flow propagation between the hydro reservoir Três Marias and the downstream inundation-affected city of Pirapora in Brazil. We refer to the model as a gray-box approach because the parameter k of each reservoir in the cascade is identified by a data-driven approach instead of being estimated from physical characteristics. The model reproduces the discharge at the Pirapora gauge using 15 reservoirs in the cascade. The results are compared with those obtained from the full hydrodynamic model SOBEK. Results show relatively good performance for the validation period, with an RMSE of 139.48 for the gray-box model, while the full hydrodynamic model shows an RMSE of 136.67. The simulation of a period of several years with the full hydrodynamic model took approximately 64 s, while the gray-box model required only about 0.50 s. This provides a significant speedup of the computation with only a small trade-off in accuracy, pointing to the potential of the simple approach in the context of time-critical, operational applications. Key words: flow routing, reservoir routing, gray-box model.
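    The routing scheme can be sketched as a cascade of nonlinear reservoirs, each obeying dS/dt = inflow - k*S^m and feeding the next; the parameter values below are placeholders, not the calibrated Três Marias-Pirapora model.

        # Cascade of nonlinear reservoirs solved with a simple explicit Euler step.
        def route_cascade(inflow_series, n_reservoirs=15, k=0.02, m=1.2, dt=1.0):
            storages = [0.0] * n_reservoirs
            outflows = []
            for q_in in inflow_series:
                q = q_in
                for i in range(n_reservoirs):
                    q_out = k * storages[i] ** m                       # nonlinear storage-outflow law
                    storages[i] = max(storages[i] + dt * (q - q_out), 0.0)
                    q = q_out                                          # outflow feeds the next reservoir
                outflows.append(q)
            return outflows

        # Example: route a short square pulse of inflow through the cascade.
        print(route_cascade([100.0] * 5 + [0.0] * 10)[-3:])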

  9. Experiences of engineering Grid-based medical software.

    PubMed

    Estrella, F; Hauer, T; McClatchey, R; Odeh, M; Rogulin, D; Solomonides, T

    2007-08-01

    Grid-based technologies are emerging as potential solutions for managing and collaborating distributed resources in the biomedical domain. Few examples exist, however, of successful implementations of Grid-enabled medical systems and even fewer have been deployed for evaluation in practice. The objective of this paper is to evaluate the use in clinical practice of a Grid-based imaging prototype and to establish directions for engineering future medical Grid developments and their subsequent deployment. The MammoGrid project has deployed a prototype system for clinicians using the Grid as its information infrastructure. To assist in the specification of the system requirements (and for the first time in healthgrid applications), use-case modelling has been carried out in close collaboration with clinicians and radiologists who had no prior experience of this modelling technique. A critical qualitative and, where possible, quantitative analysis of the MammoGrid prototype is presented leading to a set of recommendations from the delivery of the first deployed Grid-based medical imaging application. We report critically on the application of software engineering techniques in the specification and implementation of the MammoGrid project and show that use-case modelling is a suitable vehicle for representing medical requirements and for communicating effectively with the clinical community. This paper also discusses the practical advantages and limitations of applying the Grid to real-life clinical applications and presents the consequent lessons learned. The work presented in this paper demonstrates that given suitable commitment from collaborating radiologists it is practical to deploy in practice medical imaging analysis applications using the Grid but that standardization in and stability of the Grid software is a necessary pre-requisite for successful healthgrids. The MammoGrid prototype has therefore paved the way for further advanced Grid-based deployments in the medical and biomedical domains.

  10. Evaluating gridded crop model simulations of evapotranspiration and irrigation using survey and remotely sensed data

    NASA Astrophysics Data System (ADS)

    Lopez Bobeda, J. R.

    2017-12-01

    The increasing use of groundwater for irrigation of crops has exacerbated the groundwater sustainability issues faced by water-limited regions. Gridded, process-based crop models have the potential to help farmers and policymakers assess the effects of water shortages on yield and devise new strategies for sustainable water use. Gridded crop models are typically calibrated and evaluated using county-level survey data of yield, planting dates, and maturity dates. However, little is known about the ability of these models to reproduce observed crop evapotranspiration and water use at regional scales. The aim of this work is to evaluate a gridded version of the Decision Support System for Agrotechnology Transfer (DSSAT) crop model over the continental United States. We evaluated crop seasonal evapotranspiration over 5 arc-minute grids, and irrigation water use at the county level. Evapotranspiration was assessed only for rainfed agriculture to test the model evapotranspiration equations separately from the irrigation algorithm. Model evapotranspiration was evaluated against the Atmospheric Land Exchange Inverse (ALEXI) modeling product. Using a combination of the USDA Cropland Data Layer (CDL) and the USGS Moderate Resolution Imaging Spectroradiometer Irrigated Agriculture Dataset for the United States (MIrAD-US), we selected only grids with more than 60% of their area planted with the simulated crops (corn, cotton, and soybean) and less than 20% of their area irrigated. Irrigation water use was compared against the USGS county-level irrigated agriculture water use survey data. Simulated gridded data were aggregated to county level using the USDA CDL and USGS MIrAD-US. Only counties where 70% or more of the irrigated land was corn, cotton, or soybean were selected for the evaluation. Our results suggest that gridded crop models can reasonably reproduce crop evapotranspiration at the country scale (RRMSE = 10%).
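    The relative RMSE quoted above can be computed as in the sketch below, normalising the RMSE of simulated seasonal evapotranspiration by the mean of the reference values (one common convention; the paper's exact definition may differ).

        import numpy as np

        def rrmse(reference, simulated):
            reference = np.asarray(reference, float)
            simulated = np.asarray(simulated, float)
            rmse = np.sqrt(np.mean((simulated - reference) ** 2))
            return 100.0 * rmse / reference.mean()    # relative RMSE in percent

        # Example with made-up seasonal ET totals (mm):
        print(rrmse([450, 520, 480], [430, 560, 470]))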

  11. Modelling GIC Flow in New Zealand's Electrical Transmission Grid

    NASA Astrophysics Data System (ADS)

    Divett, T.; Thomson, A. W. P.; Ingham, M.; Rodger, C. J.; Beggan, C.; Kelly, G.

    2016-12-01

    Transformers in Transpower New Zealand Ltd's electrical grid have been impacted by geomagnetically induced currents (GIC) during geomagnetic storms, for example in November 2001. In this study we have developed an initial model of the South Island's power grid to advance understanding of the impact of GIC on New Zealand's (NZ) grid. NZ's latitude and island setting mean that modelling approaches used successfully in the UK in the past can be applied here. Vasseur and Weidelt's thin sheet model is applied to model the electric field as a function of magnetic field and conductance. However, the 4 km deep ocean near NZ's coast, in contrast to the UK's relatively shallow continental shelf waters, restricts the range of frequencies and spatial grids that can be used because of assumptions in the thin sheet model. Some early consequences of these restrictions will be discussed. Lines carrying 220 kV, 110 kV and 66 kV make up NZ's electrical transmission grid, with multiple earthing nodes at each substation. Transpower have measured DC earth currents at 17 nodes in NZ's South Island grid for 15 years, including observations at multiple transformers for some substations. Different transformers at the same substation can experience quite different GIC during space weather events. Therefore we have initially modelled each transformer in some substations separately to compare directly with measured currents. Ultimately this study aims to develop a validated modelling tool that will be used to strengthen NZ's grid against the risks of space weather. Further, mitigation tactics which could be used to reduce the threat to the electrical grid will be evaluated. In particular we will focus on the transformer level, where the risk lies, and not on the substation level as has commonly been done to date. As we will validate our model against the extensive Transpower observations, this will be a valuable confirmation of the approaches used by the wider international community.

  12. Confronting weather and climate models with observational data from soil moisture networks over the United States

    PubMed Central

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal D.; Balsamo, Gianpaolo; Lawrence, David M.

    2018-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison. PMID:29645013
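    Two of the evaluation metrics can be sketched as below, taking the temporal standard deviation of the anomaly series and a lag-1 autocorrelation as a simple stand-in for soil moisture memory; the exact definitions used in the study may differ.

        import numpy as np

        def soil_moisture_metrics(series):
            x = np.asarray(series, dtype=float)
            anom = x - x.mean()                               # remove the mean state
            sigma = anom.std()                                # temporal standard deviation
            memory = np.corrcoef(anom[:-1], anom[1:])[0, 1]   # lag-1 autocorrelation as "memory"
            return sigma, memory

        # Example on a synthetic daily soil moisture series:
        rng = np.random.default_rng(0)
        sm = 0.25 + 0.05 * np.sin(np.linspace(0, 12, 365)) + 0.01 * rng.standard_normal(365)
        print(soil_moisture_metrics(sm))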

  13. Confronting Weather and Climate Models with Observational Data from Soil Moisture Networks over the United States

    NASA Technical Reports Server (NTRS)

    Dirmeyer, Paul A.; Wu, Jiexia; Norton, Holly E.; Dorigo, Wouter A.; Quiring, Steven M.; Ford, Trenton W.; Santanello, Joseph A., Jr.; Bosilovich, Michael G.; Ek, Michael B.; Koster, Randal Dean

    2016-01-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  14. Confronting weather and climate models with observational data from soil moisture networks over the United States.

    PubMed

    Dirmeyer, Paul A; Wu, Jiexia; Norton, Holly E; Dorigo, Wouter A; Quiring, Steven M; Ford, Trenton W; Santanello, Joseph A; Bosilovich, Michael G; Ek, Michael B; Koster, Randal D; Balsamo, Gianpaolo; Lawrence, David M

    2016-04-01

    Four land surface models in uncoupled and coupled configurations are compared to observations of daily soil moisture from 19 networks in the conterminous United States to determine the viability of such comparisons and explore the characteristics of model and observational data. First, observations are analyzed for error characteristics and representation of spatial and temporal variability. Some networks have multiple stations within an area comparable to model grid boxes; for those we find that aggregation of stations before calculation of statistics has little effect on estimates of variance, but soil moisture memory is sensitive to aggregation. Statistics for some networks stand out as unlike those of their neighbors, likely due to differences in instrumentation, calibration and maintenance. Buried sensors appear to have less random error than near-field remote sensing techniques, and heat dissipation sensors show less temporal variability than other types. Model soil moistures are evaluated using three metrics: standard deviation in time, temporal correlation (memory) and spatial correlation (length scale). Models do relatively well in capturing large-scale variability of metrics across climate regimes, but poorly reproduce observed patterns at scales of hundreds of kilometers and smaller. Uncoupled land models do no better than coupled model configurations, nor do reanalyses outperform free-running models. Spatial decorrelation scales are found to be difficult to diagnose. Using data for model validation, calibration or data assimilation from multiple soil moisture networks with different types of sensors and measurement techniques requires great caution. Data from models and observations should be put on the same spatial and temporal scales before comparison.

  15. A Watched Ocean World Never Boils: Inspecting the Geochemical Impact on Ocean Worlds from Their Thermal Evolution

    NASA Astrophysics Data System (ADS)

    Spiers, E. M.; Schmidt, B. E.

    2018-05-01

    I aim to acquire a better understanding of the coupled thermal evolution and geochemical fluxes of an ocean world through a box model. A box model divides the system into simpler elements with realistically solvable dynamic equations.

  16. Molecular envelope and atomic model of an anti-terminated glyQS T-box regulator in complex with tRNAGly.

    PubMed

    Chetnani, Bhaskar; Mondragón, Alfonso

    2017-07-27

    A T-box regulator or riboswitch actively monitors the levels of charged/uncharged tRNA and participates in amino acid homeostasis by regulating genes involved in their utilization or biosynthesis. It has an aptamer domain for cognate tRNA recognition and an expression platform to sense the charge state and modulate gene expression. These two conserved domains are connected by a variable linker that harbors additional secondary structural elements, such as Stem III. The structural basis for specific tRNA binding is known, but the structural basis for charge sensing and the role of other elements remains elusive. To gain new structural insights on the T-box mechanism, a molecular envelope was calculated from small angle X-ray scattering data for the Bacillus subtilis glyQS T-box riboswitch in complex with an uncharged tRNAGly. A structural model of an anti-terminated glyQS T-box in complex with its cognate tRNAGly was derived based on the molecular envelope. It shows the location and relative orientation of various secondary structural elements. The model was validated by comparing the envelopes of the wild-type complex and two variants. The structural model suggests that in addition to a possible regulatory role, Stem III could aid in preferential stabilization of the T-box anti-terminated state allowing read-through of regulated genes. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  17. Predictive habitat models derived from nest-box occupancy for the endangered Carolina northern flying squirrel in the southern Appalachians

    USGS Publications Warehouse

    Ford, W. Mark; Evans, A.M.; Odom, Richard H.; Rodrigue, Jane L.; Kelly, C.A.; Abaid, Nicole; Diggins, Corinne A.; Newcomb, Doug

    2016-01-01

    In the southern Appalachians, artificial nest-boxes are used to survey for the endangered Carolina northern flying squirrel (CNFS; Glaucomys sabrinus coloratus), a disjunct subspecies associated with high elevation (>1385 m) forests. Using environmental parameters diagnostic of squirrel habitat, we created 35 a priori occupancy models in the program PRESENCE for boxes surveyed in western North Carolina, 1996-2011. Our best approximating model showed CNFS denning associated with sheltered landforms and montane conifers, primarily red spruce Picea rubens. As sheltering decreased, decreasing distance to conifers was important. Area with a high probability (>0.5) of occupancy was distributed over 18662 ha of habitat, mostly across 10 mountain ranges. Because nest-box surveys underrepresented areas >1750 m and CNFS forage in conifers, we combined areas of high occupancy with conifer GIS coverages to create an additional distribution model of likely habitat. Regionally, above 1385 m, we determined that 31795 ha could be occupied by CNFS. Known occupied patches ranged from

  18. User interface user's guide for HYPGEN

    NASA Technical Reports Server (NTRS)

    Chiu, Ing-Tsau

    1992-01-01

    The user interface (UI) of HYPGEN is developed using the Panel Library to shorten the learning curve for new users and provide easier ways to run HYPGEN for casual users as well as for advanced users. Menus, buttons, sliders, and type-in fields are used extensively in the UI to allow users to point and click with a mouse to choose various available options or to change values of parameters. On-line help is provided to give users information on using the UI without consulting the manual. Default values are set for most parameters and boundary conditions are determined by the UI to further reduce the effort needed to run HYPGEN; however, users are free to make any changes and save them in a file for later use. A hook to PLOT3D is built in to allow graphics manipulation. The viewpoint and min/max box for PLOT3D windows are computed by the UI and saved in a PLOT3D journal file. For large grids which take a long time to generate on workstations, the grid generator (HYPGEN) can be run on faster computers such as Crays, while the UI stays at the workstation.

  19. Mathematical modeling of polymer flooding using the unstructured Voronoi grid

    NASA Astrophysics Data System (ADS)

    Kireev, T. F.; Bulgakova, G. T.; Khatmullin, I. F.

    2017-12-01

    Effective recovery of unconventional oil reserves necessitates the development of enhanced oil recovery techniques such as polymer flooding. The study investigated a model of polymer flooding with the effects of adsorption and water salinity. The model takes into account six components that extend the classic black oil model: polymer, salt, water, dead oil, dry gas and dissolved gas. The problem is solved with the finite volume method on an unstructured Voronoi grid using a fully implicit scheme and Newton's method. Numerical simulation of polymer flooding is performed to compare several different grid configurations. The oil rates obtained on a hexagonal, locally refined Voronoi grid are shown to be more accurate than those obtained on a rectangular grid with the same number of cells, an effect caused by higher solution accuracy near the wells due to the local grid refinement. Minimization of the grid orientation effect by the hexagonal pattern is also demonstrated. However, in inter-well regions with large Voronoi cells the flood front tends to flatten and the water breakthrough moment is smoothed.
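    As a rough illustration of the grid construction only (the reservoir simulator itself is not reproduced), the sketch below builds a hexagonally packed Voronoi grid with SciPy and crudely refines it near two hypothetical well locations; seed spacing, domain size and the refinement rule are assumptions for the example.

```python
# Sketch: hexagonally packed seed points and their Voronoi cells with SciPy.
# Seed spacing, domain size and the well-refinement rule are illustrative.
import numpy as np
from scipy.spatial import Voronoi

def hex_seeds(nx=20, ny=20, dx=10.0):
    """Hexagonal (triangular-lattice) seed points; alternate rows offset by dx/2."""
    pts = []
    dy = dx * np.sqrt(3.0) / 2.0
    for j in range(ny):
        offset = 0.5 * dx if j % 2 else 0.0
        for i in range(nx):
            pts.append((i * dx + offset, j * dy))
    return np.array(pts)

def refine_near_wells(pts, wells, n_extra=8, r=3.0):
    """Add a ring of extra seeds around each well to refine cells locally."""
    ang = np.linspace(0.0, 2.0 * np.pi, n_extra, endpoint=False)
    rings = [np.column_stack([wx + r * np.cos(ang), wy + r * np.sin(ang)])
             for wx, wy in wells]
    return np.vstack([pts] + rings)

seeds = refine_near_wells(hex_seeds(), wells=[(50.0, 60.0), (150.0, 120.0)])
vor = Voronoi(seeds)          # Voronoi cells become the simulation control volumes
print("number of cells:", len(vor.point_region))
```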

  20. Visualization of the collective vortex-like motions in liquid argon and water: Molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Anikeenko, A. V.; Malenkov, G. G.; Naberukhin, Yu. I.

    2018-03-01

    We propose a new measure of collectivity of molecular motion in the liquid: the average vector of displacement of the particles, ⟨ΔR⟩, which initially have been localized within a sphere of radius Rsph and then have executed the diffusive motion during a time interval Δt. The more correlated the motion of the particles is, the longer will be the vector ⟨ΔR⟩. We visualize the picture of collective motions in molecular dynamics (MD) models of liquids by constructing the ⟨ΔR⟩ vectors and pinning them to the sites of the uniform grid which divides each of the edges of the model box into equal parts. MD models of liquid argon and water have been studied by this method. Qualitatively, the patterns of ⟨ΔR⟩ vectors are similar for these two liquids but differ in minor details. The most important result of our research is the revealing of the aggregates of ⟨ΔR⟩ vectors which have the form of extended flows which sometimes look like the parts of vortices. These vortex-like clusters of ⟨ΔR⟩ vectors have the mesoscopic size (of the order of 10 nm) and persist for tens of picoseconds. Dependence of the ⟨ΔR⟩ vector field on parameters Rsph, Δt, and on the model size has been investigated. This field in the models of liquids differs essentially from that in a random-walk model.
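    The ⟨ΔR⟩ construction lends itself to a short sketch; the version below assumes particle positions from two MD frames are already available as NumPy arrays, and the grid spacing, Rsph and the synthetic test data are placeholders rather than the authors' settings.

```python
# Sketch: average displacement vectors <ΔR> pinned to a uniform grid.
# positions_t0 / positions_t1 are assumed (N, 3) arrays of unwrapped coordinates
# from two MD frames separated by Δt; box_len, n_div and r_sph are illustrative.
import numpy as np

def mean_displacement_field(positions_t0, positions_t1, box_len, n_div, r_sph):
    """For every grid site, average ΔR over particles initially within r_sph of it."""
    disp = positions_t1 - positions_t0                       # per-particle ΔR over Δt
    edges = np.linspace(0.0, box_len, n_div + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gx, gy, gz = np.meshgrid(centers, centers, centers, indexing="ij")
    sites = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    field = np.zeros_like(sites)
    for k, site in enumerate(sites):
        d2 = np.sum((positions_t0 - site) ** 2, axis=1)
        mask = d2 < r_sph ** 2                               # particles starting in the sphere
        if mask.any():
            field[k] = disp[mask].mean(axis=0)               # the <ΔR> vector for this site
    return sites, field

# toy usage with random-walk-like data
rng = np.random.default_rng(0)
p0 = rng.uniform(0.0, 50.0, size=(5000, 3))
p1 = p0 + rng.normal(0.0, 0.5, size=p0.shape)
sites, field = mean_displacement_field(p0, p1, box_len=50.0, n_div=10, r_sph=3.0)
print("longest <ΔR> vector:", np.linalg.norm(field, axis=1).max())
```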

  1. Stochastic Thermodynamics of a Particle in a Box.

    PubMed

    Gong, Zongping; Lan, Yueheng; Quan, H T

    2016-10-28

    The piston system (particles in a box) is the simplest paradigmatic model in traditional thermodynamics. However, the recently established framework of stochastic thermodynamics (ST) fails to apply to this model system due to the embedded singularity in the potential. In this Letter, we study the ST of a particle in a box by adopting a novel coordinate transformation technique. By comparing with the exact solution of a breathing harmonic oscillator, we obtain analytical results for the work distribution for an arbitrary protocol in the linear response regime and verify various predictions of the fluctuation-dissipation relation. When applying the framework to the Brownian Szilard engine model, we obtain the optimal protocol λ_t = λ_0·2^(t/τ) for a given sufficiently long total time τ. Our study not only establishes a paradigm for studying ST of a particle in a box but also bridges the long-standing gap in the development of ST.

  2. A Box-Cox normal model for response times.

    PubMed

    Klein Entink, R H; van der Linden, W J; Fox, J-P

    2009-11-01

    The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. The model provides an enhanced description of the shape of the response time distributions, and its application in an educational measurement context is discussed at length.
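    To make the transformation step concrete (this is only the Box-Cox fit itself, not the authors' hierarchical response/response-time model), a minimal sketch with synthetic data might look as follows.

```python
# Sketch: estimate a Box-Cox transformation parameter for response times.
# The data are synthetic; the cited paper embeds the transformation in a larger
# hierarchical model for responses and response times, which is not reproduced.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt = rng.lognormal(mean=3.5, sigma=0.4, size=1000)     # synthetic response times

# lambda = 0 recovers the log transform; other values generalize it
transformed, lam = stats.boxcox(rt)
print(f"estimated Box-Cox lambda: {lam:.3f}")

# normality of the transformed times can be checked, e.g. with a Shapiro-Wilk test
w, p = stats.shapiro(transformed[:500])
print(f"Shapiro-Wilk on transformed data: W={w:.3f}, p={p:.3f}")
```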

  3. Semantic 3d City Model to Raster Generalisation for Water Run-Off Modelling

    NASA Astrophysics Data System (ADS)

    Verbree, E.; de Vries, M.; Gorte, B.; Oude Elberink, S.; Karimlou, G.

    2013-09-01

    Water run-off modelling applied within urban areas requires an appropriately detailed surface model represented by a raster height grid. Accurate simulations at this scale level have to take into account small but important water barriers and flow channels given by the large-scale map definitions of buildings, street infrastructure, and other terrain objects. Thus, these 3D features have to be rasterised such that each cell represents the height of the object class as well as possible given the cell size limitations. Small grid cells will result in realistic run-off modelling but with unacceptable computation times; larger grid cells with averaged height values will result in less realistic run-off modelling but fast computation times. This paper introduces a height grid generalisation approach in which the surface characteristics that most influence the water run-off flow are preserved. The first step is to create a detailed surface model (1:1.000), combining high-density laser data with a detailed topographic base map. The topographic map objects are triangulated to a set of TIN-objects by taking into account the semantics of the different map object classes. These TIN objects are then rasterised to two grids with a 0.5 m cell spacing: one grid for the object class labels and the other for the TIN-interpolated height values. The next step is to generalise both raster grids to a lower resolution using a procedure that considers the class label of each cell and that of its neighbours. The results of this approach are tested and validated by water run-off model runs for different cell-spaced height grids at a pilot area in Amersfoort (the Netherlands). Two national datasets were used in this study: the large scale Topographic Base map (BGT, map scale 1:1.000), and the National height model of the Netherlands AHN2 (10 points per square meter on average). Comparison between the original AHN2 height grid and the semantically enriched and then generalised height grids shows that water barriers are better preserved with the new method. This research confirms the idea that topographical information, mainly the boundary locations and object classes, can enrich the height grid for this hydrological application.
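    Purely as a toy illustration of class-aware generalisation (the published procedure also weighs neighbouring cells, which is omitted here), one might let a "barrier" class override the block average so walls and buildings survive the coarsening; class codes and block size below are assumptions.

```python
# Sketch: class-aware generalisation of a fine height grid to a coarser grid.
# Class codes, block size and the override rule are illustrative assumptions;
# the published procedure also inspects neighbouring cells.
import numpy as np

BARRIER_CLASSES = {1, 2}          # e.g. buildings, walls (hypothetical codes)

def generalise(height, label, block=4):
    """Average heights per block, but keep the max height where barriers occur."""
    ny, nx = height.shape
    ny_c, nx_c = ny // block, nx // block
    out = np.empty((ny_c, nx_c))
    for j in range(ny_c):
        for i in range(nx_c):
            h = height[j*block:(j+1)*block, i*block:(i+1)*block]
            c = label[j*block:(j+1)*block, i*block:(i+1)*block]
            barrier = np.isin(c, list(BARRIER_CLASSES))
            # preserve water barriers: take the barrier height, not the block mean
            out[j, i] = h[barrier].max() if barrier.any() else h.mean()
    return out

# toy 0.5 m grids: flat terrain with one wall of class 1
h = np.zeros((16, 16)); h[8, :] = 1.2
c = np.zeros((16, 16), dtype=int); c[8, :] = 1
print(generalise(h, c, block=4))
```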

  4. Dynamic Smagorinsky model on anisotropic grids

    NASA Technical Reports Server (NTRS)

    Scotti, A.; Meneveau, C.; Fatica, M.

    1996-01-01

    Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.

  5. Investigation of Grid Adaptation to Reduce Computational Efforts for a 2-D Hydrogen-Fueled Dual-Mode Scramjet

    NASA Astrophysics Data System (ADS)

    Foo, Kam Keong

    A two-dimensional dual-mode scramjet flowpath is developed and evaluated using the ANSYS Fluent density-based flow solver with various computational grids. Results are obtained for fuel-off, fuel-on non-reacting, and fuel-on reacting cases at different equivalence ratios. A one-step global chemical kinetics hydrogen-air model is used in conjunction with the eddy-dissipation model. Coarse, medium and fine computational grids are used to evaluate grid sensitivity and to investigate a lack of grid independence. Different grid adaptation strategies are performed on the coarse grid in an attempt to emulate the solutions obtained from the finer grids. The goal of this study is to investigate the feasibility of using various mesh adaptation criteria to significantly decrease computational efforts for high-speed reacting flows.

  6. Finite difference time domain grid generation from AMC helicopter models

    NASA Technical Reports Server (NTRS)

    Cravey, Robin L.

    1992-01-01

    A simple technique is presented which forms a cubic grid model of a helicopter from an Aircraft Modeling Code (AMC) input file. The AMC input file defines the helicopter fuselage as a series of polygonal cross sections. The cubic grid model is used as an input to a Finite Difference Time Domain (FDTD) code to obtain predictions of antenna performance on a generic helicopter model. The predictions compare reasonably well with measured data.

  7. Analysis of the Multi Strategy Goal Programming for Micro-Grid Based on Dynamic ant Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Qiu, J. P.; Niu, D. X.

    Micro-grids are one of the key technologies for future energy supply. Taking the economic planning, reliability, and environmental protection of a micro-grid as the basis, we analyze multi-strategy goal programming problems for a micro-grid containing wind power, solar power, a battery, and a micro gas turbine. Mathematical models of the generation characteristics and energy dissipation of each source are established, and the multi-objective function for micro-grid planning under different operating strategies is converted to a single-objective model based on the AHP method. An example analysis shows that, in combination with a dynamic ant mixed genetic algorithm, the optimal power output of this model can be obtained.
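    To illustrate only the AHP step that collapses the multiple objectives into a single one, the sketch below derives weights from a pairwise comparison matrix via its principal eigenvector and forms a weighted sum; the comparison values and per-strategy scores are made-up placeholders, not from the cited analysis.

```python
# Sketch: AHP-style scalarisation of a multi-objective micro-grid plan.
# The pairwise comparison matrix and the per-strategy objective scores are
# placeholders; only the weighting mechanics are illustrated.
import numpy as np

# pairwise comparisons of (economy, reliability, environment), Saaty scale
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                   # principal eigenvector -> weights

# normalised objective scores (rows: candidate operating strategies)
scores = np.array([[0.8, 0.6, 0.7],
                   [0.6, 0.9, 0.5],
                   [0.7, 0.7, 0.9]])
single_objective = scores @ w                     # weighted-sum scalarisation
print("weights:", np.round(w, 3))
print("best strategy:", int(np.argmax(single_objective)))
```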

  8. Price of gasoline: forecasting comparisons. [Box-Jenkins, econometric, and regression methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bopp, A.E.; Neri, J.A.

    Gasoline prices are simulated using three popular forecasting methodologies: a Box-Jenkins type method, an econometric method, and a regression method. One-period-ahead and 18-period-ahead comparisons are made. For the one-period-ahead comparison, a Box-Jenkins type time-series model simulates best, although all methods do well. However, for the 18-period simulation, the econometric and regression methods perform substantially better than the Box-Jenkins formulation. A rationale for and implications of these results are discussed. 11 references.
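    In the spirit of that comparison (but not the authors' specification), a toy sketch might fit a Box-Jenkins ARIMA model and a simple lag regression to a synthetic price series and compare 18-step-ahead errors; the model orders and data-generating process below are illustrative only.

```python
# Sketch: compare 18-step-ahead forecasts from a Box-Jenkins (ARIMA) model and
# a naive lag regression on a synthetic price series.  Orders and the
# data-generating process are illustrative only.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
n = 120
price = 50 + np.cumsum(0.2 + rng.normal(0.0, 1.0, n))   # synthetic monthly prices
train, test = price[:-18], price[-18:]

# Box-Jenkins style ARIMA(1,1,1)
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=18)

# simple regression on a one-month lag, iterated forward for 18 steps
X = sm.add_constant(train[:-1])
ols = sm.OLS(train[1:], X).fit()
reg_fc, last = [], train[-1]
for _ in range(18):
    last = ols.params[0] + ols.params[1] * last
    reg_fc.append(last)

rmse = lambda f: np.sqrt(np.mean((np.asarray(f) - test) ** 2))
print(f"ARIMA 18-step RMSE: {rmse(arima_fc):.2f}  lag-regression RMSE: {rmse(reg_fc):.2f}")
```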

  9. Correlation of Structural Analysis and Test Results for the McDonnell Douglas Stitched/RFI All-Composite Wing Stub Box

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Jegley, Dawn C.; Bush, Harold G.; Hinrichs, Stephen C.

    1996-01-01

    The analytical and experimental results of an all-composite wing stub box are presented in this report. The wing stub box, which is representative of an inboard portion of a commercial transport high-aspect-ratio wing, was fabricated from stitched graphite-epoxy material with a Resin Film Infusion manufacturing process. The wing stub box was designed and constructed by the McDonnell Douglas Aerospace Company as part of the NASA Advanced Composites Technology program. The test article contained metallic load-introduction structures on the inboard and outboard ends of the graphite-epoxy wing stub box. The root end of the inboard load introduction structure was attached to a vertical reaction structure, and an upward load was applied to the outermost tip of the outboard load introduction structure to induce bending of the wing stub box. A finite element model was created in which the center portion of the wing-stub-box upper cover panel was modeled with a refined mesh. The refined mesh was required to represent properly the geometrically nonlinear structural behavior of the upper cover panel and to predict accurately the strains in the stringer webs of the stiffened upper cover panel. The analytical and experimental results for deflections and strains are in good agreement.

  10. Design of the Cross Section Shape of AN Aluminum Crash Box for Crashworthiness Enhancement of a CAR

    NASA Astrophysics Data System (ADS)

    Kim, S. B.; Huh, H.; Lee, G. H.; Yoo, J. S.; Lee, M. Y.

    This paper deals with the crashworthiness of an aluminum crash box for an auto-body with various cross-section shapes: a rectangle, a hexagon and an octagon. First, crash boxes with the various cross sections were tested by numerical simulation to obtain the energy absorption capacity and the mean load. In the case of simple axial crushing, the octagonal shape shows a higher mean load and energy absorption than the other two shapes. Secondly, the crash boxes were assembled into a simplified auto-body model to assess overall crashworthiness. The model consists of a bumper, crash boxes, front side members and a sub-frame representing the behavior of a full car at low-speed impact. The analysis shows that the rectangular cross section performs best as a crash box that deforms prior to the front side member. The hexagonal and octagonal cross sections undergo torsion and local buckling as the width of the cross section decreases, while the rectangular cross section does not. The simulation of the rectangular crash box was verified against experiment; the simulated deformed shape and load-displacement curve agree closely with the experimental results.

  11. Monitoring and Modeling Performance of Communications in Computational Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Le, Thuy T.

    2003-01-01

    Computational grids may include many machines located at a number of sites. For efficient use of the grid we need the ability to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know the exact parameters of the communication hardware and the current communication traffic, so we should rely on a model of network performance to estimate the message delivery time. Our approach to constructing such a model is based on observing message delivery times over various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show the presence of multiple bands in the logarithm of the message delivery times. These bands represent the multiple paths messages travel between the grid machines and are incorporated in our multiband model.

  12. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, sonic boom and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.

  13. Integrated Devices and Systems | Grid Modernization | NREL

    Science.gov Websites


  14. Evaluation of a risk-based environmental hot spot delineation algorithm.

    PubMed

    Sinha, Parikhit; Lambert, Michael B; Schew, William A

    2007-10-22

    Following remedial investigations of hazardous waste sites, remedial strategies may be developed that target the removal of "hot spots," localized areas of elevated contamination. For a given exposure area, a hot spot may be defined as a sub-area that causes risks for the whole exposure area to be unacceptable. The converse of this statement may also apply: when a hot spot is removed from within an exposure area, risks for the exposure area may drop below unacceptable thresholds. The latter is the motivation for a risk-based approach to hot spot delineation, which was evaluated using Monte Carlo simulation. Random samples taken from a virtual site ("true site") were used to create an interpolated site. The latter was gridded and concentrations from the center of each grid box were used to calculate 95% upper confidence limits on the mean site contaminant concentration and corresponding hazard quotients for a potential receptor. Grid cells with the highest concentrations were removed and hazard quotients were recalculated until the site hazard quotient dropped below the threshold of 1. The grid cells removed in this way define the spatial extent of the hot spot. For each of the 100,000 Monte Carlo iterations, the delineated hot spot was compared to the hot spot in the "true site." On average, the algorithm was able to delineate hot spots that were collocated with and equal to or greater in size than the "true hot spot." When delineated hot spots were mapped onto the "true site," setting contaminant concentrations in the mapped area to zero, the hazard quotients for these "remediated true sites" were on average within 5% of the acceptable threshold of 1.
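    A simplified sketch of the iterative delineation loop described above is given below; it substitutes a t-based normal approximation for the 95% UCL and a single screening level in place of the full exposure-dose calculation, and all site values are synthetic, so it illustrates the idea rather than the authors' exact algorithm.

```python
# Sketch: risk-based hot spot delineation by iteratively removing the highest
# concentration grid cells until the site hazard quotient drops below 1.
# The 95% UCL is a simple t-based approximation; the screening level that
# converts concentration to a hazard quotient is a made-up placeholder.
import numpy as np
from scipy import stats

def ucl95(x):
    """One-sided 95% upper confidence limit on the mean (t approximation)."""
    n = len(x)
    return x.mean() + stats.t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

def delineate_hotspot(conc, screening_level):
    """Return indices of cells whose removal brings the site HQ below 1."""
    conc = conc.astype(float).copy()
    removed = []
    order = np.argsort(conc)[::-1]                 # highest concentrations first
    for idx in order:
        if ucl95(conc) / screening_level < 1.0:    # site HQ acceptable -> stop
            break
        conc[idx] = 0.0                            # "remediate" this cell
        removed.append(int(idx))
    return removed

rng = np.random.default_rng(3)
site = rng.lognormal(mean=1.0, sigma=0.6, size=100)   # background concentrations
site[:5] += 40.0                                      # embed a hot spot
print("cells delineated as hot spot:", delineate_hotspot(site, screening_level=5.0))
```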

  15. An effective box trap for capturing lynx

    Treesearch

    Jay A. Kolbe; John R. Squires; Thomas W. Parker

    2003-01-01

    We designed a box trap for capturing lynx (Lynx lynx) that is lightweight, safe, effective, and less expensive than many commercial models. It can be constructed in approximately 3-4 hours from readily available materials. We used this trap to capture 40 lynx 89 times (96% of lynx entering traps) and observed no trapping related injuries. We compare our box...

  16. The BioGRID interaction database: 2013 update.

    PubMed

    Chatr-Aryamontri, Andrew; Breitkreutz, Bobby-Joe; Heinicke, Sven; Boucher, Lorrie; Winter, Andrew; Stark, Chris; Nixon, Julie; Ramage, Lindsay; Kolas, Nadine; O'Donnell, Lara; Reguly, Teresa; Breitkreutz, Ashton; Sellam, Adnane; Chen, Daici; Chang, Christie; Rust, Jennifer; Livstone, Michael; Oughtred, Rose; Dolinski, Kara; Tyers, Mike

    2013-01-01

    The Biological General Repository for Interaction Datasets (BioGRID: http://thebiogrid.org) is an open access archive of genetic and protein interactions that are curated from the primary biomedical literature for all major model organism species. As of September 2012, BioGRID houses more than 500 000 manually annotated interactions from more than 30 model organisms. BioGRID maintains complete curation coverage of the literature for the budding yeast Saccharomyces cerevisiae, the fission yeast Schizosaccharomyces pombe and the model plant Arabidopsis thaliana. A number of themed curation projects in areas of biomedical importance are also supported. BioGRID has established collaborations and/or shares data records for the annotation of interactions and phenotypes with most major model organism databases, including Saccharomyces Genome Database, PomBase, WormBase, FlyBase and The Arabidopsis Information Resource. BioGRID also actively engages with the text-mining community to benchmark and deploy automated tools to expedite curation workflows. BioGRID data are freely accessible through both a user-defined interactive interface and in batch downloads in a wide variety of formats, including PSI-MI2.5 and tab-delimited files. BioGRID records can also be interrogated and analyzed with a series of new bioinformatics tools, which include a post-translational modification viewer, a graphical viewer, a REST service and a Cytoscape plugin.

  17. Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids

    NASA Astrophysics Data System (ADS)

    Abbà, Antonella; Campaniello, Dario; Nini, Michele

    2017-06-01

    The definition of the characteristic filter size to be used for subgrid-scale models in large eddy simulation using irregular grids is still an open problem. We investigate some different approaches to the definition of the filter length for anisotropic subgrid scale models and we propose a tensorial formulation based on the inertial ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropy of the grid is explicitly taken into account with the tensorial filter size.

  18. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby contaminating the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  19. Regional models of the gravity field from terrestrial gravity data of heterogeneous quality and density

    NASA Astrophysics Data System (ADS)

    Talvik, Silja; Oja, Tõnis; Ellmann, Artu; Jürgenson, Harli

    2014-05-01

    Gravity field models at a regional scale are needed for a number of applications, for example national geoid computation, processing of precise levelling data and geological modelling. Thus the methods applied for modelling the gravity field from surveyed gravimetric information need to be considered carefully. The influence of using different gridding methods, the inclusion of unit or realistic weights and indirect gridding of free air anomalies (FAA) are investigated in the study. Known gridding methods such as kriging (KRIG), least squares collocation (LSCO), continuous curvature (CCUR) and optimal Delaunay triangulation (ODET) are used for production of gridded gravity field surfaces. As the quality of data collected varies considerably depending on the methods and instruments available or used in surveying, it is important to weight the input data. This puts additional demands on data maintenance, as accuracy information needs to be available for each data point participating in the modelling, which is complicated by older gravity datasets where the uncertainties of not only gravity values but also supplementary information such as survey point position are not always known very accurately. A number of gravity field applications (e.g. geoid computation) call for an FAA model, the acquisition of which is also investigated. Instead of direct gridding it could be more appropriate to proceed with indirect FAA modelling using a Bouguer anomaly grid to reduce the effect of topography on the resulting FAA model (e.g. near terraced landforms). The inclusion of different gridding methods, weights and indirect FAA modelling helps to improve gravity field modelling methods. It becomes possible to estimate the impact of varying methodological approaches on gravity field modelling as statistical output is compared. Such knowledge helps assess the accuracy of gravity field models and their effect on the aforementioned applications.

  20. Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramer, L. M.; Rounds, J.; Burleyson, C. D.

    Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and its relationship with weather conditions are examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and datasets were examined. A penalized logistic regression model fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at different time scales was examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. The methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.

  1. Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramer, Lisa M.; Rounds, J.; Burleyson, C. D.

    Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and the relationship with weather conditions were examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and combinations of predictive variables were examined. A penalized logistic regression model which was fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at various time scales was examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. In conclusion, the methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.

  2. Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days

    DOE PAGES

    Bramer, Lisa M.; Rounds, J.; Burleyson, C. D.; ...

    2017-09-22

    Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and the relationship with weather conditions were examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and combinations of predictive variables were examined. A penalized logistic regression model which was fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at various time scales was examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. In conclusion, the methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.

  3. Technical Report Series on Global Modeling and Data Assimilation. Volume 16; Filtering Techniques on a Stretched Grid General Circulation Model

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.; Sawyer, William; Suarez, Max J. (Editor); Fox-Rabinowitz, Michael S.

    1999-01-01

    This report documents the techniques used to filter quantities on a stretched grid general circulation model. Standard high-latitude filtering techniques (e.g., using an FFT (Fast Fourier Transformations) to decompose and filter unstable harmonics at selected latitudes) applied on a stretched grid are shown to produce significant distortions of the prognostic state when used to control instabilities near the pole. A new filtering technique is developed which accurately accounts for the non-uniform grid by computing the eigenvectors and eigenfrequencies associated with the stretching. A filter function, constructed to selectively damp those modes whose associated eigenfrequencies exceed some critical value, is used to construct a set of grid-spaced weights which are shown to effectively filter without distortion. Both offline and GCM (General Circulation Model) experiments are shown using the new filtering technique. Finally, a brief examination is also made on the impact of applying the Shapiro filter on the stretched grid.

  4. Comparison of Models for Spacer Grid Pressure Loss in Nuclear Fuel Bundles for One and Two-Phase Flows

    NASA Astrophysics Data System (ADS)

    Maskal, Alan B.

    Spacer grids maintain the structural integrity of the fuel rods within fuel bundles of nuclear power plants. They can also improve flow characteristics within the nuclear reactor core. However, spacer grids add reactor coolant pressure losses, which require estimation and engineering into the design. Several mathematical models and computer codes were developed over decades to predict spacer grid pressure loss. Most models use generalized characteristics, measured by older, less precise equipment. The study of OECD/US-NRC BWR Full-Size Fine Mesh Bundle Tests (BFBT) provides updated and detailed experimental single and two-phase results, using technically advanced flow measurements for a wide range of boundary conditions. This thesis compares the predictions from the mathematical models to the BFBT experimental data by utilizing statistical formulae for accuracy and precision. This thesis also analyzes the effects of BFBT flow characteristics on spacer grids. No single model has been identified as valid for all flow conditions. However, some models' predictions perform better than others within a range of flow conditions, based on the accuracy and precision of the models' predictions. This study also demonstrates that pressure and flow quality have a significant effect on two-phase flow spacer grid models' biases.

  5. A Petri Net model for distributed energy system

    NASA Astrophysics Data System (ADS)

    Konopko, Joanna

    2015-12-01

    Electrical networks need to evolve to become more intelligent, more flexible and less costly. The smart grid, the next generation of power delivery, uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and intelligent electricity infrastructure. In this paper, a complete model of a smart grid is proposed using Generalized Stochastic Petri Nets (GSPN). Simulation of the created model is also explored; it allows analysis of how closely the behavior of the model matches that of a real smart grid.

  6. The analysis of polar clouds from AVHRR satellite data using pattern recognition techniques

    NASA Technical Reports Server (NTRS)

    Smith, William L.; Ebert, Elizabeth

    1990-01-01

    The cloud cover in a set of summertime and wintertime AVHRR data from the Arctic and Antarctic regions was analyzed using a pattern recognition algorithm. The data were collected by the NOAA-7 satellite on 6 to 13 Jan. and 1 to 7 Jul. 1984 between 60 deg and 90 deg north and south latitude in 5 spectral channels, at the Global Area Coverage (GAC) resolution of approximately 4 km. This data embodied a Polar Cloud Pilot Data Set which was analyzed by a number of research groups as part of a polar cloud algorithm intercomparison study. This study was intended to determine whether the additional information contained in the AVHRR channels (beyond the standard visible and infrared bands on geostationary satellites) could be effectively utilized in cloud algorithms to resolve some of the cloud detection problems caused by low visible and thermal contrasts in the polar regions. The analysis described makes use of a pattern recognition algorithm which estimates the surface and cloud classification, cloud fraction, and surface and cloudy visible (channel 1) albedo and infrared (channel 4) brightness temperatures on a 2.5 x 2.5 deg latitude-longitude grid. In each grid box several spectral and textural features were computed from the calibrated pixel values in the multispectral imagery, then used to classify the region into one of eighteen surface and/or cloud types using the maximum likelihood decision rule. A slightly different version of the algorithm was used for each season and hemisphere because of differences in categories and because of the lack of visible imagery during winter. The classification of the scene is used to specify the optimal AVHRR channel for separating clear and cloudy pixels using a hybrid histogram-spatial coherence method. This method estimates values for cloud fraction, clear and cloudy albedos and brightness temperatures in each grid box. The choice of a class-dependent AVHRR channel allows for better separation of clear and cloudy pixels than does a global choice of a visible and/or infrared threshold. The classification also prevents erroneous estimates of large fractional cloudiness in areas of cloudfree snow and sea ice. The hybrid histogram-spatial coherence technique and the advantages of first classifying a scene in the polar regions are detailed. The complete Polar Cloud Pilot Data Set was analyzed and the results are presented and discussed.
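    The maximum likelihood decision rule used for the per-grid-box classification can be sketched compactly; the version below assumes per-class training samples of a two-component feature vector (the real scheme uses 18 surface/cloud classes and both spectral and textural features, and the hybrid histogram-spatial coherence step is not reproduced), and all training data are synthetic placeholders.

```python
# Sketch: Gaussian maximum likelihood classification of grid-box feature vectors.
# Classes, features and training data are synthetic placeholders.
import numpy as np
from scipy.stats import multivariate_normal

def train(classes):
    """Fit a mean vector and covariance matrix to each class's training samples."""
    return {name: (x.mean(axis=0), np.cov(x, rowvar=False)) for name, x in classes.items()}

def classify(x, params):
    """Assign the class with the largest Gaussian log-likelihood."""
    scores = {name: multivariate_normal.logpdf(x, mean=m, cov=c)
              for name, (m, c) in params.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(4)
training = {
    "open water": rng.normal([0.08, 265.0], [0.02, 3.0], size=(200, 2)),
    "sea ice":    rng.normal([0.60, 250.0], [0.05, 4.0], size=(200, 2)),
    "cloud":      rng.normal([0.45, 235.0], [0.08, 6.0], size=(200, 2)),
}
params = train(training)
print(classify(np.array([0.50, 238.0]), params))   # likely "cloud"
```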

  7. Comparison of distributed reacceleration and leaky-box models of cosmic-ray abundances (Z = 3-28)

    NASA Technical Reports Server (NTRS)

    Letaw, John R.; Silberberg, Rein; Tsao, C. H.

    1993-01-01

    A large collection of elemental and isotopic cosmic-ray data has been analyzed using the leaky-box transport model with and without reacceleration in the interstellar medium. Abundances of isotopes and elements with charges Z = 3-28 and energies from 10 MeV/nucleon to 1 TeV/nucleon were explored. Our results demonstrate that reacceleration models make detailed and accurate predictions with the same number of parameters as standard leaky-box models, or fewer. Ad hoc fitting parameters in the standard model are replaced by astrophysically significant reacceleration parameters. Distributed reacceleration models explain the peak in secondary-to-primary ratios around 1 GeV/nucleon. They diminish the discrepancy between rigidity-dependent leakage and energy-independent anisotropy. They also offer the possibility of understanding isotopic anomalies at low energy.

  8. CSciBox: An Intelligent Assistant for Dating Ice and Sediment Cores

    NASA Astrophysics Data System (ADS)

    Finlinson, K.; Bradley, E.; White, J. W. C.; Anderson, K. A.; Marchitto, T. M., Jr.; de Vesine, L. R.; Jones, T. R.; Lindsay, C. M.; Israelsen, B.

    2015-12-01

    CSciBox is an integrated software system for the construction and evaluation of age models of paleo-environmental archives. It incorporates a number of data-processing and visualization facilities, ranging from simple interpolation to reservoir-age correction and 14C calibration via the Calib algorithm, as well as a number of firn and ice-flow models. It employs modern database technology to store paleoclimate proxy data and analysis results in an easily accessible and searchable form, and offers the user access to those data and computational elements via a modern graphical user interface (GUI). In the case of truly large data or computations, CSciBox is parallelizable across modern multi-core processors, or clusters, or even the cloud. The code is open source and freely available on github, as are one-click installers for various versions of Windows and Mac OSX. The system's architecture allows users to incorporate their own software in the form of computational components that can be built smoothly into CSciBox workflows, taking advantage of CSciBox's GUI, data importing facilities, and plotting capabilities. To date, BACON and StratiCounter have been integrated into CSciBox as embedded components. The user can manipulate and compose all of these tools and facilities as she sees fit. Alternatively, she can employ CSciBox's automated reasoning engine, which uses artificial intelligence techniques to explore the gamut of age models and cross-dating scenarios automatically. The automated reasoning engine captures the knowledge of expert geoscientists, and can output a description of its reasoning.

  9. Cosmic ray antiprotons in closed galaxy model

    NASA Technical Reports Server (NTRS)

    Protheroe, R.

    1981-01-01

    The flux of secondary antiprotons expected for the leaky-box model was calculated as well as that for the closed galaxy model of Peters and Westergard (1977). The antiproton/proton ratio observed at several GeV is a factor of 4 higher than the prediction for the leaky-box model but is consistent with that predicted for the closed galaxy model. New low energy data is not consistent with either model. The possibility of a primary antiproton component is discussed.

  10. The Particle/Wave-in-a-Box Model in Dutch Secondary Schools

    ERIC Educational Resources Information Center

    Hoekzema, Dick; van den Berg, Ed; Schooten, Gert; van Dijk, Leo

    2007-01-01

    The combination of mathematical and conceptual difficulties makes teaching quantum physics at secondary schools a precarious undertaking. With many of the conceptual difficulties being unavoidable, simplifying the mathematics becomes top priority. The particle/wave-in-a-box provides a teaching model which includes many aspects of serious …

  11. SUSCEPTIBILITY OF A GULF OF MEXICO ESTUARY TO HYPOXIA: AN ANALYSIS USING BOX MODELS

    EPA Science Inventory

    The extent of hypoxia and the physical factors affecting development and maintenance of hypoxia were examined for Pensacola Bay, Florida (USA) by conducting monthly water quality surveys for 3 years and by constructing salt-and-water balance box models using the resulting data. W...

  12. The Analysis of Organizational Diagnosis on Based Six Box Model in Universities

    ERIC Educational Resources Information Center

    Hamid, Rahimi; Siadat, Sayyed Ali; Reza, Hoveida; Arash, Shahin; Ali, Nasrabadi Hasan; Azizollah, Arbabisarjou

    2011-01-01

    Purpose: The analysis of organizational diagnosis based on the six box model at universities. Research method: The research method was a descriptive survey. The statistical population consisted of 1544 university faculty members, from which 218 persons were chosen as the sample through stratified random sampling. The research instrument was organizational…

  13. Prestressing force monitoring method for a box girder through distributed long-gauge FBG sensors

    NASA Astrophysics Data System (ADS)

    Chen, Shi-Zhi; Wu, Gang; Xing, Tuo; Feng, De-Cheng

    2018-01-01

    Monitoring prestressing forces is essential for prestressed concrete box girder bridges. However, current methods for monitoring prestressing force are not applicable to a box girder, either because the sensor setup is constrained or because the shear lag effect is not properly considered. Building on a previous analysis model of the shear lag effect in box girders, this paper proposes an indirect monitoring method for on-site determination of the prestressing force in a concrete box girder using distributed long-gauge fiber Bragg grating sensors. The performance of the method was initially verified using numerical simulation for three different distribution forms of prestressing tendons. Then, an experiment involving two concrete box girders was conducted as a preliminary study of the method's feasibility under different prestressing levels. The results of both the numerical simulation and the lab experiment validated the method's practicability in a box girder.

  14. Using Cloud-to-Ground Lightning Climatologies to Initialize Gridded Lightning Threat Forecasts for East Central Florida

    NASA Technical Reports Server (NTRS)

    Lambert, Winnie; Sharp, David; Spratt, Scott; Volkmer, Matthew

    2005-01-01

    Each morning, the forecasters at the National Weather Service in Melbourne, FL (NWS MLB) produce an experimental cloud-to-ground (CG) lightning threat index map for their county warning area (CWA) that is posted to their web site (http://www.srh.weather.gov/mlb/ghwo/lightning.shtml). Given the hazardous nature of lightning in central Florida, especially during the warm season months of May-September, these maps help users factor the threat of lightning, relative to their location, into their daily plans. The maps are color-coded in five levels from Very Low to Extreme, with threat level definitions based on the probability of lightning occurrence and the expected amount of CG activity. On a day in which thunderstorms are expected, there are typically two or more threat levels depicted spatially across the CWA. The locations of relative lightning threat maxima and minima often depend on the position and orientation of the low-level ridge axis, forecast propagation and interaction of sea/lake/outflow boundaries, expected evolution of moisture and stability fields, and other factors that can influence the spatial distribution of thunderstorms over the CWA. The lightning threat index maps are issued for the 24-hour period beginning at 1200 UTC (0700 AM EST) each day with a grid resolution of 5 km x 5 km. Product preparation is performed on the AWIPS Graphical Forecast Editor (GFE), which is the standard NWS platform for graphical editing. Currently, the forecasters create each map manually, starting with a blank map. To improve efficiency of the forecast process, NWS MLB requested that the Applied Meteorology Unit (AMU) create gridded warm season lightning climatologies that could be used as first-guess inputs to initialize lightning threat index maps. The gridded values requested included CG strike densities and frequency of occurrence stratified by synoptic-scale flow regime. The intent is to increase consistency between forecasters while enabling them to focus on the mesoscale detail of the forecast, ultimately benefiting the end-users of the product. Several studies took place at Florida State University (FSU) and NWS Tallahassee (TAE), in which daily flow regimes were created using Florida 1200 UTC synoptic soundings and CG strike densities from National Lightning Detection Network (NLDN) data. The densities were created on a 2.5 km x 2.5 km grid for every hour of every day during the warm seasons in the years 1989-2004. The grids encompass an area that includes the entire state of Florida and adjacent Atlantic and Gulf of Mexico waters. Personnel at the two organizations provided these data and supporting software for the work performed by the AMU. The densities were first stratified by flow regime, then by time in 1-, 3-, 6-, 12-, and 24-hour increments while maintaining the 2.5 km x 2.5 km grid resolution. A CG frequency of occurrence was calculated for each stratification and grid box by counting the number of days with lightning and dividing by the total number of days in the data set. New CG strike densities were calculated for each stratification and grid box by summing the strike number values over all warm seasons, then normalizing by dividing the summed values by the number of lightning days. This makes the densities conditional on whether lightning occurred. The frequency climatology values will be used by forecasters as proxy inputs for lightning probability, while the density climatology values will be used for CG amount.
In addition to the benefits outlined above, these climatologies will provide improved temporal and spatial resolution, expansion of the lightning threat area to include adjacent coastal waters, and the potential to extend the forecast to include the day-2 period. This presentation will describe the lightning threat index map, discuss the work done to create the maps initialized with climatological guidance, and show examples of the climatological CG lightning densities and frequencies of occurrence based on flow regime.
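    The frequency and conditional-density computation described above reduces to a few array operations once strike counts are binned per day and grid box; the sketch below assumes such a pre-binned array and uses synthetic data, with grid dimensions and the flow-regime stratification treated as placeholders.

```python
# Sketch: per-grid-box CG lightning frequency of occurrence and conditional
# strike density for one flow regime.  `daily_counts` is assumed to already
# hold strike counts with shape (n_days, ny, nx); values here are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_days, ny, nx = 300, 40, 40
daily_counts = rng.poisson(lam=0.3, size=(n_days, ny, nx))   # fake warm-season data

lightning_day = daily_counts > 0                  # was there any CG strike that day?
n_lightning_days = lightning_day.sum(axis=0)      # per grid box

# frequency of occurrence: lightning days / total days in the stratification
frequency = n_lightning_days / n_days

# conditional density: total strikes / number of lightning days (0 where none)
total_strikes = daily_counts.sum(axis=0)
with np.errstate(invalid="ignore", divide="ignore"):
    conditional_density = np.where(n_lightning_days > 0,
                                   total_strikes / n_lightning_days, 0.0)

print("max frequency:", frequency.max(),
      "max conditional density:", conditional_density.max())
```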

  15. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model’s horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model’s terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.

  16. OBLIMAP 2.0: a fast climate model-ice sheet model coupler including online embeddable mapping routines

    NASA Astrophysics Data System (ADS)

    Reerink, Thomas J.; van de Berg, Willem Jan; van de Wal, Roderik S. W.

    2016-11-01

    This paper accompanies the second OBLIMAP open-source release. The package is developed to map climate fields between a general circulation model (GCM) and an ice sheet model (ISM) in both directions by using optimal aligned oblique projections, which minimize distortions. The curvatures of the GCM and ISM grid surfaces differ, both grids may be irregularly spaced, and the grid resolutions may differ greatly. OBLIMAP's stand-alone version is able to map data sets that differ in various aspects onto the same ISM grid. Each grid may either coincide with the surface of a sphere, an ellipsoid or a flat plane, while the grid types might differ. Re-projection of, for example, ISM data sets is also facilitated. This is demonstrated by relevant applications concerning the major ice caps. As the stand-alone version also applies to the reverse mapping direction, it can be used as an offline coupler. Furthermore, OBLIMAP 2.0 is an embeddable GCM-ISM coupler, suited for high-frequency online coupled experiments. A new fast scan method is presented for structured grids as an alternative for the former time-consuming grid search strategy, realising a performance gain of several orders of magnitude and enabling the mapping of high-resolution data sets with a much larger number of grid nodes. Further, a highly flexible masked mapping option is added. The limitation of the fast scan method with respect to unstructured and adaptive grids is discussed together with a possible future parallel Message Passing Interface (MPI) implementation.

  17. Evaluation of Grid Modification Methods for On- and Off-Track Sonic Boom Analysis

    NASA Technical Reports Server (NTRS)

    Nayani, Sudheer N.; Campbell, Richard L.

    2013-01-01

    Grid modification methods have been under development at NASA to enable better predictions of low boom pressure signatures from supersonic aircraft. As part of this effort, two new codes, Stretched and Sheared Grid - Modified (SSG) and Boom Grid (BG), have been developed in the past year. The CFD results from these codes have been compared with ones from the earlier grid modification codes Stretched and Sheared Grid (SSGRID) and Mach Cone Aligned Prism (MCAP) and also with the available experimental results. NASA's unstructured grid suite of software TetrUSS and the automatic sourcing code AUTOSRC were used for base grid generation and flow solutions. The BG method has been evaluated on three wind tunnel models. Pressure signatures have been obtained up to two body lengths below a Gulfstream aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 53 degrees) cases. On-track pressure signatures up to ten body lengths below a Straight Line Segmented Leading Edge (SLSLE) wind tunnel model have been extracted. Good agreement with the wind tunnel results has been obtained. Pressure signatures have been obtained at 1.5 body lengths below a Lockheed Martin aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 40 degrees) cases. Grid sensitivity studies have been carried out to investigate any grid size related issues. Methods have been evaluated for fully turbulent, mixed laminar/turbulent and fully laminar flow conditions.

  18. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
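    A toy version of the mixed cell-block/cell-face averaging idea follows: elevations are block-averaged for cell storage, while the elevation assigned to a shared face is taken as the minimum along that face so that narrow channels still connect coarse cells. The block size and the minimum rule are assumptions for illustration, not the paper's exact scheme.

```python
# Sketch: upscale a fine elevation grid by cell-block averaging, while keeping a
# cell-face elevation as the minimum along each shared face so narrow channels
# remain connected.  Block size and the min rule are illustrative.
import numpy as np

def upscale(z, block=4):
    ny, nx = z.shape
    nyc, nxc = ny // block, nx // block
    zb = z[:nyc*block, :nxc*block].reshape(nyc, block, nxc, block)

    cell_elev = zb.mean(axis=(1, 3))                  # cell-block average

    # north-south faces between coarse rows: minimum fine elevation along the
    # two fine rows adjacent to each shared face, per coarse column
    face_ns = np.full((nyc - 1, nxc), np.nan)
    for j in range(nyc - 1):
        rows = z[(j + 1)*block - 1:(j + 1)*block + 1, :nxc*block]
        face_ns[j, :] = rows.min(axis=0).reshape(nxc, block).min(axis=1)
    return cell_elev, face_ns

# toy terrain: high plateau cut by a narrow low channel one fine cell wide
z = np.full((16, 16), 10.0)
z[:, 7] = 1.0
cells, faces = upscale(z, block=4)
print("cell averages:\n", cells)
print("north-south face minima (channel preserved):\n", faces)
```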

  19. The Impact of Three-Dimensional Effects on the Simulation of Turbulence Kinetic Energy in a Major Alpine Valley

    NASA Astrophysics Data System (ADS)

    Goger, Brigitta; Rotach, Mathias W.; Gohm, Alexander; Fuhrer, Oliver; Stiperski, Ivana; Holtslag, Albert A. M.

    2018-02-01

    The correct simulation of the atmospheric boundary layer (ABL) is crucial for reliable weather forecasts in truly complex terrain. However, common assumptions for model parametrizations are only valid for horizontally homogeneous and flat terrain. Here, we evaluate the turbulence parametrization of the numerical weather prediction model COSMO with a horizontal grid spacing of Δ x = 1.1 km for the Inn Valley, Austria. The long-term, high-resolution turbulence measurements of the i-Box measurement sites provide a useful data pool of the ABL structure in the valley and on slopes. We focus on days and nights when ABL processes dominate and a thermally-driven circulation is present. Simulations are performed for case studies with both a one-dimensional turbulence parametrization, which only considers the vertical turbulent exchange, and a hybrid turbulence parametrization, also including horizontal shear production and advection in the budget of turbulence kinetic energy (TKE). We find a general underestimation of TKE by the model with the one-dimensional turbulence parametrization. In the simulations with the hybrid turbulence parametrization, the modelled TKE has a more realistic structure, especially in situations when the TKE production is dominated by shear related to the afternoon up-valley flow, and during nights, when a stable ABL is present. The model performance also improves for stations on the slopes. An estimation of the horizontal shear production from the observation network suggests that three-dimensional effects are a relevant part of TKE production in the valley.

  1. Model of interaction in Smart Grid on the basis of multi-agent system

    NASA Astrophysics Data System (ADS)

    Engel, E. A.; Kovalev, I. V.; Engel, N. E.

    2016-11-01

    This paper presents a model of interaction in a Smart Grid on the basis of a multi-agent system. The use of travelling waves in the multi-agent system describes the behavior of the Smart Grid from a local point of view, complementing the conventional approach. The simulation results show that wave absorption in the distributed multi-agent system effectively simulates the interaction in the Smart Grid.

  2. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
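    The common framework described above treats generators as coupled second-order phase oscillators with forcing and damping. As a rough, generic sketch (not a reproduction of any of the three models' specific parameter conventions), a small network of this form can be integrated as follows; the three-node topology, unit inertia, and all numerical values are illustrative assumptions.

        import numpy as np
        from scipy.integrate import solve_ivp

        def swing_rhs(t, y, P, D, K):
            # Coupled second-order phase oscillators with unit inertia:
            #   theta_i'' = P_i - D_i * theta_i' + sum_j K_ij * sin(theta_j - theta_i)
            n = len(P)
            theta, omega = y[:n], y[n:]
            coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            return np.concatenate([omega, P - D * omega + coupling])

        # Toy 3-node case: one generator (positive injection), two loads (negative)
        P = np.array([1.0, -0.5, -0.5])
        D = np.full(3, 0.5)
        K = 2.0 * (np.ones((3, 3)) - np.eye(3))   # all-to-all coupling
        sol = solve_ivp(swing_rhs, (0, 50), np.zeros(6), args=(P, D, K), max_step=0.05)
        print("final frequency deviations:", sol.y[3:, -1])

    With the net injections summing to zero, the phases lock and the frequency deviations decay toward a common value (zero in this rotating frame), which corresponds to the synchronized operating state discussed in the paper.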

  3. Using a composite grid approach in a complex coastal domain to estimate estuarine residence time

    USGS Publications Warehouse

    Warner, John C.; Geyer, W. Rockwell; Arango, Herman G.

    2010-01-01

    We investigate the processes that influence residence time in a partially mixed estuary using a three-dimensional circulation model. The complex geometry of the study region is not optimal for a structured grid model and so we developed a new method of grid connectivity. This involves a novel approach that allows an unlimited number of individual grids to be combined in an efficient manner to produce a composite grid. We then implemented this new method into the numerical Regional Ocean Modeling System (ROMS) and developed a composite grid of the Hudson River estuary region to investigate the residence time of a passive tracer. Results show that the residence time is a strong function of the time of release (spring vs. neap tide), the along-channel location, and the initial vertical placement. During neap tides there is a maximum in residence time near the bottom of the estuary at the mid-salt intrusion length. During spring tides the residence time is primarily a function of along-channel location and does not exhibit a strong vertical variability. This model study of residence time illustrates the utility of the grid connectivity method for circulation and dispersion studies in regions of complex geometry.

  4. The BioGRID interaction database: 2017 update

    PubMed Central

    Chatr-aryamontri, Andrew; Oughtred, Rose; Boucher, Lorrie; Rust, Jennifer; Chang, Christie; Kolas, Nadine K.; O'Donnell, Lara; Oster, Sara; Theesfeld, Chandra; Sellam, Adnane; Stark, Chris; Breitkreutz, Bobby-Joe; Dolinski, Kara; Tyers, Mike

    2017-01-01

    The Biological General Repository for Interaction Datasets (BioGRID: https://thebiogrid.org) is an open access database dedicated to the annotation and archival of protein, genetic and chemical interactions for all major model organism species and humans. As of September 2016 (build 3.4.140), the BioGRID contains 1 072 173 genetic and protein interactions, and 38 559 post-translational modifications, as manually annotated from 48 114 publications. This dataset represents interaction records for 66 model organisms and represents a 30% increase compared to the previous 2015 BioGRID update. BioGRID curates the biomedical literature for major model organism species, including humans, with a recent emphasis on central biological processes and specific human diseases. To facilitate network-based approaches to drug discovery, BioGRID now incorporates 27 501 chemical–protein interactions for human drug targets, as drawn from the DrugBank database. A new dynamic interaction network viewer allows the easy navigation and filtering of all genetic and protein interaction data, as well as for bioactive compounds and their established targets. BioGRID data are directly downloadable without restriction in a variety of standardized formats and are freely distributed through partner model organism databases and meta-databases. PMID:27980099

  5. From the grid to the smart grid, topologically

    NASA Astrophysics Data System (ADS)

    Pagani, Giuliano Andrea; Aiello, Marco

    2016-05-01

    In its more visionary acceptation, the smart grid is a model of energy management in which the users are engaged in producing energy as well as consuming it, while having information systems fully aware of the energy demand-response of the network and of dynamically varying prices. A natural question is then: to make the smart grid a reality will the distribution grid have to be upgraded? We assume a positive answer to the question and we consider the lower layers of medium and low voltage to be the most affected by the change. In our previous work, we analyzed samples of the Dutch distribution grid (Pagani and Aiello, 2011) and we considered possible evolutions of these using synthetic topologies modeled after studies of complex systems in other technological domains (Pagani and Aiello, 2014). In this paper, we take an extra important step by defining a methodology for evolving any existing physical power grid to a good smart grid model, thus laying the foundations for a decision support system for utilities and governmental organizations. In doing so, we consider several possible evolution strategies and apply them to the Dutch distribution grid. We show how increasing connectivity is beneficial in realizing more efficient and reliable networks. Our proposal is topological in nature, enhanced with economic considerations of the costs of such evolutions in terms of cabling expenses and economic benefits of evolving the grid.

  6. Antiapoptotic Effect of Recombinant HMGB1 A-box Protein via Regulation of microRNA-21 in Myocardial Ischemia-Reperfusion Injury Model in Rats.

    PubMed

    Han, Qiang; Zhang, Hua-Yong; Zhong, Bei-Long; Zhang, Bing; Chen, Hua

    2016-04-01

    The ~80 amino acid A box DNA-binding domain of high mobility group box 1 (HMGB1) protein antagonizes proinflammatory responses during myocardial ischemia reperfusion (I/R) injury. The exact role of microRNA-21 (miR-21) is unknown, but its altered levels are evident in I/R injury. This study examined the roles of HMGB1 A-box and miR-21 in a rat myocardial I/R injury model. Sixty Sprague-Dawley rats were randomly divided into six equal groups: (1) Sham; (2) I/R; (3) Ischemic postconditioning (IPost); (4) AntagomiR-21 post-treatment; (5) Recombinant HMGB1 A-box pretreatment; and (6) Recombinant HMGB1 A-box + antagomiR-21 post-treatment. Hemodynamic indexes, arrhythmia scores, ischemic area and infarct size, myocardial injury, and related parameters were studied. Expression of miR-21 was detected by real-time quantitative polymerase chain reaction (qRT-PCR), and a terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay was used to quantify apoptosis. Left ventricular systolic pressure (LVSP), left ventricular end diastolic pressure (LVEDP), maximal rate of pressure rise (+dp/dtmax), and decline (-dp/dtmax) showed clear reduction upon treatment with recombinant HMGB1 A-box. Arrhythmia was relieved and infarct area decreased in the group pretreated with recombinant HMGB1 A-box, compared with other groups. Circulating lactate dehydrogenase (LDH) and malondialdehyde (MDA) levels increased in response to irreversible cellular injury, while creatine kinase MB isoenzymes (CK-MB) and superoxide dismutase (SOD) activities were reduced in the I/R group, which was reversed following recombinant HMGB1 A-box treatment. Interestingly, pretreatment with recombinant HMGB1 A-box showed the most dramatic reductions in miR-21 levels, compared with other groups. A significantly reduced apoptotic index (AI) was seen in the recombinant HMGB1 A-box pretreatment group and the recombinant HMGB1 A-box + antagomiR-21 post-treatment group, with the former showing a more dramatic lowering in AI than the latter. Bax, caspase-8, and CHOP showed reduced expression, and Bcl-2 and p-AKT levels were upregulated in the recombinant HMGB1 A-box pretreatment group. Thus, recombinant HMGB1 A-box treatment protects against I/R injury and the mechanisms may involve inhibition of miR-21 expression.

  7. Surface Modeling, Grid Generation, and Related Issues in Computational Fluid Dynamic (CFD) Solutions

    NASA Technical Reports Server (NTRS)

    Choo, Yung K. (Compiler)

    1995-01-01

    The NASA Steering Committee for Surface Modeling and Grid Generation (SMAGG) sponsored a workshop on surface modeling, grid generation, and related issues in Computational Fluid Dynamics (CFD) solutions at Lewis Research Center, Cleveland, Ohio, May 9-11, 1995. The workshop provided a forum to identify industry needs, strengths, and weaknesses of the five grid technologies (patched structured, overset structured, Cartesian, unstructured, and hybrid), and to exchange thoughts about where each technology will be in 2 to 5 years. The workshop also provided opportunities for engineers and scientists to present new methods, approaches, and applications in SMAGG for CFD. This Conference Publication (CP) consists of papers on industry overview, NASA overview, five grid technologies, new methods/approaches/applications, and software systems.

  8. The Art of Grid Fields: Geometry of Neuronal Time

    PubMed Central

    Shilnikov, Andrey L.; Maurer, Andrew Porter

    2016-01-01

    The discovery of grid cells in the entorhinal cortex has both elucidated our understanding of spatial representations in the brain, and germinated a large number of theoretical models regarding the mechanisms of these cells’ striking spatial firing characteristics. These models cross multiple neurobiological levels that include intrinsic membrane resonance, dendritic integration, after-hyperpolarization characteristics and attractor dynamics. Despite the breadth of these models, parallels can, to our knowledge, be drawn between grid fields and other temporal dynamics observed in nature, much of which was described by Art Winfree and colleagues long before the initial description of grid fields. Using theoretical and mathematical investigations of oscillators, in a wide array of media far from the neurobiology of grid cells, Art Winfree has provided a substantial amount of research with significant and profound similarities. These theories provide specific inferences into the biological mechanisms and extraordinary resemblances across phenomena. Therefore, this manuscript provides a novel interpretation of the phenomenon of grid fields, from the perspective of coupled oscillators, postulating that grid fields are the spatial representation of phase resetting curves in the brain. In contrast to prior models of grid cells, the current manuscript provides a sketch by which a small network of neurons, each with oscillatory components, can operate to form grid cells, perhaps providing a unique hybrid between the competing attractor neural network and oscillatory interference models. The intention of this new interpretation of the data is to encourage novel testable hypotheses. PMID:27013981

  9. Variable Grid Traveltime Tomography for Near-surface Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Cai, A.; Zhang, J.

    2017-12-01

    We present a new traveltime tomography algorithm for imaging the subsurface with variable grids that adapt automatically to geological structures. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach for near-surface imaging. However, model regularization on regular, evenly spaced grids assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, whereas the details along geological boundaries are difficult to resolve. Therefore, we solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within those structures for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale structures. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried basalt model with one horizontal layer. Using the variable-grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and in the basalt blocks, and it requires fewer grid cells. The field data were collected in an oil field in China, in an area where the subsurface structures are predominantly layered. The data set includes 476 shots with a 10 meter spacing and 1735 receivers with a 10 meter spacing. The first-arrival traveltimes of the seismograms are picked for tomography, and the reciprocal errors of most shots are between 2 ms and 6 ms. The normal tomography produces fluctuations in layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a resolved flat layer and fewer artifacts. In addition, the number of grid cells is reduced from 205,656 to 4,930, and the inversion produces higher resolution owing to fewer unknowns and relatively fine grids in small structures. The variable-grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.
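    The conventional building block mentioned above, Tikhonov-regularized linearized traveltime inversion solved with conjugate gradients, can be sketched generically. The snippet below is an assumed illustration, not the paper's variable-grid code: G is a dense matrix of ray-path lengths per cell, d the picked traveltimes, and the paper's cell aggregation would amount to replacing G by G @ S for a coarse-to-fine aggregation matrix S. All names and sizes are placeholders.

        import numpy as np

        def tikhonov_cg(G, d, lam, n_iter=200, tol=1e-10):
            # Solve min ||G m - d||^2 + lam^2 ||m||^2 by conjugate gradients
            # applied to the normal equations (G^T G + lam^2 I) m = G^T d.
            A = G.T @ G + lam**2 * np.eye(G.shape[1])
            b = G.T @ d
            m = np.zeros(G.shape[1])
            r = b - A @ m
            p = r.copy()
            rs = r @ r
            for _ in range(n_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                m += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return m

        # Toy example: 50 rays crossing 20 slowness cells
        rng = np.random.default_rng(0)
        G = rng.random((50, 20))                       # ray-path lengths (km)
        m_true = np.full(20, 0.25)                     # slowness (s/km)
        d = G @ m_true + 0.001 * rng.standard_normal(50)
        m_est = tikhonov_cg(G, d, lam=0.1)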

  10. Optimal response to attacks on the open science grids.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunay, M.; Leyffer, S.; Linderoth, J. T.

    2011-01-01

    Cybersecurity is a growing concern, especially in open grids, where attack propagation is easy because of prevalent collaborations among thousands of users and hundreds of institutions. The collaboration rules that typically govern large science experiments as well as social networks of scientists span across the institutional security boundaries. A common concern is that the increased openness may allow malicious attackers to spread more readily around the grid. We consider how to optimally respond to attacks in open grid environments. To show how and why attacks spread more readily around the grid, we first discuss how collaborations manifest themselves in the grids and form the collaboration network graph, and how this collaboration network graph affects the security threat levels of grid participants. We present two mixed-integer program (MIP) models to find the optimal response to attacks in open grid environments, and also calculate the threat level associated with each grid participant. Given an attack scenario, our optimal response model aims to minimize the threat levels at unaffected participants while maximizing the uninterrupted scientific production (continuing collaborations). By adopting some of the collaboration rules (e.g., suspending a collaboration or shutting down a site), the model finds the optimal response to subvert an attack scenario.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dalimunthe, Amty Ma’rufah Ardhiyah; Mindara, Jajat Yuda; Panatarani, Camellia

    The smart grid and distributed generation should be the solution to global climate change and to the crisis surrounding the main source of electrical power generation, fossil fuel. In order to meet rising electrical power demand and increasing service quality demands, as well as to reduce pollution, the existing power grid infrastructure should be developed into a smart grid with distributed power generation, which provides a great opportunity to address issues related to energy efficiency, energy security, power quality and aging infrastructure. The conventional existing distributed generation system is an AC grid, whereas renewable resources require a DC grid system. This paper explores a model of a smart DC grid with stable power generation that gives minimal, compact circuitry and can be implemented very cost-effectively with simple components. PC-based application software was developed to show the condition of the grid and to control it so that it becomes ‘smart’. The model is then subjected to severe system perturbations, such as incremental changes in loads, to test the stability of the system. It is concluded that the system is able to detect and control voltage stability, indicating the ability of the power system to maintain a steady voltage within permissible ranges under normal conditions.

  12. SHORT RANGE ENSEMBLE Products

    Science.gov Websites

    Short-range ensemble products from the NEMS Non-hydrostatic Multiscale Model on the B grid, available on AWIPS grid 212, Regional - CONUS Double Resolution (Lambert Conformal, 40 km), and on AWIPS grid 132, Double Resolution (Lambert Conformal, 16 km).

  13. Smocks and Jocks outside the Box: The Paradigmatic Evolution of Sport and Exercise Psychology

    ERIC Educational Resources Information Center

    Vealey, Robin S.

    2006-01-01

    The objective of this article is to describe the historical development of sport and exercise psychology, with a particular emphasis on the construction and evolution of the "box" through history. The box represents the dominant paradigm that serves as the model for research and application as it evolves through successive historical eras (Kuhn,…

  14. Finding External Indicators of Load on a Web Server via Analysis of Black-Box Performance Measurements

    ERIC Educational Resources Information Center

    Chiarini, Marc A.

    2010-01-01

    Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…

  15. Air Force Global Weather Central System Architecture Study. Final System/Subsystem Summary Report. Volume 4. Systems Analysis and Trade Studies

    DTIC Science & Technology

    1976-03-01

    atmosphere, as well as very fine grid cloud models and cloud probability models. Some of the new requirements that will be supported with this system are a ... including the Advanced Prediction Model for the global atmosphere, as well as very fine grid cloud models and cloud probability models. Some of the new ... with the mapping and gridding function (input and output)? Should the capability exist to interface raw ungridded data with the SID interface

  16. Grid computing in large pharmaceutical molecular modeling.

    PubMed

    Claus, Brian L; Johnson, Stephen R

    2008-07-01

    Most major pharmaceutical companies have employed grid computing to expand their compute resources with the intention of minimizing additional financial expenditure. Historically, one of the issues restricting widespread utilization of the grid resources in molecular modeling is the limited set of suitable applications amenable to coarse-grained parallelization. Recent advances in grid infrastructure technology coupled with advances in application research and redesign will enable fine-grained parallel problems, such as quantum mechanics and molecular dynamics, which were previously inaccessible to the grid environment. This will enable new science as well as increase resource flexibility to load balance and schedule existing workloads.

  17. Research on the comparison of extension mechanism of cellular automaton based on hexagon grid and rectangular grid

    NASA Astrophysics Data System (ADS)

    Zhai, Xiaofang; Zhu, Xinyan; Xiao, Zhifeng; Weng, Jie

    2009-10-01

    A cellular automaton (CA) is a discrete dynamical mathematical structure defined on a spatial grid. Research on cellular automata systems (CAS) has focused on rule sets and initial conditions and has rarely discussed adjacency. Thus, the main focus of our study is the effect of adjacency on CA behavior. This paper compares rectangular grids with hexagonal grids in terms of their characteristics, strengths and weaknesses, which strongly influence modeling results and other applications, including the role of the nearest neighborhood in experimental design. Our research shows that rectangular and hexagonal grids have different characteristics and are suited to different purposes; the regular rectangular or square grid is used more often than the hexagonal grid, but their relative merits have not been widely discussed. The rectangular grid is generally preferred because of its symmetry, its fit with orthogonal coordinate systems, and the frequent use of raster data from Geographic Information Systems (GIS). However, for complex terrain and uncertain, multidirectional regions, we prefer hexagonal grids and related methods to facilitate and simplify the problem. Hexagonal grids can overcome directional warp and have some unique characteristics. For example, hexagonal grids have a simpler and more symmetric nearest neighborhood, which avoids the ambiguities of rectangular grids. Movement paths, connectivity, and the most compact arrangement of pixels give hexagonal grids a clear advantage in modeling and analysis. The selection of an appropriate grid should be based on the requirements and objectives of the application. We use rectangular and hexagonal grids, respectively, for developing a city model, making use of remote sensing images to acquire the 2002 and 2005 land states of Wuhan. On the basis of the 2002 city land state, we use a CA to simulate a reasonable form of the city in 2005. These results provide a proof of concept for the advantages of the hexagonal grid.
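    The neighborhood difference at the heart of this comparison can be shown in a few lines. This is a minimal illustration only (using axial coordinates for the hexagonal grid), not the city-growth CA from the paper; the constant names are ad hoc.

        # Square-grid Moore neighborhood: 8 neighbors at two distinct
        # centre-to-centre distances, versus a hexagonal grid in axial
        # coordinates: 6 neighbors, all equidistant -- the symmetry the
        # paper highlights as avoiding rectangular-grid ambiguities.
        MOORE = [(-1, -1), (-1, 0), (-1, 1),
                 ( 0, -1),          ( 0, 1),
                 ( 1, -1), ( 1, 0), ( 1, 1)]

        HEX_AXIAL = [(+1, 0), (-1, 0), (0, +1), (0, -1), (+1, -1), (-1, +1)]

        def neighbors(cell, offsets):
            # Return the neighbor coordinates of a cell for a given offset set.
            q, r = cell
            return [(q + dq, r + dr) for dq, dr in offsets]

        print(len(neighbors((0, 0), MOORE)))      # 8
        print(len(neighbors((0, 0), HEX_AXIAL)))  # 6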

  18. On the uncertainties associated with using gridded rainfall data as a proxy for observed

    NASA Astrophysics Data System (ADS)

    Tozer, C. R.; Kiem, A. S.; Verdon-Kidd, D. C.

    2011-09-01

    Gridded rainfall datasets are used in many hydrological and climatological studies, in Australia and elsewhere, including for hydroclimatic forecasting, climate attribution studies and climate model performance assessments. The attraction of the spatial coverage provided by gridded data is clear, particularly in Australia where the spatial and temporal resolution of the rainfall gauge network is sparse. However, the question that must be asked is whether it is suitable to use gridded data as a proxy for observed point data, given that gridded data is inherently "smoothed" and may not necessarily capture the temporal and spatial variability of Australian rainfall which leads to hydroclimatic extremes (i.e. droughts, floods)? This study investigates this question through a statistical analysis of three monthly gridded Australian rainfall datasets - the Bureau of Meteorology (BOM) dataset, the Australian Water Availability Project (AWAP) and the SILO dataset. To demonstrate the hydrological implications of using gridded data as a proxy for gauged data, a rainfall-runoff model is applied to one catchment in South Australia (SA) initially using gridded data as the source of rainfall input and then gauged rainfall data. The results indicate a markedly different runoff response associated with each of the different sources of rainfall data. It should be noted that this study does not seek to identify which gridded dataset is the "best" for Australia, as each gridded data source has its pros and cons, as does gauged or point data. Rather the intention is to quantify differences between various gridded data sources and how they compare with gauged data so that these differences can be considered and accounted for in studies that utilise these gridded datasets. Ultimately, if key decisions are going to be based on the outputs of models that use gridded data, an estimate (or at least an understanding) of the uncertainties relating to the assumptions made in the development of gridded data and how that gridded data compares with reality should be made.

  19. Vulnerability of the US western electric grid to hydro-climatological conditions: How bad can it get?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, N.; Kintner-Meyer, M.; Skaggs, R.

    Recent studies have highlighted the potential impact of climate change on US electricity generation capacity by exploring the effect of changes in stream temperatures on the available capacity of thermo-electric plants that rely on fresh-water cooling. However, little is known about the electric system impacts under extreme climate events such as droughts. Vulnerability assessments are usually performed for a baseline water year or a specific drought, which do not provide insights into the full grid stress distribution across the diversity of climate events. In this paper we estimate the impacts of water availability on electricity generation and transmission in the Western US grid for a range of historical water availability combinations. We softly couple an integrated water model, which includes climate, hydrology, routing, water resources management and socio-economic water demand models, into a grid model (production cost model) and simulate 30 years of historical hourly power flow conditions in the Western US grid. The experiment allows estimating the grid stress distribution as a function of inter-annual variability in regional water availability. Results indicate a clear correlation between grid vulnerability (as quantified by unmet energy demand and increased production cost) for the summer month of August and annual water availability. There is a 3% chance that at least 6% of the electricity demand cannot be met in August, and a 21% chance of not meeting 0.1% or more of the load in the Western US grid. The regional variability in water availability contributes significantly to the reliability of the grid and could provide trade-off opportunities in times of stress. This paper is the first to explore operational grid impacts imposed by droughts in the Western U.S. grid.

  20. Lead/acid battery design and operation

    NASA Astrophysics Data System (ADS)

    Manders, J. E.; Bui, N.; Lambert, D. W. H.; Navarette, J.; Nelson, R. F.; Valeriote, E. M.

    In keeping with the tradition of previous meetings, the Seventh Asian Battery Conference closed with the delegates putting questions to an expert panel of battery scientists and technologists. The proceedings were lively and the subjects were as follows. Grid alloys: gassing characteristics; influence of minor constituents on metallurgical and electrochemical characteristics; latest trends in composition; alloys for cast-on straps. Battery manufacture and operation: plate formation (α-PbO2:β-PbO2 ratio); dendritic shorts. Separators: contribution to battery internal resistance; influence of negative-plate enveloping; reduced backweb. Valve-regulated lead/acid batteries: positive active-material: negative active-material ratio; hydrogen evolution and dry-out; negative-plate self-discharge; tank vs. box formation.

  1. A Comparative Study of Interferometric Regridding Algorithms

    NASA Technical Reports Server (NTRS)

    Hensley, Scott; Safaeinili, Ali

    1999-01-01

    The paper discusses regridding options: (1) Interpolating data that is not sampled on a uniform grid, that is noisy, and that contains gaps is a difficult problem. (2) Several interpolation algorithms have been implemented: (a) Nearest neighbor - fast and easy, but shows some artifacts in shaded relief images. (b) Simplicial interpolator - uses the plane through the three points of the triangle containing the point where interpolation is required; reasonably fast and accurate. (c) Convolutional - uses a windowed Gaussian approximating the optimal prolate spheroidal weighting function for a specified bandwidth. (d) First- or second-order surface fitting - uses the height data centered in a box about a given point and does a weighted least-squares surface fit.
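    Option (c) above, convolutional regridding with a windowed weighting function, can be sketched as follows. This is an assumed simplification: a plain truncated Gaussian stands in for the prolate spheroidal approximation mentioned in the paper, and the sample data, grid, and parameter values are invented for illustration.

        import numpy as np

        def gaussian_regrid(x, y, z, xg, yg, sigma, cutoff=3.0):
            # Interpolate scattered samples (x, y, z) onto the regular grid defined
            # by 1-D coordinates xg, yg using truncated Gaussian weights.
            Z = np.full((len(yg), len(xg)), np.nan)
            for j, yo in enumerate(yg):
                for i, xo in enumerate(xg):
                    d2 = (x - xo)**2 + (y - yo)**2
                    mask = d2 < (cutoff * sigma)**2   # ignore samples outside the window
                    if not mask.any():
                        continue                      # leave data gaps as NaN
                    w = np.exp(-0.5 * d2[mask] / sigma**2)
                    Z[j, i] = np.sum(w * z[mask]) / np.sum(w)
            return Z

        # Example: noisy scattered samples of a plane, regridded to 20 x 20
        rng = np.random.default_rng(1)
        x, y = rng.uniform(0, 10, 500), rng.uniform(0, 10, 500)
        z = 0.5 * x - 0.2 * y + 0.05 * rng.standard_normal(500)
        xg = yg = np.linspace(0, 10, 20)
        Z = gaussian_regrid(x, y, z, xg, yg, sigma=0.5)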

  2. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution

    PubMed Central

    Lo, Kenneth

    2011-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375

  3. Flexible mixture modeling via the multivariate t distribution with the Box-Cox transformation: an alternative to the skew-t distribution.

    PubMed

    Lo, Kenneth; Gottardo, Raphael

    2012-01-01

    Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components.
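    The Box-Cox transformation itself, applied componentwise before fitting the t mixture, is simple to state. The sketch below shows only the transformation and its inverse under the usual positivity assumption; the EM algorithm and the transformation selection described in the abstract are not reproduced.

        import numpy as np

        def box_cox(x, lam):
            # Box-Cox transformation of positive data x:
            #   (x**lam - 1) / lam  for lam != 0,   log(x)  for lam == 0.
            x = np.asarray(x, dtype=float)
            if np.isclose(lam, 0.0):
                return np.log(x)
            return (x**lam - 1.0) / lam

        def box_cox_inverse(y, lam):
            # Inverse of the Box-Cox transformation.
            if np.isclose(lam, 0.0):
                return np.exp(y)
            return (lam * np.asarray(y) + 1.0)**(1.0 / lam)

        # Right-skewed lognormal data becomes roughly symmetric for lam near 0
        data = np.random.default_rng(2).lognormal(size=1000)
        transformed = box_cox(data, lam=0.1)

    Because the transformation is monotone, cluster assignments made on the transformed scale carry back to the original scale through the inverse, which is what lets outlier handling (via the heavy t tails) and skewness handling (via the transformation parameter) be treated jointly.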

  4. Evolution of the F-Box Gene Family in Euarchontoglires: Gene Number Variation and Selection Patterns

    PubMed Central

    Wang, Ailan; Fu, Mingchuan; Jiang, Xiaoqian; Mao, Yuanhui; Li, Xiangchen; Tao, Shiheng

    2014-01-01

    F-box proteins are substrate adaptors used by the SKP1–CUL1–F-box protein (SCF) complex, a type of E3 ubiquitin ligase complex in the ubiquitin proteasome system (UPS). SCF-mediated ubiquitylation regulates proteolysis of hundreds of cellular proteins involved in key signaling and disease systems. However, our knowledge of the evolution of the F-box gene family in Euarchontoglires is limited. In the present study, 559 F-box genes and nine related pseudogenes were identified in eight genomes. Lineage-specific gene gain and loss events occurred during the evolution of Euarchontoglires, resulting in varying F-box gene numbers ranging from 66 to 81 among the eight species. Both tandem duplication and retrotransposition were found to have contributed to the increase of F-box gene number, whereas mutation in the F-box domain was the main mechanism responsible for reduction in the number of F-box genes, resulting in a balance of expansion and contraction in the F-box gene family. Thus, the Euarchontoglire F-box gene family evolved under a birth-and-death model. Signatures of positive selection were detected in substrate-recognizing domains of multiple F-box proteins, and adaptive changes played a role in evolution of the Euarchontoglire F-box gene family. In addition, single nucleotide polymorphism (SNP) distributions were found to be highly non-random among different regions of F-box genes in 1092 human individuals, with domain regions having a significantly lower number of non-synonymous SNPs. PMID:24727786

  5. A Petri Net model for distributed energy system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konopko, Joanna

    2015-12-31

    Electrical networks need to evolve to become more intelligent, more flexible and less costly. The smart grid is the next generation of the power network; it uses two-way flows of electricity and information to create a distributed, automated energy delivery network. Building a comprehensive smart grid is a challenge for system protection, optimization and energy efficiency. Proper modeling and analysis are needed to build an extensive distributed energy system and intelligent electricity infrastructure. In this paper, a complete model of the smart grid is proposed using Generalized Stochastic Petri Nets (GSPN). Simulation of the created model is also explored; it has allowed analysis of how closely the behavior of the model matches the usage of the real smart grid.

  6. Maximum capacity model of grid-connected multi-wind farms considering static security constraints in electrical grids

    NASA Astrophysics Data System (ADS)

    Zhou, W.; Qiu, G. Y.; Oodo, S. O.; He, H.

    2013-03-01

    An increasing interest in wind energy and the advance of related technologies have increased the connection of wind power generation into electrical grids. This paper proposes an optimization model for determining the maximum capacity of wind farms in a power system. In this model, generator power output limits, voltage limits and thermal limits of branches in the grid system were considered in order to limit the steady-state security influence of wind generators on the power system. The optimization model was solved by a nonlinear primal-dual interior-point method. An IEEE-30 bus system with two wind farms was tested through simulation studies, plus an analysis conducted to verify the effectiveness of the proposed model. The results indicated that the model is efficient and reasonable.

  7. A New Stellar Atmosphere Grid and Comparisons with HST /STIS CALSPEC Flux Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohlin, Ralph C.; Fleming, Scott W.; Gordon, Karl D.

    The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli and Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz and Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T_eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.

  8. A New Stellar Atmosphere Grid and Comparisons with HST/STIS CALSPEC Flux Distributions

    NASA Astrophysics Data System (ADS)

    Bohlin, Ralph C.; Mészáros, Szabolcs; Fleming, Scott W.; Gordon, Karl D.; Koekemoer, Anton M.; Kovács, József

    2017-05-01

    The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for complete LTE grids by Castelli & Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz & Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T_eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much finer sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.

  9. Assessing the impact of aerosol-atmosphere interactions in convection-permitting regional climate simulations: the Rolf medicane in 2011

    NASA Astrophysics Data System (ADS)

    José Gómez-Navarro, Juan; María López-Romero, José; Palacios-Peña, Laura; Montávez, Juan Pedro; Jiménez-Guerrero, Pedro

    2017-04-01

    A critical challenge in assessing regional climate change projections lies in improving the estimate of the atmospheric aerosol impact on clouds and reducing the uncertainty associated with the use of parameterizations. The horizontal grid spacing implemented in state-of-the-art regional climate simulations is typically 10-25 kilometers, meaning that very important processes such as convective precipitation are smaller than a grid box, and therefore need to be parameterized. This causes large uncertainties, as closure assumptions and a number of parameters have to be established by model tuning. Convection is a physical process that may be strongly conditioned by atmospheric aerosols, although the solution of aerosol-cloud interactions in warm convective clouds remains a very important scientific challenge, rendering parametrization of these complex processes an important bottleneck that is responsible for a great part of the uncertainty in current climate change projections. Therefore, the explicit simulation of convective processes might improve the quality and reliability of the simulations of aerosol-cloud interactions in a wide range of atmospheric phenomena. Particularly over the Mediterranean, the role of aerosol particles is very important, as this region is a crossroads that fuels the mixing of particles from different sources (sea salt, biomass burning, anthropogenic, Saharan dust, etc.). Still, the role of aerosols in extreme events in this area, such as medicanes, has been barely addressed. This work aims at assessing the role of aerosol-atmosphere interaction in medicanes with the help of the regional chemistry/climate on-line coupled model WRF-CHEM run at a convection-permitting resolution. The analysis is based on the exemplary case of the "Rolf" medicane (6-8 November 2011). Using this case study as reference, four sets of simulations are run with two spatial resolutions: one at a convection-permitting configuration of 4 km, and the other at the lower resolution of 12 km, in which case convection has to be parameterized. Each configuration is used to produce two simulations, including and not including aerosol-radiation-cloud interactions. The comparison of the simulated output at different scales allows evaluation of the impact of sub-grid scale mixing of precursors on aerosol production. By focusing on these processes at different resolutions, the differences between simulations run at 4 km and 12 km resolution can be explored. Preliminary results indicate that the inclusion of aerosol effects, especially sea salt aerosols, may indeed impact the severity of this simulated medicane and leads to important spatial shifts and differences in the intensity of surface precipitation.

  10. A method for grounding grid corrosion rate prediction

    NASA Astrophysics Data System (ADS)

    Han, Juan; Du, Jingyi

    2017-06-01

    Grounding grid corrosion involves a variety of factors, its prediction is complex, and there is uncertainty in the data acquisition process. We therefore propose an effective grounding grid corrosion rate prediction model that combines EAHP (extended AHP) and fuzzy nearness degree. EAHP is used to establish the judgment matrix and calculate the weight of each corrosion factor of the grounding grid; different sample classification properties contribute differently to the corrosion rate, and the nearness principle is combined with these weights to predict the corrosion rate. The application results show that the model captures data variation better, improving its validity and yielding higher prediction precision.

  11. The Optimization dispatching of Micro Grid Considering Load Control

    NASA Astrophysics Data System (ADS)

    Zhang, Pengfei; Xie, Jiqiang; Yang, Xiu; He, Hongli

    2018-01-01

    This paper proposes an optimization model for the economic operation of a micro-grid system. It coordinates new energy sources and storage operation with diesel generator output, so as to achieve economic operation of the micro-grid. In this paper, the micro-grid economic operation model is transformed into a mixed-integer programming problem, which is solved with mature commercial software. The new model is shown to be economical, and the load control strategy reduces the number of charge and discharge cycles of the energy storage devices, extending their service life to a certain extent.
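    To make the mixed-integer formulation concrete, the toy dispatch below minimizes diesel fuel cost plus a penalty on each period in which the battery is cycled, subject to an hourly power balance. It is a hedged sketch only: the open-source PuLP/CBC toolchain stands in for the commercial solver mentioned in the abstract, the horizon is four hours, state-of-charge tracking is omitted, and every number (loads, renewable output, costs, ratings) is an invented placeholder.

        import pulp

        T = range(4)                              # four hourly periods (toy horizon)
        load = {0: 40, 1: 60, 2: 80, 3: 50}       # kW demand (assumed)
        pv   = {0: 10, 1: 30, 2: 20, 3: 5}        # kW renewable output (assumed)

        prob = pulp.LpProblem("microgrid_dispatch", pulp.LpMinimize)
        diesel = {t: pulp.LpVariable(f"diesel_{t}", lowBound=0, upBound=100) for t in T}
        batt   = {t: pulp.LpVariable(f"batt_{t}", lowBound=-20, upBound=20) for t in T}  # +discharge/-charge
        cycled = {t: pulp.LpVariable(f"cycled_{t}", cat="Binary") for t in T}            # battery used this hour?

        # Objective: fuel cost plus a small wear penalty per cycled period
        prob += pulp.lpSum(0.3 * diesel[t] + 0.05 * cycled[t] for t in T)

        for t in T:
            prob += diesel[t] + pv[t] + batt[t] == load[t]   # power balance
            prob += batt[t] <= 20 * cycled[t]                # battery activity forces cycled = 1
            prob += batt[t] >= -20 * cycled[t]

        prob.solve()
        print({t: (diesel[t].value(), batt[t].value()) for t in T})

    Penalizing the binary "cycled" variables is one simple way to express the goal of reducing charge/discharge counts; a fuller model would add storage state-of-charge dynamics and generator commitment constraints.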

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mueller, Juliane

    MISO is an optimization framework for solving computationally expensive mixed-integer, black-box, global optimization problems. MISO uses surrogate models to approximate the computationally expensive objective function. Hence, derivative information, which is generally unavailable for black-box simulation objective functions, is not needed. MISO allows the user to choose the initial experimental design strategy, the type of surrogate model, and the sampling strategy.

  13. Why the Particle-in-a-Box Model Works Well for Cyanine Dyes but Not for Conjugated Polyenes

    ERIC Educational Resources Information Center

    Autschbach, Jochen

    2007-01-01

    We investigate why the particle-in-a-box (PB) model works well for calculating the absorption wavelengths of cyanine dyes and why it does not work for conjugated polyenes. The PB model is immensely useful in the classroom, but owing to its highly approximate character there is little reason to expect that it can yield quantitative agreement with…
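    For reference, the standard free-electron (particle-in-a-box) estimate behind such comparisons is easy to state: with N pi electrons filling the box levels pairwise, the HOMO is n = N/2, the lowest transition energy is DeltaE = h^2 (N + 1) / (8 m L^2), and so lambda = 8 m c L^2 / (h (N + 1)). The sketch below just evaluates this textbook formula; the chain length and electron count in the example are arbitrary assumptions, not values from the article.

        # Free-electron estimate of the lowest pi -> pi* absorption wavelength.
        h = 6.626e-34   # Planck constant, J s
        m = 9.109e-31   # electron mass, kg
        c = 2.998e8     # speed of light, m/s

        def pib_wavelength(L, N):
            # Absorption wavelength (m) for a box of length L (m) holding N pi electrons.
            return 8 * m * c * L**2 / (h * (N + 1))

        # Illustrative cyanine-like chain: N = 6 pi electrons, box length ~0.85 nm (assumed)
        print(pib_wavelength(0.85e-9, 6) * 1e9, "nm")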

  14. Examination of the four-fifths law for longitudinal third-order moments in incompressible magnetohydrodynamic turbulence in a periodic box.

    PubMed

    Yoshimatsu, Katsunori

    2012-06-01

    The four-fifths law for third-order longitudinal moments is examined, using direct numerical simulation (DNS) data on three-dimensional (3D) forced incompressible magnetohydrodynamic (MHD) turbulence without a uniformly imposed magnetic field in a periodic box. The magnetic Prandtl number is set to one, and the number of grid points is 512^3. A generalized Kármán-Howarth-Kolmogorov equation for second-order velocity moments in isotropic MHD turbulence is extended to anisotropic MHD turbulence by means of a spherical average over the direction of r. Here, r is a separation vector. The viscous, forcing, anisotropic and nonstationary terms in the generalized equation are quantified. It is found that the influence of the anisotropic terms on the four-fifths law is negligible at small scales, compared to that of the viscous term. However, the influence of the directional anisotropy, which is measured by the departure of the third-order moments in a particular direction of r from the spherically averaged ones, on the four-fifths law is suggested to be substantial, at least in the case studied here.

  15. Proceedings of the Third International Workshop on Multistrategy Learning, May 23-25 Harpers Ferry, WV.

    DTIC Science & Technology

    1996-09-16

    approaches are: adaptive filtering; single exponential smoothing (Brown, 1963); the Box-Jenkins methodology (ARIMA modeling; Box and Jenkins, 1976); linear exponential smoothing, i.e. Holt's two-parameter approach (Holt et al., 1960); and Winters' three-parameter method (Winters, 1960). ... However, there are two very crucial disadvantages: the most important point in ARIMA modeling is model identification. As shown in

  16. HAPEX-Sahel: A large-scale study of land-atmosphere interactions in the semi-arid tropics

    NASA Technical Reports Server (NTRS)

    Gutorbe, J-P.; Lebel, T.; Tinga, A.; Bessemoulin, P.; Brouwer, J.; Dolman, A.J.; Engman, E. T.; Gash, J. H. C.; Hoepffner, M.; Kabat, P.

    1994-01-01

    The Hydrologic Atmospheric Pilot EXperiment in the Sahel (HAPEX-Sahel) was carried out in Niger, West Africa, during 1991-1992, with an intensive observation period (IOP) in August-October 1992. It aims at improving the parameterization of land surface atmospheric interactions at the Global Circulation Model (GCM) gridbox scale. The experiment combines remote sensing and ground based measurements with hydrological and meteorological modeling to develop aggregation techniques for use in large scale estimates of the hydrological and meteorological behavior of large areas in the Sahel. The experimental strategy consisted of a period of intensive measurements during the transition period of the rainy to the dry season, backed up by a series of long term measurements in a 1 by 1 deg square in Niger. Three 'supersites' were instrumented with a variety of hydrological and (micro) meteorological equipment to provide detailed information on the surface energy exchange at the local scale. Boundary layer measurements and aircraft measurements were used to provide information at scales of 100-500 sq km. All relevant remote sensing images were obtained for this period. This program of measurements is now being analyzed and an extensive modelling program is under way to aggregate the information at all scales up to the GCM grid box scale. The experimental strategy and some preliminary results of the IOP are described.

  17. Modeling radiative transfer with the doubling and adding approach in a climate GCM setting

    NASA Astrophysics Data System (ADS)

    Lacis, A. A.

    2017-12-01

    The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle, makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost issues. In regard to the accurate methods of calculating multiple scattering that are available, their computational cost is far too prohibitive for climate GCM applications. Utilization of two-stream-type radiative transfer approximations may be computationally fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM, which is an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid-box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
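    The core of any adding/doubling scheme is the combination rule for two scattering layers. The scalar sketch below (reflectance R and transmittance T only, symmetric layers, no azimuth or polarization dependence) is a heavily simplified stand-in for the full angular treatment; it is not the GISS single-gauss-point look-up formulation itself, and the starting values are arbitrary.

        def add_layers(R1, T1, R2, T2):
            # Standard scalar adding relations: the factor 1/(1 - R1*R2) sums the
            # geometric series of multiple reflections between the two layers.
            denom = 1.0 - R1 * R2
            R = R1 + T1 * R2 * T1 / denom
            T = T1 * T2 / denom
            return R, T

        def doubling(R, T, n):
            # Double a thin homogeneous layer n times (optical depth grows by 2**n).
            for _ in range(n):
                R, T = add_layers(R, T, R, T)
            return R, T

        # Build up a thick layer from a very thin, slightly absorbing one
        print(doubling(0.001, 0.998, 10))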

  18. Impact of Lightning-NO Emissions on Summertime U.S. Photochemistry as Determined Using the CMAQ Model with NLDN-Constrained Flash Rates

    NASA Technical Reports Server (NTRS)

    Allen, Dale; Pickering, Kenneth; Pinder, Robert; Koshak, William; Pierce, Thomas

    2011-01-01

    Lightning-NO emissions are responsible for 15-30 ppbv enhancements in upper tropospheric ozone over the eastern United States during summertime. Enhancements vary from year to year but were particularly large during the summer of 2006, a period during which meteorological conditions were particularly conducive to ozone formation. A lightning-NO parameterization has been developed that can be used with the CMAQ model. Lightning-NO emissions in this scheme are assumed to be proportional to convective precipitation rate and scaled so that monthly average flash rates in each grid box match National Lightning Detection Network (NLDN) observed flash rates after adjusting for climatological intracloud to cloud-to-ground (IC/CG) ratios. The contribution of lightning-NO emissions to eastern United States NOx and ozone distributions during the summer of 2006 will be evaluated by comparing results of 12-km CMAQ simulations with and without lightning-NO emissions to measurements from the IONS field campaign and to satellite retrievals from the Tropospheric Emission Spectrometer (TES) and the Ozone Monitoring Instrument (OMI) aboard the Aura satellite. Special attention will be paid to the impact of the assumed vertical distribution of emissions on upper tropospheric NOx and ozone amounts.
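    The scaling step described above can be sketched in a few lines. This is an assumed illustration of the general idea only, not the CMAQ parameterization itself: monthly lightning-NO in a grid box is distributed in time like convective precipitation and rescaled so that the implied monthly flash total matches NLDN cloud-to-ground counts inflated by a climatological IC/CG ratio. The NO yield per flash and all sample numbers are placeholders.

        import numpy as np

        def scale_lightning_no(conv_precip, nldn_cg_flashes, ic_cg_ratio, no_per_flash=250.0):
            # conv_precip: time series of convective precipitation in the grid box (proxy)
            # nldn_cg_flashes: observed monthly cloud-to-ground flash count in the box
            # ic_cg_ratio: climatological intracloud to cloud-to-ground ratio
            # no_per_flash: assumed NO production per flash, in moles (placeholder)
            total_flashes = nldn_cg_flashes * (1.0 + ic_cg_ratio)     # CG plus IC
            proxy = np.asarray(conv_precip, dtype=float)
            scale = total_flashes / proxy.sum() if proxy.sum() > 0 else 0.0
            flashes_per_step = scale * proxy                          # distributed like precip
            return no_per_flash * flashes_per_step                    # moles NO per time step

        # Toy example: hourly convective precipitation over one month in one grid box
        precip = np.random.default_rng(3).gamma(0.2, 1.0, size=720)
        emissions = scale_lightning_no(precip, nldn_cg_flashes=120, ic_cg_ratio=3.0)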

  19. Toward a Unified Representation of Atmospheric Convection in Variable-Resolution Climate Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walko, Robert

    2016-11-07

    The purpose of this project was to improve the representation of convection in atmospheric weather and climate models that employ computational grids with spatially-variable resolution. Specifically, our work targeted models whose grids are fine enough over selected regions that convection is resolved explicitly, while over other regions the grid is coarser and convection is represented as a subgrid-scale process. The working criterion for a successful scheme for representing convection over this range of grid resolution was that identical convective environments must produce very similar convective responses (i.e., the same precipitation amount, rate, and timing, and the same modification of the atmospheric profile) regardless of grid scale. The need for such a convective scheme has increased in recent years as more global weather and climate models have adopted variable resolution meshes that are often extended into the range of resolving convection in selected locations.

  20. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media has great importance for understanding and interpreting the influences of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid size and time step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper developed a staggered-grid finite-difference scheme with variable grid size and time step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.
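
    A common reason for pairing a variable grid size with a variable time step is the explicit-scheme stability limit, which ties the local time step to the local grid spacing and maximum wave speed. The sketch below shows only that generic CFL-style bookkeeping for two hypothetical regions; the coefficient, spacings, and velocities are illustrative, and the paper's actual coefficients and interpolation scheme are not reproduced here.

    ```python
    def stable_time_step(dx, v_max, cfl=0.5):
        """Generic CFL-style bound for an explicit FD scheme: dt <= cfl * dx / v_max.
        The admissible coefficient depends on the stencil, order, and dimensionality;
        0.5 is purely illustrative."""
        return cfl * dx / v_max

    # Hypothetical two-region model: a fine grid around a small gas-bearing structure
    # and a coarse grid elsewhere, each with its own maximum wave velocity (m, m/s).
    dt_fine = stable_time_step(dx=2.0, v_max=4500.0)
    dt_coarse = stable_time_step(dx=10.0, v_max=3000.0)

    # With locally adaptive time stepping, each region advances with its own dt
    # instead of forcing the whole model to the smallest (fine-grid) step.
    print(dt_fine, dt_coarse, dt_coarse / dt_fine)
    ```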

  1. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    ERIC Educational Resources Information Center

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  2. A new ghost-node method for linking different models and initial investigations of heterogeneity and nonmatching grids

    USGS Publications Warehouse

    Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.

    2007-01-01

    A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.

  3. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, problems, as well as the methods, tools, and constraints that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components each of which was modeled by an autonomous team. ICEM-CFD, a CAD based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.

  4. Summary of Data from the Fifth AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Levy, David W.; Laflin, Kelly R.; Tinoco, Edward N.; Vassberg, John C.; Mani, Mori; Rider, Ben; Rumsey, Chris; Wahls, Richard A.; Morrison, Joseph H.; Brodersen, Olaf P.

    2013-01-01

    Results from the Fifth AIAA CFD Drag Prediction Workshop (DPW-V) are presented. As with past workshops, numerical calculations are performed using industry-relevant geometry, methodology, and test cases. This workshop focused on force/moment predictions for the NASA Common Research Model wing-body configuration, including a grid refinement study and an optional buffet study. The grid refinement study used a common grid sequence derived from a multiblock topology structured grid. Six levels of refinement were created resulting in grids ranging from 0.64x10(exp 6) to 138x10(exp 6) hexahedra - a much larger range than is typically seen. The grids were then transformed into structured overset and hexahedral, prismatic, tetrahedral, and hybrid unstructured formats all using the same basic cloud of points. This unique collection of grids was designed to isolate the effects of grid type and solution algorithm by using identical point distributions. This study showed reduced scatter and standard deviation from previous workshops. The second test case studied buffet onset at M=0.85 using the Medium grid (5.1x10(exp 6) nodes) from the above described sequence. The prescribed alpha sweep used finely spaced intervals through the zone where wing separation was expected to begin. Some solutions exhibited a large side of body separation bubble that was not observed in the wind tunnel results. An optional third case used three sets of geometry, grids, and conditions from the Turbulence Model Resource website prepared by the Turbulence Model Benchmarking Working Group. These simple cases were intended to help identify potential differences in turbulence model implementation. Although a few outliers and issues affecting consistency were identified, the majority of participants produced consistent results.

  5. Atmospheric Boundary Layer Modeling for Combined Meteorology and Air Quality Systems

    EPA Science Inventory

    Atmospheric Eulerian grid models for mesoscale and larger applications require sub-grid models for turbulent vertical exchange processes, particularly within the Planetary Boundary Layer (PBL). In combined meteorology and air quality modeling systems consistent PBL modeling of wi...

  6. Applying Turbulence Models to Hydroturbine Flows: A Sensitivity Analysis Using the GAMM Francis Turbine

    NASA Astrophysics Data System (ADS)

    Lewis, Bryan; Cimbala, John; Wouden, Alex

    2011-11-01

    Turbulence models are generally developed to study common academic geometries, such as flat plates and channels. Creating quality computational grids for such geometries is trivial, and allows stringent requirements to be met for boundary layer grid refinement. However, engineering applications, such as flow through hydroturbines, require the analysis of complex, highly curved geometries. To produce body-fitted grids for such geometries, the mesh quality requirements must be relaxed. Relaxing these requirements, along with the complexity of rotating flows, forces turbulence models to be employed beyond their developed scope. This study explores the solution sensitivity to boundary layer grid quality for various turbulence models and boundary conditions currently implemented in OpenFOAM. The following models are presented: k-omega, k-omega SST, k-epsilon, realizable k-epsilon, and RNG k-epsilon. Standard wall functions, adaptive wall functions, and sub-grid integration are compared using various grid refinements. The chosen geometry is the GAMM Francis Turbine because experimental data and comparison computational results are available for this turbine. This research was supported by a grant from the DoE and a National Defense Science and Engineering Graduate Fellowship.

  7. A guide to the use of the pressure disk rotor model as implemented in INS3D-UP

    NASA Technical Reports Server (NTRS)

    Chaffin, Mark S.

    1995-01-01

    This is a guide for the use of the pressure disk rotor model that has been placed in the incompressible Navier-Stokes code INS3D-UP. The pressure disk rotor model approximates a helicopter rotor or propeller in a time averaged manner and is intended to simulate the effect of a rotor in forward flight on the fuselage or the effect of a propeller on other aerodynamic components. The model uses a modified actuator disk that allows the pressure jump across the disk to vary with radius and azimuth. The cyclic and collective blade pitch angles needed to achieve a specified thrust coefficient and zero moment about the hub are predicted. The method has been validated with experimentally measured mean induced inflow velocities as well as surface pressures on a generic fuselage. Overset grids, sometimes referred to as Chimera grids, are used to simplify the grid generation process. The pressure disk model is applied to a cylindrical grid which is embedded in the grid or grids used for the rest of the configuration. This document will outline the development of the method, and present input and results for a sample case.

  8. Ecology and Economics of Using Native Managed Bees for Almond Pollination.

    PubMed

    Koh, Insu; Lonsdorf, Eric V; Artz, Derek R; Pitts-Singer, Theresa L; Ricketts, Taylor H

    2018-02-09

    Native managed bees can improve crop pollination, but a general framework for evaluating the associated economic costs and benefits has not been developed. We conducted a cost-benefit analysis to assess how managing blue orchard bees (Osmia lignaria Say [Hymenoptera: Megachilidae]) alongside honey bees (Apis mellifera Linnaeus [Hymenoptera: Apidae]) can affect profits for almond growers in California. Specifically, we studied how adjusting three strategies can influence profits: (1) number of released O. lignaria bees, (2) density of artificial nest boxes, and (3) number of nest cavities (tubes) per box. We developed an ecological model for the effects of pollinator activity on almond yields, validated the model with published data, and then estimated changes in profits for different management strategies. Our model shows that almond yields increase with O. lignaria foraging density, even where honey bees are already in use. Our cost-benefit analysis shows that profit ranged from -US$1,800 to US$2,800/acre given different combinations of the three strategies. Adding nest boxes had the greatest effect; we predict an increase in profit between low and high nest box density strategies (2.5 and 10 boxes/acre). In fact, the number of released bees and the availability of nest tubes had relatively small effects in the high nest box density strategies. This suggests that growers could improve profits by simply adding more nest boxes with a moderate number of tubes in each. Our approach can support grower decisions regarding integrated crop pollination and highlight the importance of a comprehensive ecological economic framework for assessing these decisions. © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Ancestral and more recently acquired syntenic relationships of MADS-box genes uncovered by the Physcomitrella patens pseudochromosomal genome assembly.

    PubMed

    Barker, Elizabeth I; Ashton, Neil W

    2016-03-01

    The Physcomitrella pseudochromosomal genome assembly revealed previously invisible synteny enabling realisation of the full potential of shared synteny as a tool for probing evolution of this plant's MADS-box gene family. Assembly of the sequenced genome of Physcomitrella patens into 27 mega-scaffolds (pseudochromosomes) has confirmed the major predictions of our earlier model of expansion of the MADS-box gene family in the Physcomitrella lineage. Additionally, microsynteny has been conserved in the immediate vicinity of some recent duplicates of MADS-box genes. However, comparison of non-syntenic MIKC MADS-box genes and neighbouring genes indicates that chromosomal rearrangements and/or sequence degeneration have destroyed shared synteny over longer distances (macrosynteny) around MADS-box genes despite subsets comprising two or three MIKC genes having remained syntenic. In contrast, half of the type I MADS-box genes have been transposed creating new syntenic relations with MIKC genes. This implies that conservation of ancient ancestral synteny of MIKC genes and of more recently acquired synteny of type I and MIKC genes may be selectively advantageous. Our revised model predicts the birth rate of MIKC genes in Physcomitrella is higher than that of type I genes. However, this difference is attributable to an early tandem duplication and an early segmental duplication of MIKC genes prior to the two polyploidisations that account for most of the expansion of the MADS-box gene family in Physcomitrella. Furthermore, this early segmental duplication spawned two chromosomal lineages: one with a MIKC (C) gene, belonging to the PPM2 clade, in close proximity to one or a pair of MIKC* genes and another with a MIKC (C) gene, belonging to the PpMADS-S clade, characterised by greater separation from syntenic MIKC* genes. Our model has evolutionary implications for the Physcomitrella karyotype.

  10. Impact of surface coupling grids on tropical cyclone extremes in high-resolution atmospheric simulations

    DOE PAGES

    Zarzycki, Colin M.; Reed, Kevin A.; Bacmeister, Julio T.; ...

    2016-02-25

    This article discusses the sensitivity of tropical cyclone climatology to surface coupling strategy in high-resolution configurations of the Community Earth System Model. Using two supported model setups, we demonstrate that the choice of grid on which the lowest model level wind stress and surface fluxes are computed may lead to differences in cyclone strength in multi-decadal climate simulations, particularly for the most intense cyclones. Using a deterministic framework, we show that when these surface quantities are calculated on an ocean grid that is coarser than the atmosphere, the computed frictional stress is misaligned with wind vectors in individual atmospheric grid cells. This reduces the effective surface drag, and results in more intense cyclones when compared to a model configuration where the ocean and atmosphere are of equivalent resolution. Our results demonstrate that the choice of computation grid for atmosphere–ocean interactions is non-negligible when considering climate extremes at high horizontal resolution, especially when model components are on highly disparate grids.

  11. PHOTOCHEMICAL SIMULATIONS OF POINT SOURCE EMISSIONS WITH THE MODELS-3 CMAQ PLUME-IN-GRID APPROACH

    EPA Science Inventory

    A plume-in-grid (PinG) approach has been designed to provide a realistic treatment for the simulation of the dynamic and chemical processes impacting pollutant species in major point source plumes during a subgrid scale phase within an Eulerian grid modeling framework. The PinG sci...

  12. A variable resolution nonhydrostatic global atmospheric semi-implicit semi-Lagrangian model

    NASA Astrophysics Data System (ADS)

    Pouliot, George Antoine

    2000-10-01

    The objective of this project is to develop a variable-resolution finite difference adiabatic global nonhydrostatic semi-implicit semi-Lagrangian (SISL) model based on the fully compressible nonhydrostatic atmospheric equations. To achieve this goal, a three-dimensional variable resolution dynamical core was developed and tested. The main characteristics of the dynamical core can be summarized as follows: Spherical coordinates were used in a global domain. A hydrostatic/nonhydrostatic switch was incorporated into the dynamical equations to use the fully compressible atmospheric equations. A generalized horizontal variable resolution grid was developed and incorporated into the model. For a variable resolution grid, in contrast to a uniform resolution grid, the order of accuracy of finite difference approximations is formally lost but remains close to the order of accuracy associated with the uniform resolution grid provided the grid stretching is not too significant. The SISL numerical scheme was implemented for the fully compressible set of equations. In addition, the generalized minimum residual (GMRES) method with restart and preconditioner was used to solve the three-dimensional elliptic equation derived from the discretized system of equations. The three-dimensional momentum equation was integrated in vector-form to incorporate the metric terms in the calculations of the trajectories. Using global re-analysis data for a specific test case, the model was compared to similar SISL models previously developed. Reasonable agreement between the model and the other independently developed models was obtained. The Held-Suarez test for dynamical cores was used for a long integration and the model was successfully integrated for up to 1200 days. Idealized topography was used to test the variable resolution component of the model. Nonhydrostatic effects were simulated at grid spacings of 400 meters with idealized topography and uniform flow. Using a high-resolution topographic data set and the variable resolution grid, sets of experiments with increasing resolution were performed over specific regions of interest. Using realistic initial conditions derived from re-analysis fields, nonhydrostatic effects were significant for grid spacings on the order of 0.1 degrees with orographic forcing. If the model code was adapted for use in a message passing interface (MPI) on a parallel supercomputer today, it was estimated that a global grid spacing of 0.1 degrees would be achievable for a global model. In this case, nonhydrostatic effects would be significant for most areas. A variable resolution grid in a global model provides a unified and flexible approach to many climate and numerical weather prediction problems. The ability to configure the model from very fine to very coarse resolutions allows for the simulation of atmospheric phenomena at different scales using the same code. We have developed a dynamical core illustrating the feasibility of using a variable resolution in a global model.
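
    The restarted, preconditioned GMRES solve mentioned above follows a standard pattern; the sketch below shows that pattern with SciPy on a stand-in sparse system (a 2-D Laplacian with a Jacobi preconditioner), which is only an illustration and not the model's elliptic operator or preconditioner.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Stand-in sparse elliptic operator (a 2-D Laplacian on an n-by-n grid) in place
    # of the model's discretized three-dimensional elliptic equation.
    n = 30
    lap1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(lap1d, lap1d).tocsr()
    b = np.ones(A.shape[0])

    # Simple diagonal (Jacobi) preconditioner, purely as an illustrative stand-in.
    diag = A.diagonal()
    M = spla.LinearOperator(A.shape, matvec=lambda x: x / diag, dtype=float)

    x, info = spla.gmres(A, b, restart=30, maxiter=1000, M=M)
    residual = np.linalg.norm(A @ x - b)
    print("converged" if info == 0 else f"gmres info={info}", residual)
    ```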

  13. A multi-resolution approach to electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-07-01

    We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to that of the staggered-grid solution, while retaining the symmetry of the coefficient matrix. A comparison between multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves on computational efficiency without compromising the accuracy of the solution.

  14. Grid Block Design Based on Monte Carlo Simulated Dosimetry, the Linear Quadratic and Hug–Kellerer Radiobiological Models

    PubMed Central

    Gholami, Somayeh; Nedaie, Hassan Ali; Longo, Francesco; Ay, Mohammad Reza; Dini, Sharifeh A.; Meigooni, Ali S.

    2017-01-01

    Purpose: The clinical efficacy of Grid therapy has been examined by several investigators. In this project, the hole diameter and hole spacing in Grid blocks were examined to determine the optimum parameters that give a therapeutic advantage. Methods: The evaluations were performed using Monte Carlo (MC) simulation and commonly used radiobiological models. The Geant4 MC code was used to simulate the dose distributions for 25 different Grid blocks with different hole diameters and center-to-center spacing. The therapeutic parameters of these blocks, namely, the therapeutic ratio (TR) and geometrical sparing factor (GSF) were calculated using two different radiobiological models, including the linear quadratic and Hug–Kellerer models. In addition, the ratio of the open to blocked area (ROTBA) is also used as a geometrical parameter for each block design. Comparisons of the TR, GSF, and ROTBA for all of the blocks were used to derive the parameters for an optimum Grid block with the maximum TR, minimum GSF, and optimal ROTBA. A sample of the optimum Grid block was fabricated at our institution. Dosimetric characteristics of this Grid block were measured using an ionization chamber in water phantom, Gafchromic film, and thermoluminescent dosimeters in Solid Water™ phantom materials. Results: The results of these investigations indicated that Grid blocks with hole diameters between 1.00 and 1.25 cm and spacing of 1.7 or 1.8 cm have optimal therapeutic parameters (TR > 1.3 and GSF~0.90). The measured dosimetric characteristics of the optimum Grid blocks including dose profiles, percentage depth dose, dose output factor (cGy/MU), and valley-to-peak ratio were in good agreement (±5%) with the simulated data. Conclusion: In summary, using MC-based dosimetry, two radiobiological models, and previously published clinical data, we have introduced a method to design a Grid block with optimum therapeutic response. The simulated data were reproduced by experimental data. PMID:29296035
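
    For reference, the linear-quadratic (LQ) model referred to above gives the surviving fraction after a dose D as SF = exp(-(alpha*D + beta*D^2)). The sketch below evaluates that expression for a crude two-compartment peak/valley dose pattern; all doses, radiosensitivity parameters, and the open-area fraction are invented for illustration, and the paper's TR and GSF definitions are not reproduced exactly.

    ```python
    import numpy as np

    def lq_survival(dose, alpha, beta):
        """Linear-quadratic cell survival fraction: SF = exp(-(alpha*D + beta*D^2))."""
        return np.exp(-(alpha * dose + beta * dose ** 2))

    # Illustrative radiosensitivity parameters (Gy^-1, Gy^-2); not the paper's values.
    alpha_tumour, beta_tumour = 0.35, 0.035
    alpha_normal, beta_normal = 0.15, 0.05

    # A Grid block delivers a high dose under the holes ("peaks") and a low dose under
    # the blocked regions ("valleys"); a simple two-compartment average follows.
    peak_dose, valley_dose, open_fraction = 15.0, 3.0, 0.4   # Gy, Gy, ~ROTBA

    def mean_survival(alpha, beta):
        return (open_fraction * lq_survival(peak_dose, alpha, beta)
                + (1.0 - open_fraction) * lq_survival(valley_dose, alpha, beta))

    print("tumour survival :", mean_survival(alpha_tumour, beta_tumour))
    print("normal survival :", mean_survival(alpha_normal, beta_normal))
    ```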

  15. Sub-grid drag model for immersed vertical cylinders in fluidized beds

    DOE PAGES

    Verma, Vikrant; Li, Tingwen; Dietiker, Jean -Francois; ...

    2017-01-03

    Immersed vertical cylinders are often used as heat exchangers in gas-solid fluidized beds. Computational Fluid Dynamics (CFD) simulations are computationally expensive for large scale systems with bundles of cylinders. Therefore sub-grid models are required to facilitate simulations on a coarse grid, where internal cylinders are treated as a porous medium. The influence of cylinders on the gas-solid flow tends to enhance segregation and affect the gas-solid drag. A correction to gas-solid drag must be modeled using a suitable sub-grid constitutive relationship. In the past, Sarkar et al. have developed a sub-grid drag model for horizontal cylinder arrays based on 2D simulations. However, the effect of a vertical cylinder arrangement was not considered due to computational complexities. In this study, highly resolved 3D simulations with vertical cylinders were performed in small periodic domains. These simulations were filtered to construct a sub-grid drag model which can then be implemented in coarse-grid simulations. Gas-solid drag was filtered for different solids fractions and a significant reduction in drag was identified when compared with simulations without cylinders and simulations with horizontal cylinders. Slip velocities significantly increase when vertical cylinders are present. Lastly, vertical suspension drag due to vertical cylinders is insignificant; however, substantial horizontal suspension drag is observed, which is consistent with the finding for horizontal cylinders.

  16. GWM-2005 - A Groundwater-Management Process for MODFLOW-2005 with Local Grid Refinement (LGR) Capability

    USGS Publications Warehouse

    Ahlfeld, David P.; Baker, Kristine M.; Barlow, Paul M.

    2009-01-01

    This report describes the Groundwater-Management (GWM) Process for MODFLOW-2005, the 2005 version of the U.S. Geological Survey modular three-dimensional groundwater model. GWM can solve a broad range of groundwater-management problems by combined use of simulation- and optimization-modeling techniques. These problems include limiting groundwater-level declines or streamflow depletions, managing groundwater withdrawals, and conjunctively using groundwater and surface-water resources. GWM was initially released for the 2000 version of MODFLOW. Several modifications and enhancements have been made to GWM since its initial release to increase the scope of the program's capabilities and to improve its operation and reporting of results. The new code, which is called GWM-2005, also was designed to support the local grid refinement capability of MODFLOW-2005. Local grid refinement allows for the simulation of one or more higher resolution local grids (referred to as child models) within a coarser grid parent model. Local grid refinement is often needed to improve simulation accuracy in regions where hydraulic gradients change substantially over short distances or in areas requiring detailed representation of aquifer heterogeneity. GWM-2005 can be used to formulate and solve groundwater-management problems that include components in both parent and child models. Although local grid refinement increases simulation accuracy, it can also substantially increase simulation run times.

  17. The Canadian Precipitation Analysis (CaPA): Evaluation of the statistical interpolation scheme

    NASA Astrophysics Data System (ADS)

    Evans, Andrea; Rasmussen, Peter; Fortin, Vincent

    2013-04-01

    CaPA (Canadian Precipitation Analysis) is a data assimilation system which employs statistical interpolation to combine observed precipitation with gridded precipitation fields produced by Environment Canada's Global Environmental Multiscale (GEM) climate model into a final gridded precipitation analysis. Precipitation is important in many fields and applications, including agricultural water management projects, flood control programs, and hydroelectric power generation planning. Precipitation is a key input to hydrological models, and there is a desire to have access to the best available information about precipitation in time and space. The principal goal of CaPA is to produce this type of information. In order to perform the necessary statistical interpolation, CaPA requires the estimation of a semi-variogram. This semi-variogram is used to describe the spatial correlations between precipitation innovations, defined as the observed precipitation amounts minus the GEM forecasted amounts predicted at the observation locations. Currently, CaPA uses a single isotropic variogram across the entire analysis domain. The present project investigates the implications of this choice by first conducting a basic variographic analysis of precipitation innovation data across the Canadian prairies, with specific interest in identifying and quantifying potential anisotropy within the domain. This focus is further expanded by identifying the effect of storm type on the variogram. The ultimate goal of the variographic analysis is to develop improved semi-variograms for CaPA that better capture the spatial complexities of precipitation over the Canadian prairies. CaPA presently applies a Box-Cox data transformation to both the observations and the GEM data, prior to the calculation of the innovations. The data transformation is necessary to satisfy the normal distribution assumption, but introduces a significant bias. The second part of the investigation aims at devising a bias correction scheme based on a moving-window averaging technique. For both the variogram and bias correction components of this investigation, a series of trial runs are conducted to evaluate the impact of these changes on the resulting CaPA precipitation analyses.
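
    The Box-Cox step and the retransformation bias mentioned above can be illustrated with a generic sketch: transform a skewed, strictly positive sample, then note that naively back-transforming the mean of the transformed values underestimates the mean of the original field. The synthetic data and fitted lambda below are illustrative only; CaPA's actual transformation settings and bias-correction scheme are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats, special

    rng = np.random.default_rng(0)
    precip = rng.gamma(shape=0.8, scale=5.0, size=5000) + 0.01   # skewed, positive

    # Box-Cox transform: y = (x**lam - 1)/lam for lam != 0, log(x) for lam == 0.
    transformed, lam = stats.boxcox(precip)
    print("fitted lambda  :", lam)

    # Back-transforming the *mean* of the transformed field is biased low for a
    # right-skewed variable -- the kind of bias a correction scheme must address.
    print("true mean      :", precip.mean())
    print("biased estimate:", special.inv_boxcox(transformed.mean(), lam))
    ```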

  18. Rugged: an operational, open-source solution for Sentinel-2 mapping

    NASA Astrophysics Data System (ADS)

    Maisonobe, Luc; Seyral, Jean; Prat, Guylaine; Guinet, Jonathan; Espesset, Aude

    2015-10-01

    When you map the entire Earth every 5 days with the aim of generating high-quality time series over land, there is no room for geometrical error: the algorithms have to be stable, reliable, and precise. Rugged, a new open-source library for pixel geolocation, is at the geometrical heart of the operational processing for Sentinel-2. Rugged performs sensor-to-terrain mapping taking into account ground Digital Elevation Models, Earth rotation with all its small irregularities, on-board sensor pixel individual lines-of-sight, spacecraft motion and attitude, and all significant physical effects. It provides direct and inverse location, i.e. it allows the accurate computation of which ground point is viewed from a specific pixel in a spacecraft instrument, and conversely which pixel will view a specified ground point. Direct and inverse location can be used to perform full ortho-rectification of images and correlation between sensors observing the same area. Implemented as an add-on for Orekit (Orbits Extrapolation KIT; a low-level space dynamics library), Rugged also offers the possibility of simulating satellite motion and attitude auxiliary data using Orekit's full orbit propagation capability. This is a considerable advantage for test data generation and mission simulation activities. Together with the Orfeo ToolBox (OTB) image processing library, Rugged provides the algorithmic core of Sentinel-2 Instrument Processing Facilities. The S2 complex viewing model - with 12 staggered push-broom detectors and 13 spectral bands - is built using Rugged objects, enabling the computation of rectification grids for mapping between cartographic and focal plane coordinates. These grids are passed to the OTB library for further image resampling, thus completing the ortho-rectification chain. Sentinel-2 stringent operational requirements to process several terabytes of data per week represented a tough challenge, though one that was well met by Rugged in terms of the robustness and performance of the library.

  19. Genome-wide analysis of the SBP-box gene family in Chinese cabbage (Brassica rapa subsp. pekinensis).

    PubMed

    Tan, Hua-Wei; Song, Xiao-Ming; Duan, Wei-Ke; Wang, Yan; Hou, Xi-Lin

    2015-11-01

    The SQUAMOSA PROMOTER BINDING PROTEIN (SBP)-box gene family contains highly conserved plant-specific transcription factors that play an important role in plant development, especially in flowering. Chinese cabbage (Brassica rapa subsp. pekinensis) is a leafy vegetable grown worldwide and is used as a model crop for research in genome duplication. The present study aimed to characterize the SBP-box transcription factor genes in Chinese cabbage. Twenty-nine SBP-box genes were identified in the Chinese cabbage genome and classified into six groups. We identified 23 orthologous and 5 co-orthologous SBP-box gene pairs between Chinese cabbage and Arabidopsis. An interaction network among these genes was constructed. Sixteen SBP-box genes were expressed more abundantly in flowers than in other tissues, suggesting their involvement in flowering. We show that the MiR156/157 family members may regulate the coding regions or 3'-UTR regions of Chinese cabbage SBP-box genes. As SBP-box genes were found to potentially participate in some plant development pathways, quantitative real-time PCR analysis was performed and showed that Chinese cabbage SBP-box genes were also sensitive to the exogenous hormones methyl jasmonic acid and salicylic acid. The SBP-box genes have undergone gene duplication and loss, evolving a more refined regulation for diverse stimulation in plant tissues. Our comprehensive genome-wide analysis provides insights into the SBP-box gene family of Chinese cabbage.

  20. The importance of topography controlled sub-grid process heterogeneity in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.

    2015-12-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge based model constraints reduces model uncertainty; and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as overall measure for model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 % respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. Besides, it was shown that suitable semi-quantitative prior constraints in combination with the transfer function based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  1. The importance of topography-controlled sub-grid process heterogeneity and semi-quantitative prior constraints in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-03-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  2. Analytical Computation of Effective Grid Parameters for the Finite-Difference Seismic Waveform Modeling With the PREM, IASP91, SP6, and AK135

    NASA Astrophysics Data System (ADS)

    Toyokuni, G.; Takenaka, H.

    2007-12-01

    We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models by analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside the grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by the volume harmonic averaging of elastic moduli and volume arithmetic averaging of density in grid cells. This scheme enables us to put a material discontinuity into an arbitrary position in the spatial grids. Most of the methods used for synthetic seismogram calculation today rely on standard Earth models, such as the PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For the FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. This scheme can analytically obtain the integral volume averages through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which is open for use in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
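
    The averaging rule described above (volume harmonic averaging of elastic moduli and volume arithmetic averaging of density within a grid cell) can be written as a short numerical sketch. The material fractions and properties below are invented for illustration; the paper's contribution is the analytical evaluation of these averages for the named Earth models, which is not reproduced here.

    ```python
    import numpy as np

    def effective_parameters(fractions, moduli, densities):
        """Effective grid-cell parameters across a material discontinuity:
        volume harmonic average of the elastic moduli and volume arithmetic
        average of the density (volume fractions must sum to 1)."""
        f = np.asarray(fractions, dtype=float)
        mu_eff = 1.0 / np.sum(f / np.asarray(moduli, dtype=float))
        rho_eff = np.sum(f * np.asarray(densities, dtype=float))
        return mu_eff, rho_eff

    # Hypothetical cell straddling a discontinuity: 30% of its volume in material 1
    # and 70% in material 2 (moduli in GPa, densities in kg/m^3; values illustrative).
    mu_eff, rho_eff = effective_parameters([0.3, 0.7], [70.0, 120.0], [2900.0, 3300.0])
    print(mu_eff, rho_eff)
    ```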

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaunak, S.K.; Soni, B.K.

    With research interests shifting away from primarily military or industrial applications to more environmental applications, the area of ocean modelling has become an increasingly popular and exciting area of research. This paper presents a CIPS (Computational Field Simulation) system customized for the solution of oceanographic problems. This system deals primarily with the generation of simple, yet efficient grids for coastal areas. The two primary grid approaches are both structured in methodology. The first approach is a standard approach which is used in such popular grid generation software packages as GENIE++, EAGLEVIEW, and TIGER, where the user defines boundaries via points, lines, or curves, varies the distribution of points along these boundaries and then creates the interior grid. The second approach is to allow the user to interactively select points on the screen to form the boundary curves and then create the interior grid from these spline curves. The program has been designed with the needs of the ocean modeller in mind so that the modeller can obtain results in a timely yet elegant manner. The modeller performs four basic steps in using the program. First, he selects a region of interest from a popular database. Then, he creates a grid for that region. Next, he sets up boundary and input conditions and runs a circulation model. Finally, the modeller visualizes the output.

  4. Box/peanut and bar structures in edge-on and face-on nearby galaxies in the Sloan Digital Sky Survey - I. Catalogue

    NASA Astrophysics Data System (ADS)

    Yoshino, Akira; Yamauchi, Chisato

    2015-02-01

    We investigate box/peanut and bar structures in image data of edge-on and face-on nearby galaxies taken from the Sloan Digital Sky Survey (SDSS) to present catalogues containing the surface brightness parameters and the morphology classification. About 1700 edge-on galaxies and 2600 face-on galaxies are selected from SDSS DR7 in the g, r and i-bands. The images of each galaxy are fitted with the model of two-dimensional surface brightness of the Sérsic bulge and exponential disk. After removing some irregular data, the box/peanut, bar and other structures are easily distinguished by eye using residual (observed minus model) images. We find 292 box/peanut structures in the 1329 edge-on samples and 630 bar structures in 1890 face-on samples in the i-band, after removing some irregular data. The fraction of box/peanut galaxies is about 22 per cent against the edge-on samples, and that of bar galaxies is about 33 per cent (about 50 per cent if 629 elliptical galaxies are removed) against the face-on samples. Furthermore the strengths of the box/peanuts and bars are evaluated as strong, standard or weak. We find that the strength increases slightly with increasing B/T (bulge-to-total flux ratio), and that the fraction of box/peanuts is generally about a half of that of bars, irrespective of the strength and B/T. Our result supports the idea that a box/peanut is a bar seen edge-on.
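
    The two-dimensional surface-brightness model referred to above combines a Sérsic bulge with an exponential disk; in the usual radial notation (written here only as a reminder of the standard forms, with the per-galaxy parameters left unspecified),

    ```latex
    I(r) \;=\; I_e \exp\!\left\{-b_n\!\left[\left(\frac{r}{r_e}\right)^{1/n} - 1\right]\right\}
            \;+\; I_0 \exp\!\left(-\frac{r}{h}\right),
    ```

    where I_e is the surface brightness at the bulge effective radius r_e, n is the Sérsic index, b_n is fixed so that r_e encloses half of the bulge light, and I_0 and h are the central surface brightness and scale length of the disk; the bulge-to-total ratio B/T follows from integrating the two components.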

  5. Adapting the iSNOBAL model for improved visualization in a GIS environment

    NASA Astrophysics Data System (ADS)

    Johansen, W. J.; Delparte, D.

    2014-12-01

    Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model. It uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study looks at applying this model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches incorporating geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment. This type of environment is suited to both the input grid creation and the visualization of results. The data used for input grid creation can be stored locally or on a web-server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows for interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving towards adapting the model further for use in a 3D gaming engine for improved visualization and interaction.
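
    As an illustration of the kind of kriging step described above, the self-contained sketch below performs ordinary kriging with a spherical variogram onto a small regular grid. The station coordinates, values, and variogram parameters are invented, and the study's actual GIS tools and variogram choices are not reproduced here.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def spherical_variogram(h, nugget=0.0, sill=1.0, rng=5000.0):
        """Spherical variogram model gamma(h) with range `rng`."""
        h = np.asarray(h, dtype=float)
        g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
        return np.where(h < rng, g, sill)

    def ordinary_kriging(xy_obs, z_obs, xy_grid, **vario):
        """Ordinary kriging of scattered observations onto a set of grid points."""
        n = len(z_obs)
        # Kriging system: variogram among observations plus the Lagrange multiplier.
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = spherical_variogram(cdist(xy_obs, xy_obs), **vario)
        A[n, n] = 0.0
        b = np.ones((n + 1, xy_grid.shape[0]))
        b[:n, :] = spherical_variogram(cdist(xy_obs, xy_grid), **vario)
        weights = np.linalg.solve(A, b)       # one weight vector per grid point
        return weights[:n, :].T @ z_obs

    # Hypothetical air-temperature stations (coordinates in metres, values in deg C)
    # interpolated onto a small 5 x 5 output grid.
    xy_obs = np.array([[0.0, 0.0], [4000.0, 1000.0], [2000.0, 3500.0], [500.0, 4200.0]])
    z_obs = np.array([1.5, -0.5, -2.0, -3.1])
    gx, gy = np.meshgrid(np.linspace(0, 4000, 5), np.linspace(0, 4000, 5))
    xy_grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(ordinary_kriging(xy_obs, z_obs, xy_grid, sill=2.0, rng=6000.0).reshape(5, 5))
    ```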

  6. Modelling tidal current energy extraction in large area using a three-dimensional estuary model

    NASA Astrophysics Data System (ADS)

    Chen, Yaling; Lin, Binliang; Lin, Jie

    2014-11-01

    This paper presents a three-dimensional modelling study for simulating tidal current energy extraction in large areas, with a momentum sink term added to the momentum equations. Due to the limits of computational capacity, the grid size of the numerical model is generally much larger than the turbine rotor diameter. Two models, i.e. a local grid refinement model and a coarse grid model, are employed and an idealized estuary is set up. The local grid refinement model is constructed to simulate the power generation of an isolated turbine and its impacts on hydrodynamics. The model is then used to determine the deployment of the turbine farm and to quantify a combined thrust coefficient for multiple turbines located in a grid element of the coarse grid model. The model results indicate that the performance of power extraction is affected by array deployment, with more power generation from outer rows than inner rows due to the velocity deficit caused by upstream turbines. Model results also demonstrate that the large-scale turbine farm has significant effects on the hydrodynamics. The tidal currents are attenuated within the turbine swept area, and both upstream and downstream of the array, while the currents are accelerated above and below the turbines, which contributes to speeding up the wake mixing process behind the arrays. The water levels are raised at both low and high water as the turbine array spans the full width of the estuary. The magnitude of the water level change is found to increase with the array expansion, especially at the low water level.
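
    A common form of the momentum sink term mentioned above represents the thrust of the turbines in a grid element as a quadratic drag. Written per unit mass for the velocity component u (with the coefficient definitions below being the generic ones, not necessarily those adopted in the paper),

    ```latex
    S_u \;=\; -\,\frac{1}{2}\,\frac{N\,C_T\,A_r}{V_{\mathrm{cell}}}\;\lvert\mathbf{u}\rvert\,u ,
    ```

    where N is the number of turbines in the grid element, C_T the (combined) thrust coefficient, A_r the rotor swept area, and V_cell the volume of the grid cell; the combined thrust coefficient is precisely the quantity the local grid refinement model is used to calibrate for the coarse grid model.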

  7. A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model

    DOE PAGES

    Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...

    2016-09-16

    Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.

  9. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  10. Grid Enabled Geospatial Catalogue Web Service

    NASA Technical Reports Server (NTRS)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC)'s Catalogue Service - Web Information Model, this paper proposes a new information model for the Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to the replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and NASA EOS Core System, and service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, especially query on-demand data in the virtual community and retrieve it through the data-related services which provide functions such as subsetting, reformatting, reprojection, etc. This work facilitates geospatial resource sharing and interoperating under the Grid environment, and makes geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also allows researchers to focus on science, and not on issues with computing ability, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  11. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
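
    The Richardson-extrapolation and grid-convergence-index (GCI) estimates referred to above follow a standard recipe; the sketch below applies it to made-up time-averaged values from three successively refined grids. The safety factor and formulae are the commonly used ones, and the numbers are illustrative rather than results from the paper.

    ```python
    import numpy as np

    # Hypothetical time-averaged quantity on three grids, coarse to fine, with a
    # constant refinement ratio r between successive grids.
    f_coarse, f_medium, f_fine = 0.412, 0.395, 0.389
    r = 2.0

    # Observed order of accuracy inferred from the three solutions.
    p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)

    # Richardson-extrapolated estimate of the grid-independent value.
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)

    # Grid convergence index on the fine grid (safety factor 1.25 for a three-grid study).
    gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r ** p - 1.0)

    print(f"observed order p   = {p:.2f}")
    print(f"extrapolated value = {f_exact:.4f}")
    print(f"GCI (fine grid)    = {100.0 * gci_fine:.2f}%")
    ```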

  12. Workshop on Grid Generation and Related Areas

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A collection of papers given at the Workshop on Grid Generation and Related Areas is presented. The purpose of this workshop was to assemble engineers and scientists who are currently working on grid generation for computational fluid dynamics (CFD), surface modeling, and related areas. The objectives were to provide an informal forum on grid generation and related topics, to assess user experience, to identify needs, and to help promote synergy among engineers and scientists working in this area. The workshop consisted of four sessions representative of grid generation and surface modeling research and application within NASA LeRC. Each session contained presentations and an open discussion period.

  13. Modeling and Simulation for an 8 kW Three-Phase Grid-Connected Photo-Voltaic Power System

    NASA Astrophysics Data System (ADS)

    Cen, Zhaohui

    2017-09-01

    Grid-connected Photo-Voltaic (PV) systems rated at the 5-10 kW level have advantages of scalability and energy saving, so they are very typical for small-scale household solar applications. In this paper, an 8 kW three-phase grid-connected PV system model is proposed and studied. In this high-fidelity model, basic PV system components such as solar panels, DC-DC converters, DC-AC inverters and the three-phase utility grid are mathematically modelled and organized into a complete simulation model. Also, an overall power controller with Maximum Power Point Tracking (MPPT) is proposed to achieve both high efficiency in solar energy harvesting and grid-connection stability. Finally, simulation results demonstrate the effectiveness of the PV system model and the proposed controller, and power quality issues are discussed.
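
    The abstract does not give the controller details; as a hedged illustration, the sketch below shows a perturb-and-observe style MPPT loop of the kind commonly used in grid-connected PV simulations. The toy panel curve and step size are invented for the example and are not the parameters of this 8 kW model.

      def pv_power(v):
          """Toy PV panel curve: power as a function of operating voltage (illustrative)."""
          v_oc, i_sc = 40.0, 10.0               # open-circuit voltage, short-circuit current
          i = i_sc * (1.0 - (v / v_oc) ** 7)    # crude single-diode-like shape
          return max(v * i, 0.0)

      def perturb_and_observe(v0=20.0, dv=0.5, steps=100):
          """Classic P&O MPPT: keep stepping the voltage in the direction that raises power."""
          v, p_prev, direction = v0, pv_power(v0), +1
          for _ in range(steps):
              v += direction * dv
              p = pv_power(v)
              if p < p_prev:        # power dropped: reverse the perturbation direction
                  direction = -direction
              p_prev = p
          return v, p_prev

      v_mpp, p_mpp = perturb_and_observe()
      print(f"estimated MPP: {v_mpp:.1f} V, {p_mpp:.0f} W")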

  14. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand whether the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding resolution sensitivity of global atmospheric models.

  15. A computer program for converting rectangular coordinates to latitude-longitude coordinates

    USGS Publications Warehouse

    Rutledge, A.T.

    1989-01-01

    A computer program was developed for converting the coordinates of any rectangular grid on a map to coordinates on a grid that is parallel to lines of equal latitude and longitude. Using this program in conjunction with groundwater flow models, the user can extract data and results from models with varying grid orientations and place these data into a grid structure that is oriented parallel to lines of equal latitude and longitude. All cells in the rectangular grid must have equal dimensions, and all cells in the latitude-longitude grid measure one minute by one minute. This program is applicable if the map used shows lines of equal latitude as arcs and lines of equal longitude as straight lines and assumes that the Earth's surface can be approximated as a sphere. The program user enters the row number, column number, and latitude and longitude of the midpoint of the cell for three test cells on the rectangular grid. The latitude and longitude of boundaries of the rectangular grid also are entered. By solving sets of simultaneous linear equations, the program calculates coefficients that are used for making the conversion. As an option in the program, the user may build a groundwater model file based on a grid that is parallel to lines of equal latitude and longitude. The program reads a data file based on the rectangular coordinates and automatically forms the new data file. (USGS)
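
    A minimal sketch of the coefficient-fitting step described above: given the row, column, latitude, and longitude of three test cells, an affine relation between grid indices and geographic coordinates can be recovered by solving two small linear systems. The test-cell values are made up, and the affine form is a simplification of the report's treatment of latitude lines as arcs.

      import numpy as np

      # Three test cells: (row, column) of cell midpoints and their latitude/longitude
      rows = np.array([10.0, 10.0, 40.0])
      cols = np.array([5.0, 35.0, 5.0])
      lats = np.array([38.500, 38.500, 38.250])
      lons = np.array([-90.500, -90.125, -90.500])

      # Assume lat = a0 + a1*row + a2*col and lon = b0 + b1*row + b2*col,
      # then solve the 3x3 simultaneous linear equations for the coefficients.
      A = np.column_stack([np.ones(3), rows, cols])
      a = np.linalg.solve(A, lats)
      b = np.linalg.solve(A, lons)

      def cell_to_latlon(row, col):
          """Convert a rectangular-grid cell midpoint to latitude/longitude."""
          return a @ [1.0, row, col], b @ [1.0, row, col]

      print(cell_to_latlon(25.0, 20.0))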

  16. Differences in Visual-Spatial Input May Underlie Different Compression Properties of Firing Fields for Grid Cell Modules in Medial Entorhinal Cortex

    PubMed Central

    Raudies, Florian; Hasselmo, Michael E.

    2015-01-01

    Firing fields of grid cells in medial entorhinal cortex show compression or expansion after manipulations of the location of environmental barriers. This compression or expansion could be selective for individual grid cell modules with particular properties of spatial scaling. We present a model for differences in the response of modules to barrier location that arise from different mechanisms for the influence of visual features on the computation of location that drives grid cell firing patterns. These differences could arise from differences in the position of visual features within the visual field. When location was computed from the movement of visual features on the ground plane (optic flow) in the ventral visual field, this resulted in grid cell spatial firing that was not sensitive to barrier location in modules modeled with small spacing between grid cell firing fields. In contrast, when location was computed from static visual features on walls of barriers, i.e. in the more dorsal visual field, this resulted in grid cell spatial firing that compressed or expanded based on the barrier locations in modules modeled with large spacing between grid cell firing fields. This indicates that different grid cell modules might have differential properties for computing location based on visual cues, or the spatial radius of sensitivity to visual cues might differ between modules. PMID:26584432

  17. Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes

    NASA Astrophysics Data System (ADS)

    Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.

    2017-12-01

    We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
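
    A hedged sketch of the aggregation/disaggregation bookkeeping described above, using hypothetical arrays that stand in for CHIRPS seasonal maxima over the watershed; the Bayesian non-stationary frequency fit itself is not reproduced here.

      import numpy as np

      # Hypothetical seasonal precipitation maxima: (years, grid cells in the watershed)
      rng = np.random.default_rng(0)
      grid_extremes = rng.gamma(shape=4.0, scale=10.0, size=(36, 25))

      # Aggregate grid cells to a representative "index gauge" series, one value per year
      index_gauge = grid_extremes.mean(axis=1)

      # Disaggregation ratios: each cell's extreme relative to the index gauge, per year,
      # averaged over years so they can translate an index-gauge frequency curve
      # back into gridded precipitation-frequency maps.
      ratios = (grid_extremes / index_gauge[:, None]).mean(axis=0)

      # Example: translate a (hypothetical) 100-year index-gauge return level to the grid
      index_return_level_100yr = 85.0          # mm, stand-in for the fitted curve's output
      gridded_return_levels = index_return_level_100yr * ratios
      print(gridded_return_levels.round(1))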

  18. Impact of dose size in single fraction spatially fractionated (grid) radiotherapy for melanoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hualin, E-mail: hualin.zhang@northwestern.edu, E-mail: hualinzhang@yahoo.com; Zhong, Hualiang; Barth, Rolf F.

    2014-02-15

    Purpose: To evaluate the impact of dose size in single fraction, spatially fractionated (grid) radiotherapy for selectively killing infiltrated melanoma cancer cells of different tumor sizes, using different radiobiological models. Methods: A Monte Carlo technique was employed to calculate the 3D dose distribution of a commercially available megavoltage grid collimator in a 6 MV beam. The linear-quadratic (LQ) and modified linear quadratic (MLQ) models were used separately to evaluate the therapeutic outcome of a series of single fraction regimens that employed grid therapy to treat both acute and late responding melanomas of varying sizes. The dose prescription point was at the center of the tumor volume. Dose sizes ranging from 1 to 30 Gy at 100% dose line were modeled. Tumors were either touching the skin surface or having their centers at a depth of 3 cm. The equivalent uniform dose (EUD) to the melanoma cells and the therapeutic ratio (TR) were defined by comparing grid therapy with the traditional open debulking field. The clinical outcomes from recent reports were used to verify the authors’ model. Results: Dose profiles at different depths and 3D dose distributions in a series of 3D melanomas treated with grid therapy were obtained. The EUDs and TRs for all sizes of 3D tumors involved at different doses were derived through the LQ and MLQ models, and a practical equation was derived. The EUD was only one fifth of the prescribed dose. The TR was dependent on the prescribed dose and on the LQ parameters of both the interspersed cancer and normal tissue cells. The results from the LQ model were consistent with those of the MLQ model. At 20 Gy, the EUD and TR by the LQ model were 2.8% higher and 1% lower than by the MLQ, while at 10 Gy, the EUD and TR as defined by the LQ model were only 1.4% higher and 0.8% lower, respectively. The dose volume histograms of grid therapy for a 10 cm tumor showed different dosimetric characteristics from those of conventional radiotherapy. A significant portion of the tumor volume received a very large dose in grid therapy, which ensures significant tumor cell killing in these regions. Conversely, some areas received a relatively small dose, thereby sparing interspersed normal cells and increasing radiation tolerance. The radiobiology modeling results indicated that grid therapy could be useful for treating acutely responding melanomas infiltrating radiosensitive normal tissues. The theoretical model predictions were supported by the clinical outcomes. Conclusions: Grid therapy functions by selectively killing infiltrating tumor cells and concomitantly sparing interspersed normal cells. The TR depends on the radiosensitivity of the cell population, dose, tumor size, and location. Because the volumes of very high dose regions are small, the LQ model can be used safely to predict the clinical outcomes of grid therapy. When treating melanomas with a dose of 15 Gy or higher, single fraction grid therapy is clearly advantageous for sparing interspersed normal cells. The existence of a threshold fraction dose, which was found in the authors’ theoretical simulations, was confirmed by clinical observations.
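
    To make the equivalent uniform dose (EUD) idea concrete, here is a hedged sketch using the standard linear-quadratic survival model on a voxelized dose distribution. The alpha and beta values and the toy peak/valley dose array are placeholders, not the parameters or dose maps used in this study.

      import numpy as np
      from scipy.optimize import brentq

      alpha, beta = 0.3, 0.03        # illustrative LQ parameters (1/Gy, 1/Gy^2)

      def surviving_fraction(dose):
          """LQ cell survival for a single-fraction dose."""
          return np.exp(-(alpha * dose + beta * dose ** 2))

      def equivalent_uniform_dose(dose_voxels):
          """Uniform dose giving the same mean survival as the heterogeneous grid dose."""
          mean_sf = surviving_fraction(np.asarray(dose_voxels)).mean()
          # Solve alpha*D + beta*D^2 = -ln(mean SF) for D
          return brentq(lambda d: surviving_fraction(d) - mean_sf, 0.0, 1000.0)

      # Toy grid-therapy dose distribution: peaks under the grid holes, valleys between them
      dose_voxels = np.concatenate([np.full(30, 20.0), np.full(70, 2.0)])
      print(f"EUD = {equivalent_uniform_dose(dose_voxels):.2f} Gy")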

  19. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and an elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface that passes through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.
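
    As a hedged illustration of the Poisson-type grid generation system mentioned above, with the control functions set to zero (the homogeneous Winslow form), the sketch below relaxes the interior nodes of a 2D boundary-conforming grid starting from an algebraic initial grid. It is a toy example, not the authors' composite algebraic-elliptic method.

      import numpy as np

      def winslow_smooth(x, y, iterations=200):
          """Relax interior nodes with the zero-control-function elliptic
          grid generation equations (axis 1 = xi, axis 0 = eta)."""
          for _ in range(iterations):
              x_xi  = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2]);  y_xi  = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
              x_eta = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1]);  y_eta = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
              a = x_eta**2 + y_eta**2          # alpha
              g = x_xi**2 + y_xi**2            # gamma
              b = x_xi*x_eta + y_xi*y_eta      # beta
              cross_x = 0.25 * (x[2:, 2:] - x[2:, :-2] - x[:-2, 2:] + x[:-2, :-2])
              cross_y = 0.25 * (y[2:, 2:] - y[2:, :-2] - y[:-2, 2:] + y[:-2, :-2])
              x[1:-1, 1:-1] = (a*(x[1:-1, 2:] + x[1:-1, :-2]) + g*(x[2:, 1:-1] + x[:-2, 1:-1])
                               - 2.0*b*cross_x) / (2.0*(a + g))
              y[1:-1, 1:-1] = (a*(y[1:-1, 2:] + y[1:-1, :-2]) + g*(y[2:, 1:-1] + y[:-2, 1:-1])
                               - 2.0*b*cross_y) / (2.0*(a + g))
          return x, y

      # Algebraic starting grid on a channel with a curved upper boundary, boundaries held fixed
      xi, eta = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 11))
      x0 = xi.copy()
      y0 = eta * (1.0 + 0.2*np.sin(np.pi*xi))
      x, y = winslow_smooth(x0, y0)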

  20. Evaluating Anthropogenic Carbon Emissions in the Urban Salt Lake Valley through Inverse Modeling: Combining Long-term CO2 Observations and an Emission Inventory using a Multiple-box Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Catharine, D.; Strong, C.; Lin, J. C.; Cherkaev, E.; Mitchell, L.; Stephens, B. B.; Ehleringer, J. R.

    2016-12-01

    The rising level of atmospheric carbon dioxide (CO2), driven by anthropogenic emissions, is the leading cause of enhanced radiative forcing. Increasing societal interest in reducing anthropogenic greenhouse gas emissions calls for a computationally efficient method of evaluating anthropogenic CO2 source emissions, particularly if future mitigation actions are to be developed. A multiple-box atmospheric transport model was constructed in conjunction with a pre-existing fossil fuel CO2 emission inventory to estimate near-surface CO2 mole fractions and the associated anthropogenic CO2 emissions in the Salt Lake Valley (SLV) of northern Utah, a metropolitan area with a population of 1 million. A 15-year multi-site dataset of observed CO2 mole fractions is used in conjunction with the multiple-box model to develop an efficient method to constrain anthropogenic emissions through inverse modeling. Preliminary results of the multiple-box model CO2 inversion indicate that the pre-existing anthropogenic emission inventory may over-estimate CO2 emissions in the SLV. In addition, inversion results displaying a complex spatial and temporal distribution of urban emissions, including the effects of residential development and vehicular traffic, will be discussed.
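
    A hedged sketch of the kind of linear inversion a box model permits: if the model maps emissions to observed CO2 enhancements linearly, a scaling factor on the a priori inventory can be estimated by least squares. The single-box simplification, the unit-conversion constant, and all numbers are illustrative assumptions, not the Salt Lake Valley configuration.

      import numpy as np

      rng = np.random.default_rng(1)
      n_hours = 500

      # A priori (inventory-based) hourly emissions and hypothetical ventilation of one box
      prior_emissions = 80.0 + 20.0*np.sin(np.linspace(0, 20*np.pi, n_hours))   # mol/s
      ventilation = rng.uniform(0.5, 2.0, n_hours)                              # box air turnover, 1/hr
      background = 400.0                                                        # ppm

      def box_model(emissions):
          """Steady-state single-box enhancement: emission divided by ventilation,
          times an arbitrary unit-conversion constant (illustrative)."""
          return background + 0.05 * emissions / ventilation

      # "Observed" CO2 generated with true emissions 20% below the inventory, plus noise
      obs = box_model(0.8 * prior_emissions) + rng.normal(0.0, 1.0, n_hours)

      # Least-squares estimate of the scaling factor on the inventory
      enhancement_prior = box_model(prior_emissions) - background
      scale, *_ = np.linalg.lstsq(enhancement_prior[:, None], obs - background, rcond=None)
      print(f"posterior scaling on inventory: {scale[0]:.2f} (true value 0.80)")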

  1. Spatial Pattern of Cell Damage in Tissue from Heavy Ions

    NASA Technical Reports Server (NTRS)

    Ponomarev, Artem L.; Huff, Janice L.; Cucinotta, Francis A.

    2007-01-01

    A new Monte Carlo algorithm was developed that can model the passage of heavy ions in a tissue, and their action on the cellular matrix, for 2- or 3-dimensional cases. The build-up of secondaries such as projectile fragments, target fragments, other light fragments, and delta-rays was simulated. Cells were modeled as a cell culture monolayer in one example, where the data were taken directly from microscopy (2-d cell matrix). A simple model of tissue was given as abstract spheres closely approximating real cell geometries (3-d cell matrix), and a realistic model of tissue was proposed based on microscopy images. Image segmentation was used to identify cells in an irradiated cell culture monolayer, or in slices of tissue. The cells were then inserted into the model box pixel by pixel. In the case of cell monolayers (2-d), the image size may exceed the modeled box size. Such an image is moved with respect to the box in order to sample as many cells as possible. In the case of the simple tissue (3-d), the tissue box is modeled with periodic boundary conditions, which extrapolate the technique to macroscopic volumes of tissue. For real tissue, specific spatial patterns for cell apoptosis and necrosis are expected. The cell patterns were modeled based on action cross sections for apoptosis and necrosis estimated from BNL data and other experimental data.

  2. Particle in a box in PT-symmetric quantum mechanics and an electromagnetic analog

    NASA Astrophysics Data System (ADS)

    Dasarathy, Anirudh; Isaacson, Joshua P.; Jones-Smith, Katherine; Tabachnik, Jason; Mathur, Harsh

    2013-06-01

    In PT-symmetric quantum mechanics a fundamental principle of quantum mechanics, that the Hamiltonian must be Hermitian, is replaced by another set of requirements, including notably symmetry under PT, where P denotes parity and T denotes time reversal. Here we study the role of boundary conditions in PT-symmetric quantum mechanics by constructing a simple model that is the PT-symmetric analog of a particle in a box. The model has the usual particle-in-a-box Hamiltonian but boundary conditions that respect PT symmetry rather than Hermiticity. We find that for a broad class of PT-symmetric boundary conditions the model respects the condition of unbroken PT symmetry, namely, that the Hamiltonian and the symmetry operator PT have simultaneous eigenfunctions, implying that the energy eigenvalues are real. We also find that the Hamiltonian is self-adjoint under the PT-symmetric inner product. Thus we obtain a simple soluble model that fulfills all the requirements of PT-symmetric quantum mechanics. In the second part of this paper we formulate a variational principle for PT-symmetric quantum mechanics that is the analog of the textbook Rayleigh-Ritz principle. Finally we consider electromagnetic analogs of the PT-symmetric particle in a box. We show that the isolated particle in a box may be realized as a Fabry-Perot cavity between an absorbing medium and its conjugate gain medium. Coupling the cavity to an external continuum of incoming and outgoing states turns the energy levels of the box into sharp resonances. Remarkably we find that the resonances have a Breit-Wigner line shape in transmission and a Fano line shape in reflection; by contrast, in the corresponding Hermitian case the line shapes always have a Breit-Wigner form in both transmission and reflection.

  3. A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application

    PubMed Central

    Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang

    2018-01-01

    Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
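
    The paper's adaptive robust Kalman filter is not reproduced here; as a hedged, generic illustration of the idea of down-weighting suspect measurements, the sketch below shows a standard Kalman measurement update with a simple innovation-based inflation of the measurement covariance. All matrices and the threshold are invented for the example.

      import numpy as np

      def robust_kalman_update(x, P, z, H, R, threshold=2.0):
          """Generic Kalman measurement update with a crude robustness factor:
          when the normalized innovation is large, the measurement covariance is
          inflated so outlying measurements pull the state estimate less."""
          innovation = z - H @ x
          S = H @ P @ H.T + R
          nis = float(innovation @ np.linalg.inv(S) @ innovation)   # normalized innovation squared
          if nis > threshold:
              R = R * (nis / threshold)
              S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x_new = x + K @ innovation
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new

      # Toy 2-state example (e.g., two velocity errors observed directly)
      x = np.zeros(2); P = np.eye(2)
      H = np.eye(2);  R = 0.1*np.eye(2)
      z = np.array([0.05, 3.0])       # second component is an outlier
      print(robust_kalman_update(x, P, z, H, R))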

  4. Modeling Hidden Circuits: An Authentic Research Experience in One Lab Period

    ERIC Educational Resources Information Center

    Moore, J. Christopher; Rubbo, Louis J.

    2016-01-01

    Two wires exit a black box that has three exposed light bulbs connected together in an unknown configuration. The task for students is to determine the circuit configuration without opening the box. In the activity described in this paper, we navigate students through the process of making models, developing and conducting experiments that can…

  5. Analysis of Time-Series Quasi-Experiments. Final Report.

    ERIC Educational Resources Information Center

    Glass, Gene V.; Maguire, Thomas O.

    The objective of this project was to investigate the adequacy of statistical models developed by G. E. P. Box and G. C. Tiao for the analysis of time-series quasi-experiments: (1) The basic model developed by Box and Tiao is applied to actual time-series experiment data from two separate experiments, one in psychology and one in educational…

  6. The impact of model detail on power grid resilience measures

    NASA Astrophysics Data System (ADS)

    Auer, S.; Kleis, K.; Schultz, P.; Kurths, J.; Hellmann, F.

    2016-05-01

    Extreme events are a challenge to natural as well as man-made systems. For critical infrastructure like power grids, we need to understand their resilience against large disturbances. Recently, new measures of the resilience of dynamical systems have been developed in the complex system literature. Basin stability and survivability respectively assess the asymptotic and transient behavior of a system when subjected to arbitrary, localized but large perturbations in frequency and phase. To employ these methods to assess power grid resilience, we need to choose a level of model detail for the power grid. For the grid topology we considered the Scandinavian grid and an ensemble of power grids generated with a random growth model. So far the most popular model that has been studied is the classical swing equation model for the frequency response of generators and motors. In this paper we study a more sophisticated model of synchronous machines that also takes voltage dynamics into account, and compare it to the previously studied model. This model has been found in the engineering literature to give an accurate picture of the long-term evolution of synchronous machines for post-fault studies. We find evidence that some stable fixed points of the swing equation become unstable when we add voltage dynamics. If this occurs, the asymptotic behavior of the system can be dramatically altered, and basin stability estimates obtained with the swing equation can be dramatically wrong. We also find that the survivability does not change significantly when taking the voltage dynamics into account. Further, the limit-cycle-type asymptotic behaviour is strongly correlated with transient voltages that violate typical operational voltage bounds. Thus, transient voltage bounds are dominated by transient frequency bounds and play no large role for realistic parameters.
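
    For reference, a hedged sketch of the classical swing equation model mentioned above, integrated on a toy two-node network with forward-Euler steps. Parameter values are arbitrary, and the voltage-dynamic extension studied in the paper is not included.

      import numpy as np

      # Toy network: one generator (P > 0) and one consumer (P < 0), coupled by K
      P = np.array([1.0, -1.0])     # net power injections
      K = np.array([[0.0, 8.0],
                    [8.0, 0.0]])    # coupling (line susceptance) matrix
      alpha = 0.1                   # damping
      dt, steps = 0.01, 5000

      phi = np.array([0.0, 0.5])    # phases, perturbed away from the fixed point
      omega = np.array([0.0, 2.0])  # frequency deviations (large localized perturbation)

      for _ in range(steps):
          # Swing equations: d(phi_i)/dt = omega_i,
          # d(omega_i)/dt = P_i - alpha*omega_i - sum_j K_ij * sin(phi_i - phi_j)
          coupling = (K * np.sin(phi[:, None] - phi[None, :])).sum(axis=1)
          domega = P - alpha * omega - coupling
          phi = phi + dt * omega
          omega = omega + dt * domega

      print("final frequency deviations:", omega)   # near zero if the perturbation was absorbed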

  7. Global Gridded Crop Model Evaluation: Benchmarking, Skills, Deficiencies and Implications.

    NASA Technical Reports Server (NTRS)

    Muller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven

    2017-01-01

    Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for maize, wheat, rice and soybean to the Global Gridded Crop Model Intercomparison (GGCMI) of the Agricultural Model Intercomparison and Improvement Project (AgMIP). Simulation results are compared to reference data at global, national and grid cell scales and we evaluate model performance with respect to time series correlation, spatial correlation and mean bias. We find that global gridded crop models (GGCMs) show mixed skill in reproducing time series correlations or spatial patterns at the different spatial scales. Generally, maize, wheat and soybean simulations of many GGCMs are capable of reproducing larger parts of observed temporal variability (time series correlation coefficients (r) of up to 0.888 for maize, 0.673 for wheat and 0.643 for soybean at the global scale) but rice yield variability cannot be well reproduced by most models. Yield variability can be well reproduced for most major producing countries by many GGCMs and for all countries by at least some. A comparison with gridded yield data and a statistical analysis of the effects of weather variability on yield variability shows that the ensemble of GGCMs can explain more of the yield variability than an ensemble of regression models for maize and soybean, but not for wheat and rice. We identify future research needs in global gridded crop modeling and for all individual crop modeling groups. In the absence of a purely observation-based benchmark for model evaluation, we propose that the best performing crop model per crop and region establishes the benchmark for all others, and modelers are encouraged to investigate how crop model performance can be increased. We make our evaluation system accessible to all crop modelers so that other modeling groups can also test their model performance against the reference data and the GGCMI benchmark.
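
    A minimal sketch of the three evaluation measures named above (time series correlation, spatial correlation, mean bias) for a simulated and a reference yield field. The arrays are hypothetical stand-ins for GGCMI output and reference data, not the GGCMI datasets themselves.

      import numpy as np

      rng = np.random.default_rng(7)
      years, ny, nx = 30, 10, 12
      reference = rng.gamma(4.0, 1.5, size=(years, ny, nx))                     # t/ha, hypothetical
      simulated = 0.9*reference + rng.normal(0.0, 0.8, size=(years, ny, nx))    # a model with bias and noise

      # Time series correlation of spatially averaged yields
      ref_ts, sim_ts = reference.mean(axis=(1, 2)), simulated.mean(axis=(1, 2))
      time_corr = np.corrcoef(ref_ts, sim_ts)[0, 1]

      # Spatial correlation of the time-mean yield pattern
      spatial_corr = np.corrcoef(reference.mean(axis=0).ravel(),
                                 simulated.mean(axis=0).ravel())[0, 1]

      # Mean bias
      mean_bias = (simulated - reference).mean()

      print(f"r(time) = {time_corr:.3f}, r(space) = {spatial_corr:.3f}, bias = {mean_bias:+.2f} t/ha")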

  8. On the uncertainties associated with using gridded rainfall data as a proxy for observed

    NASA Astrophysics Data System (ADS)

    Tozer, C. R.; Kiem, A. S.; Verdon-Kidd, D. C.

    2012-05-01

    Gridded rainfall datasets are used in many hydrological and climatological studies, in Australia and elsewhere, including for hydroclimatic forecasting, climate attribution studies and climate model performance assessments. The attraction of the spatial coverage provided by gridded data is clear, particularly in Australia where the spatial and temporal resolution of the rainfall gauge network is sparse. However, the question that must be asked is whether it is suitable to use gridded data as a proxy for observed point data, given that gridded data is inherently "smoothed" and may not necessarily capture the temporal and spatial variability of Australian rainfall which leads to hydroclimatic extremes (i.e. droughts, floods). This study investigates this question through a statistical analysis of three monthly gridded Australian rainfall datasets - the Bureau of Meteorology (BOM) dataset, the Australian Water Availability Project (AWAP) and the SILO dataset. The results of the monthly, seasonal and annual comparisons show that not only are the three gridded datasets different relative to each other, there are also marked differences between the gridded rainfall data and the rainfall observed at gauges within the corresponding grids - particularly for extremely wet or extremely dry conditions. Also important is that the differences observed appear to be non-systematic. To demonstrate the hydrological implications of using gridded data as a proxy for gauged data, a rainfall-runoff model is applied to one catchment in South Australia initially using gauged data as the source of rainfall input and then gridded rainfall data. The results indicate a markedly different runoff response associated with each of the different sources of rainfall data. It should be noted that this study does not seek to identify which gridded dataset is the "best" for Australia, as each gridded data source has its pros and cons, as does gauged data. Rather, the intention is to quantify differences between various gridded data sources and how they compare with gauged data so that these differences can be considered and accounted for in studies that utilise these gridded datasets. Ultimately, if key decisions are going to be based on the outputs of models that use gridded data, an estimate (or at least an understanding) of the uncertainties relating to the assumptions made in the development of gridded data and how that gridded data compares with reality should be made.

  9. Grid Frequency Extreme Event Analysis and Modeling: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Florita, Anthony R; Clark, Kara; Gevorgian, Vahan

    Sudden losses of generation or load can lead to instantaneous changes in electric grid frequency and voltage. Extreme frequency events pose a major threat to grid stability. As renewable energy sources supply power to grids in increasing proportions, it becomes increasingly important to examine when and why extreme events occur to prevent destabilization of the grid. To better understand frequency events, including extrema, historic data were analyzed to fit probability distribution functions to various frequency metrics. Results showed that a standard Cauchy distribution fit the difference between the frequency nadir and prefault frequency (f_(C-A)) metric well, a standard Cauchy distribution fit the settling frequency (f_B) metric well, and a standard normal distribution fit the difference between the settling frequency and frequency nadir (f_(B-C)) metric very well. Results were inconclusive for the frequency nadir (f_C) metric, meaning it likely has a more complex distribution than those tested. This probabilistic modeling should facilitate more realistic modeling of grid faults.
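
    A hedged sketch of fitting the named distributions to a frequency metric with scipy; the synthetic sample stands in for the historic frequency-event records analyzed in the preprint, and the comparison statistic is just one reasonable choice.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      # Hypothetical stand-in for a frequency-event metric, e.g. settling minus nadir frequency (Hz)
      f_b_minus_c = rng.normal(loc=0.02, scale=0.01, size=2000)

      # Fit candidate distributions by maximum likelihood
      norm_params = stats.norm.fit(f_b_minus_c)
      cauchy_params = stats.cauchy.fit(f_b_minus_c)

      # Compare goodness of fit with a Kolmogorov-Smirnov statistic
      ks_norm = stats.kstest(f_b_minus_c, 'norm', args=norm_params).statistic
      ks_cauchy = stats.kstest(f_b_minus_c, 'cauchy', args=cauchy_params).statistic
      print(f"KS statistic: normal {ks_norm:.3f}, Cauchy {ks_cauchy:.3f}")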

  10. The study on the control strategy of micro grid considering the economy of energy storage operation

    NASA Astrophysics Data System (ADS)

    Ma, Zhiwei; Liu, Yiqun; Wang, Xin; Li, Bei; Zeng, Ming

    2017-08-01

    To optimize the running of a micro grid, guarantee the balance of electricity supply and demand, and promote the utilization of renewable energy, the control strategy of the micro grid energy storage system is studied. Firstly, a mixed integer linear programming model is established based on receding horizon control. Secondly, a modified cuckoo search algorithm is proposed to solve the model. Finally, a case study is carried out to examine the signal characteristics of the micro grid and batteries under the optimal control strategy, and the convergence of the modified cuckoo search algorithm is compared with other algorithms to verify the validity of the proposed model and method. The results show that different micro grid running targets can affect the control strategy of the energy storage system, which in turn affects the signal characteristics of the micro grid. Meanwhile, the convergence speed, computing time and economy of the modified cuckoo search algorithm are improved compared with the traditional cuckoo search algorithm and the differential evolution algorithm.
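
    The study formulates a mixed integer program solved with a modified cuckoo search; as a hedged, simplified illustration of one receding-horizon window, the sketch below schedules battery charging and discharging with a plain linear program (no integer variables, no metaheuristic). Prices, loads, PV output and battery parameters are all invented.

      import numpy as np
      from scipy.optimize import linprog

      T = 6                                                        # horizon length (hours)
      price  = np.array([0.10, 0.10, 0.30, 0.30, 0.30, 0.10])     # $/kWh
      demand = np.array([2.0, 2.0, 4.0, 4.0, 3.0, 2.0])            # kWh per hour
      pv     = np.array([0.0, 1.0, 3.0, 3.0, 1.0, 0.0])            # kWh per hour
      cap, soc0, rate, eta = 5.0, 2.5, 2.0, 0.95                    # battery parameters

      # Decision vector x = [grid imports (T), charge (T), discharge (T)], all >= 0
      cost = np.concatenate([price, 1e-4*np.ones(T), 1e-4*np.ones(T)])

      I = np.eye(T); L = np.tril(np.ones((T, T)))                   # L = cumulative-sum operator
      A_ub = np.vstack([
          np.hstack([-I,  I, -I]),                        # g + d - c >= demand - pv (meet the load)
          np.hstack([np.zeros((T, T)),  eta*L, -L/eta]),  # state of charge <= capacity
          np.hstack([np.zeros((T, T)), -eta*L,  L/eta]),  # state of charge >= 0
      ])
      b_ub = np.concatenate([pv - demand, (cap - soc0)*np.ones(T), soc0*np.ones(T)])
      bounds = [(0, None)]*T + [(0, rate)]*T + [(0, rate)]*T

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
      grid_import = res.x[:T]
      print("hourly grid imports:", grid_import.round(2))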

  11. Hydroacoustic propagation grids for the CTBT knowledge databases BBN technical memorandum W1303

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. Angell

    1998-05-01

    The Hydroacoustic Coverage Assessment Model (HydroCAM) has been used to develop components of the hydroacoustic knowledge database required by operational monitoring systems, particularly the US National Data Center (NDC). The database, which consists of travel time, amplitude correction and travel time standard deviation grids, is planned to support source location, discrimination and estimation functions of the monitoring network. The grids will also be used under the current BBN subcontract to support an analysis of the performance of the International Monitoring System (IMS) and national sensor systems. This report describes the format and contents of the hydroacoustic knowledgebase grids, and the procedures and model parameters used to generate these grids. Comparisons between the knowledge grids, measured data and other modeled results are presented to illustrate the strengths and weaknesses of the current approach. A recommended approach for augmenting the knowledge database with a database of expected spectral/waveform characteristics is provided in the final section of the report.

  12. Molgenis-impute: imputation pipeline in a box.

    PubMed

    Kanterakis, Alexandros; Deelen, Patrick; van Dijk, Freerk; Byelas, Heorhiy; Dijkstra, Martijn; Swertz, Morris A

    2015-08-19

    Genotype imputation is an important procedure in current genomic analysis such as genome-wide association studies, meta-analyses and fine mapping. Although high quality tools are available that perform the steps of this process, considerable effort and expertise are required to set up and run a best practice imputation pipeline, particularly for larger genotype datasets, where imputation has to scale out in parallel on computer clusters. Here we present MOLGENIS-impute, an 'imputation in a box' solution that seamlessly and transparently automates the setup and running of all the steps of the imputation process. These steps include genome build liftover (liftovering), genotype phasing with SHAPEIT2, quality control, sample and chromosomal chunking/merging, and imputation with IMPUTE2. MOLGENIS-impute builds on MOLGENIS-compute, a simple pipeline management platform for submission and monitoring of bioinformatics tasks in High Performance Computing (HPC) environments like local/cloud servers, clusters and grids. All the required tools, data and scripts are downloaded and installed in a single step. Researchers with diverse backgrounds and expertise have tested MOLGENIS-impute at different locations and imputed over 30,000 samples so far using the 1,000 Genomes Project and new Genome of the Netherlands data as the imputation reference. The tests have been performed on PBS/SGE clusters, cloud VMs and in a grid HPC environment. MOLGENIS-impute gives priority to the ease of setting up, configuring and running an imputation. It has minimal dependencies and wraps the pipeline in a simple command line interface, without sacrificing flexibility to adapt or limiting the options of underlying imputation tools. It does not require knowledge of a workflow system or programming, and is targeted at researchers who just want to apply best practices in imputation via simple commands. It is built on the MOLGENIS compute workflow framework to enable customization with additional computational steps or it can be included in other bioinformatics pipelines. It is available as open source from: https://github.com/molgenis/molgenis-imputation.

  13. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  14. DEAD-box Helicases as Integrators of RNA, Nucleotide and Protein Binding

    PubMed Central

    Putnam, Andrea A.

    2013-01-01

    DEAD-box helicases perform diverse cellular functions in virtually all steps of RNA metabolism from Bacteria to Humans. Although DEAD-box helicases share a highly conserved core domain, the enzymes catalyze a wide range of biochemical reactions. In addition to the well established RNA unwinding and corresponding ATPase activities, DEAD-box helicases promote duplex formation and displace proteins from RNA. They can also function as assembly platforms for larger ribonucleoprotein complexes, and as metabolite sensors. This review aims to provide a perspective on the diverse biochemical features of DEAD-box helicases and connections to structural information. We discuss these data in the context of a model that views the enzymes as integrators of RNA, nucleotide, and protein binding. PMID:23416748

  15. Current Grid Generation Strategies and Future Requirements in Hypersonic Vehicle Design, Analysis and Testing

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Periklis; Venkatapathy, Ethiraj; Prabhu, Dinesh; Loomis, Mark P.; Olynick, Dave; Arnold, James O. (Technical Monitor)

    1998-01-01

    Recent advances in computational power enable computational fluid dynamic modeling of increasingly complex configurations. A review of grid generation methodologies implemented in support of the computational work performed for the X-38 and X-33 is presented. In strategizing topological constructs and blocking structures, the factors considered are the geometric configuration, optimal grid size, numerical algorithms, accuracy requirements, physics of the problem at hand, computational expense, and the available computer hardware. Also addressed are grid refinement strategies, the effects of wall spacing, and convergence. The significance of the grid is demonstrated through a comparison of computational and experimental results for the aeroheating environment experienced by the X-38 vehicle. Special grid generation strategies for modeling control surface deflections and material mapping are also addressed.

  16. Study of a close-grid geodynamic measurement system

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The Clogeos (Close-Grid Geodynamic Measurement System) concept, a complete range or range-rate measurement terminal installed in a satellite in a near-polar orbit, with a network of relatively simple transponders or retro-reflectors on the ground at intervals of 0.1 to 10 km, was reviewed. The distortion of the grid was measured in three dimensions to accuracies of ±1 cm, with important applications to geodynamics, glaciology, and geodesy. User requirements are considered, and a typical grid, designed for earthquake prediction, was laid out along the San Andreas, Hayward, and Calaveras faults in southern California. The sensitivity of both range and range-rate measurements to small grid motions was determined by a simplified model. Variables in the model are satellite altitude and elevation angle plus grid displacements in latitude and height.

  17. Advanced grid-stiffened composite shells for applications in heavy-lift helicopter rotor blade spars

    NASA Astrophysics Data System (ADS)

    Narayanan Nampy, Sreenivas

    Modern rotor blades are constructed using composite materials to exploit their superior structural performance compared to metals. Helicopter rotor blade spars are conventionally designed as monocoque structures. Blades of the proposed Heavy Lift Helicopter are envisioned to be as heavy as 800 lbs when designed using the monocoque spar design. A new and innovative design is proposed to replace the conventional spar designs with a lightweight grid-stiffened composite shell. Composite stiffened shells have been known to provide an excellent strength-to-weight ratio and damage tolerance, with strong potential to reduce weight. Conventional stringer-rib stiffened construction is not suitable for rotor blade spars since it is limited in generating the high torsion stiffness required for aeroelastic stability of the rotor. As a result, off-axis (helical) stiffeners must be provided. This is a new design space where innovative modeling techniques are needed. The structural behavior of grid-stiffened structures under axial, bending, and torsion loads, typically experienced by rotor blades, needs to be accurately predicted. The overall objective of the present research is to develop and integrate the necessary design analysis tools to conduct a feasibility study of employing grid-stiffened shells for heavy-lift rotor blade spars. Upon evaluating the limitations of state-of-the-art analytical models in predicting the axial, bending, and torsion stiffness coefficients of grid and grid-stiffened structures, a new analytical model was developed. The new analytical model, based on the smeared stiffness approach, was developed employing the stiffness matrices of the constituent members of the grid structure, such as an arch, helical, or straight beam representing circumferential, helical, and longitudinal stiffeners. This analysis has the capability to model various stiffening configurations such as angle-grid, ortho-grid, and general-grid. Analyses were performed using an existing state-of-the-art model and the newly developed model to predict the torsion, bending, and axial stiffness of grid and grid-stiffened structures with various stiffening configurations. These predictions were compared to results generated using finite element analysis (FEA), showing excellent correlation (within 6%) over a range of parameters for grid and grid-stiffened structures, such as grid density, stiffener angle, and aspect ratio of the stiffener cross-section. Experimental results from cylindrical grid specimen testing were compared with analytical predictions using the new analysis. The new analysis predicted stiffness coefficients with nearly 7% error compared to FEA results. From the parametric studies conducted, it was observed that the previous state-of-the-art analysis, on the other hand, exhibited errors of the order of 39% for certain designs. Stability evaluations were also conducted by integrating the new analysis with established stability formulations. A design study was conducted to evaluate the potential weight savings of a simple grid-stiffened rotor blade spar structure compared to a baseline monocoque design. Various design constraints such as stiffness, strength, and stability were imposed. A manual search was conducted for design parameters such as stiffener density, stiffener angle, shell laminate, and stiffener aspect ratio that provide lightweight grid-stiffened designs compared to the baseline. It was found that a weight saving of 9.1% compared to the baseline is possible without violating any of the design constraints.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakob, Christian

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  19. Future requirements in surface modeling and grid generation

    NASA Technical Reports Server (NTRS)

    Cosner, Raymond R.

    1995-01-01

    The past ten years have seen steady progress in surface modeling procedures, and wholesale changes in grid generation technology. Today, it seems fair to state that a satisfactory grid can be developed to model nearly any configuration of interest. The issues at present focus on operational concerns such as cost and quality. Continuing evolution of the engineering process is placing new demands on the technologies of surface modeling and grid generation. In the evolution toward a multidisciplinary analysis-based design environment, methods developed for Computational Fluid Dynamics are finding acceptance in many additional applications. These two trends, the normal evolution of the process and a watershed shift toward concurrent and multidisciplinary analysis, will be considered in assessing current capabilities and needed technological improvements.

  20. Verification of sub-grid filtered drag models for gas-particle fluidized beds with immersed cylinder arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2014-04-23

    The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.
