Sample records for sub-grid physics parameters

  1. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information on sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
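    For illustration only, a minimal sketch of the scaling idea (not the code behind the PANGAEA data set): a Mualem-van Genuchten retention curve for the grid cell, with sub-grid curves reconstructed by Miller-type scaling of the matric head. All parameter values and the lognormal spread of scaling factors are hypothetical.

    ```python
    import numpy as np

    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        """van Genuchten water retention curve: water content theta for matric head h (cm)."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
        return theta_r + (theta_s - theta_r) * se

    # Hypothetical effective (grid-cell) parameters
    h = -np.logspace(0, 4, 100)                         # matric heads from -1 to -10^4 cm
    theta_eff = van_genuchten_theta(h, 0.05, 0.43, 0.04, 1.6)

    # Miller-type similar-media scaling: each sub-grid location i sees the effective
    # curve evaluated at a rescaled head, h_i = h / f_i (one common convention).
    scaling_factors = np.random.lognormal(mean=0.0, sigma=0.3, size=50)
    theta_subgrid = np.array([van_genuchten_theta(h / f, 0.05, 0.43, 0.04, 1.6)
                              for f in scaling_factors])
    print(theta_subgrid.shape)   # (50, 100): one reconstructed curve per sub-grid factor
    ```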

  2. GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no

    2013-11-10

    We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
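    A minimal sketch of the core idea (best-first expansion of grid cells by likelihood, stopping below a threshold), not the actual Snake implementation; the toy Gaussian log-likelihood and the two-parameter setup are assumptions for illustration.

    ```python
    import heapq
    import numpy as np

    def explore_grid(loglike, start, step, threshold):
        """Best-first exploration of a likelihood grid: evaluate cells in order of
        decreasing log-likelihood and stop once the best remaining cell falls more
        than `threshold` below the current maximum. `start` is an integer index tuple."""
        evaluated = {start: loglike(np.array(start) * step)}
        best = evaluated[start]
        heap = [(-best, start)]
        while heap:
            neg_ll, cell = heapq.heappop(heap)
            if -neg_ll < best - threshold:
                break                                  # everything left is negligible
            for dim in range(len(cell)):
                for d in (-1, 1):
                    nb = list(cell)
                    nb[dim] += d
                    nb = tuple(nb)
                    if nb in evaluated:
                        continue
                    ll = loglike(np.array(nb) * step)
                    evaluated[nb] = ll
                    best = max(best, ll)
                    heapq.heappush(heap, (-ll, nb))
        return evaluated

    # Toy 2-parameter Gaussian log-likelihood (illustrative only)
    cells = explore_grid(lambda x: -0.5 * float(np.sum(x ** 2)),
                         start=(0, 0), step=0.2, threshold=12.5)
    print(len(cells), "grid cells evaluated")
    ```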

  3. Investigation of CO2 capture using solid sorbents in a fluidized bed reactor: Cold flow hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Dietiker, Jean -Francois; Rogers, William

    2016-07-29

    Both experimental tests and numerical simulations were conducted to investigate the fluidization behavior of a solid CO2 sorbent with a mean diameter of 100 μm and density of about 480 kg/m3, which belongs to Geldart's Group A powder. A carefully designed fluidized bed facility was used to perform a series of experimental tests to study the flow hydrodynamics. Numerical simulations using the two-fluid model indicated that the grid resolution has a significant impact on the bed expansion and bubbling flow behavior. Due to limited computational resources, grid-independent results were not achieved using the standard models as far as the bed expansion is concerned. In addition, all simulations tended to under-predict the bubble size substantially. Effects of various model settings, including both numerical and physical parameters, have been investigated with no significant improvement observed. The latest filtered sub-grid drag model was then tested in the numerical simulations. Compared to the standard drag model, the filtered drag model with two markers not only predicted reasonable bed expansion but also yielded realistic bubbling behavior. As a result, a grid sensitivity study was conducted for the filtered sub-grid model and its applicability and limitations were discussed.

  4. Evidence for Sub-Chandrasekhar Mass Type Ia Supernovae from an Extensive Survey of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Goldstein, Daniel A.; Kasen, Daniel

    2018-01-01

    There are two classes of viable progenitors for normal Type Ia supernovae (SNe Ia): systems in which a white dwarf explodes at the Chandrasekhar mass (M_Ch), and systems in which a white dwarf explodes below the Chandrasekhar mass (sub-M_Ch). It is not clear which of these channels is dominant; observations and light-curve modeling have provided evidence for both. Here we use an extensive grid of 4500 time-dependent, multiwavelength radiation transport simulations to show that the sub-M_Ch model can reproduce the entirety of the width–luminosity relation, while the M_Ch model can only produce the brighter events (0.8 < ΔM15(B) < 1.55), implying that fast-declining SNe Ia come from sub-M_Ch explosions. We do not assume a particular theoretical paradigm for the progenitor or explosion mechanism, but instead construct parameterized models that vary the mass, kinetic energy, and compositional structure of the ejecta, thereby realizing a broad range of possible outcomes of white dwarf explosions. We provide fitting functions based on our large grid of detailed simulations that map observable properties of SNe Ia, such as peak brightness and light-curve width, to physical parameters such as the 56Ni mass and total ejected mass. These can be used to estimate the physical properties of observed SNe Ia.

  5. Enhanced light extraction from organic light-emitting devices using a sub-anode grid (Presentation Recording)

    NASA Astrophysics Data System (ADS)

    Qu, Yue; Slootsky, Michael; Forrest, Stephen

    2015-10-01

    We demonstrate a method for extracting waveguided light trapped in the organic and indium tin oxide layers of bottom-emission organic light emitting devices (OLEDs) using a patterned planar grid layer (sub-anode grid) between the anode and the substrate. The scattering layer consists of two transparent materials with different refractive indices on a period sufficiently large to avoid diffraction and other unwanted wavelength-dependent effects. The position of the sub-anode grid outside of the OLED active region allows complete freedom in varying its dimensions and the materials from which it is made without impacting the electrical characteristics of the device itself. Full-wave electromagnetic simulation is used to study the efficiency dependence on the refractive indices and geometric parameters of the grid. We show the fabrication process and characterization of OLEDs with two different grids: a buried sub-anode grid consisting of two dielectric materials, and an air sub-anode grid consisting of a dielectric material and gridline voids. Using a sub-anode grid, the substrate-plus-air-mode quantum efficiency of an OLED is enhanced from (33+/-2)% to (40+/-2)%, resulting in an increase in external quantum efficiency from (14+/-1)% to (18+/-1)%, with electrical characteristics identical to those of a conventional device. By varying the thickness of the electron transport layer (ETL) of sub-anode grid OLEDs, we find that all power launched into the waveguide modes is scattered into the substrate. We also demonstrate that a sub-anode grid combined with a thick ETL significantly reduces surface plasmon polaritons, resulting in an increase in substrate-plus-air modes by >50% compared with a conventional OLED. The wavelength, viewing-angle and molecular-orientation independence provided by this approach makes this an attractive and general solution to the problem of extracting waveguided light and reducing plasmon losses in OLEDs.

  6. Enhanced Elliptic Grid Generation

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2007-01-01

    An enhanced method of elliptic grid generation has been invented. Whereas prior methods require user input of certain grid parameters, this method provides for these parameters to be determined automatically. "Elliptic grid generation" signifies generation of generalized curvilinear coordinate grids through solution of elliptic partial differential equations (PDEs). Usually, such grids are fitted to bounding bodies and used in numerical solution of other PDEs like those of fluid flow, heat flow, and electromagnetics. Such a grid is smooth and has continuous first and second derivatives (and possibly also continuous higher-order derivatives), grid lines are appropriately stretched or clustered, and grid lines are orthogonal or nearly so over most of the grid domain. The source terms in the grid-generating PDEs (hereafter called "defining" PDEs) make it possible for the grid to satisfy requirements for clustering and orthogonality properties in the vicinity of specific surfaces in three dimensions or in the vicinity of specific lines in two dimensions. The grid parameters in question are decay parameters that appear in the source terms of the inhomogeneous defining PDEs. The decay parameters are characteristic lengths in exponential-decay factors that express how the influences of the boundaries decrease with distance from the boundaries. These terms govern the rates at which the spacing between adjacent grid lines changes with distance from nearby boundaries. Heretofore, users have arbitrarily specified decay parameters. However, the characteristic lengths are coupled with the strengths of the source terms, such that arbitrary specification could lead to conflicts among parameter values. Moreover, the manual insertion of decay parameters is cumbersome for static grids and infeasible for dynamically changing grids. In the present method, manual insertion and user specification of decay parameters are neither required nor allowed. Instead, the decay parameters are determined automatically as part of the solution of the defining PDEs. Depending on the shape of the boundary segments and the physical nature of the problem to be solved on the grid, the solution of the defining PDEs may provide for rates of decay to vary along and among the boundary segments and may lend itself to interpretation in terms of one or more physical quantities associated with the problem.
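    As an illustration of the exponential-decay source terms described above, a hedged sketch of Thompson-style control functions follows; the attractor locations, amplitudes, and decay parameters are hypothetical, and in the enhanced method the decay parameters would be determined automatically rather than specified as they are here.

    ```python
    import numpy as np

    def control_function_P(xi, eta, line_attractors, point_attractors):
        """Thompson-style source term P(xi, eta) for the defining elliptic PDEs.
        Grid lines are attracted toward coordinate lines xi_i and points (xi_j, eta_j);
        c and d play the role of the decay parameters discussed above."""
        P = np.zeros_like(xi, dtype=float)
        for xi_i, a, c in line_attractors:
            P -= a * np.sign(xi - xi_i) * np.exp(-c * np.abs(xi - xi_i))
        for (xi_j, eta_j), b, d in point_attractors:
            r = np.hypot(xi - xi_j, eta - eta_j)
            P -= b * np.sign(xi - xi_j) * np.exp(-d * r)
        return P

    # Hypothetical settings: cluster grid lines toward the boundary xi = 0 and one point
    xi, eta = np.meshgrid(np.linspace(0.0, 1.0, 41), np.linspace(0.0, 1.0, 41))
    P = control_function_P(xi, eta,
                           line_attractors=[(0.0, 100.0, 5.0)],
                           point_attractors=[((0.5, 0.0), 50.0, 10.0)])
    print(P.shape)
    ```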

  7. 3-D frequency-domain seismic wave modelling in heterogeneous, anisotropic media using a Gaussian quadrature grid approach

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, S. A.

    2011-01-01

    We present an extension of the 3-D spectral element method (SEM), called the Gaussian quadrature grid (GQG) approach, to simulate frequency-domain seismic waves in 3-D heterogeneous anisotropic media involving a complex free-surface topography and/or sub-surface geometry. It differs from the conventional SEM in two ways. The first is the replacement of the hexahedral element mesh with 3-D Gaussian quadrature abscissae to directly sample the physical properties or model parameters. This gives a point-gridded model which more exactly and easily matches the free-surface topography and/or any sub-surface interfaces. It does not require that the topography be highly smooth, a condition required in the curved finite difference method and the spectral method. The second is the derivation of a complex-valued elastic tensor expression for the perfectly matched layer (PML) model parameters for a general anisotropic medium, whose imaginary parts are determined by the PML formulation rather than having to choose a specific class of viscoelastic material. Furthermore, the new formulation is much simpler than the time-domain-oriented PML implementation. The specified imaginary parts of the density and elastic moduli are valid for arbitrary anisotropic media. We give two numerical solutions, in full-space homogeneous isotropic and anisotropic media respectively, and compare them with the analytical solutions, as well as showing the excellent effectiveness of the PML model parameters. In addition, we perform numerical simulations for 3-D seismic waves in a heterogeneous, anisotropic model incorporating a free-surface ridge topography, validate the results against the 2.5-D modelling solution, and demonstrate the capability of the approach to handle realistic situations.

  8. ANFIS-based modelling for coagulant dosage in drinking water treatment plant: a case study.

    PubMed

    Heddam, Salim; Bermad, Abdelmalek; Dechemi, Noureddine

    2012-04-01

    Coagulation is the most important stage in drinking water treatment processes for the maintenance of acceptable treated water quality and economic plant operation, and it involves many complex physical and chemical phenomena. Moreover, the coagulant dosing rate is non-linearly correlated to raw water characteristics such as turbidity, conductivity, pH, temperature, etc. As such, the coagulation reaction is hard or even impossible to control satisfactorily by conventional methods. Traditionally, jar tests are used to determine the optimum coagulant dosage. However, this is expensive and time-consuming and does not enable responses to changes in raw water quality in real time. Modelling can be used to overcome these limitations. In this study, an Adaptive Neuro-Fuzzy Inference System (ANFIS) was used for modelling of coagulant dosage in the drinking water treatment plant of Boudouaou, Algeria. Six on-line raw water quality variables (turbidity, conductivity, temperature, dissolved oxygen, ultraviolet absorbance, and pH), together with alum dosage, were used to build the coagulant dosage model. Two ANFIS-based neuro-fuzzy systems are presented: (1) a grid-partition-based fuzzy inference system (FIS), named ANFIS-GRID, and (2) a subtractive-clustering-based FIS, named ANFIS-SUB. The lowest root mean square error and highest correlation coefficient values were obtained with the ANFIS-SUB method using a first-order Sugeno-type inference. This study demonstrates that ANFIS-SUB outperforms ANFIS-GRID due to its simplicity in parameter selection and its fitness for the target problem.
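    A small sketch of the two evaluation metrics mentioned (root mean square error and correlation coefficient), using hypothetical dosage values; this is not the authors' data or code.

    ```python
    import numpy as np

    def rmse(obs, sim):
        """Root mean square error between observed and simulated dosages."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return float(np.sqrt(np.mean((obs - sim) ** 2)))

    def correlation(obs, sim):
        """Pearson correlation coefficient."""
        return float(np.corrcoef(obs, sim)[0, 1])

    # Hypothetical alum dosages (mg/L): jar-test reference vs. two model outputs
    observed   = [28.0, 31.5, 35.0, 40.2, 38.1]
    anfis_grid = [26.5, 33.0, 33.8, 43.0, 36.0]
    anfis_sub  = [27.8, 31.0, 35.6, 40.9, 37.5]
    for name, sim in [("ANFIS-GRID", anfis_grid), ("ANFIS-SUB", anfis_sub)]:
        print(name, "RMSE:", round(rmse(observed, sim), 2),
              "R:", round(correlation(observed, sim), 3))
    ```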

  9. A satellite simulator for TRMM PR applied to climate model simulations

    NASA Astrophysics Data System (ADS)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR in order to evaluate simulations with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) performed within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid-scale convective precipitation variability deduced from TRMM PR observations is carried out.
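    One simple, hypothetical way to spread a grid-mean convective precipitation flux over sub-columns, illustrating the kind of sub-grid distribution choice the results are sensitive to; this is not the COSP/MPI-ESM scheme.

    ```python
    import numpy as np

    def distribute_convective_precip(mean_flux, n_subcolumns, conv_fraction, rng):
        """Assign a grid-mean convective precipitation flux (mm/h) to sub-columns.
        A random subset covering `conv_fraction` of the cell receives all of the
        precipitation, scaled so the sub-column average equals the grid mean."""
        n_wet = max(1, int(round(conv_fraction * n_subcolumns)))
        flux = np.zeros(n_subcolumns)
        wet = rng.choice(n_subcolumns, size=n_wet, replace=False)
        flux[wet] = mean_flux * n_subcolumns / n_wet
        return flux

    rng = np.random.default_rng(0)
    sub = distribute_convective_precip(0.5, n_subcolumns=50, conv_fraction=0.1, rng=rng)
    print(sub.mean())   # equals the grid-mean flux of 0.5 mm/h by construction
    ```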

  10. Uncertainty quantification in LES of channel flow

    DOE PAGES

    Safta, Cosmin; Blaylock, Myra; Templeton, Jeremy; ...

    2016-07-12

    Here, we present a Bayesian framework for estimating joint densities for large eddy simulation (LES) sub-grid scale model parameters based on canonical forced isotropic turbulence direct numerical simulation (DNS) data. The framework accounts for noise in the independent variables, and we present alternative formulations for accounting for discrepancies between model and data. To generate probability densities for flow characteristics, posterior densities for sub-grid scale model parameters are propagated forward through LES of channel flow and compared with DNS data. Synthesis of the calibration and prediction results demonstrates that model parameters have an explicit filter width dependence and are highly correlated. Discrepancies between DNS and calibrated LES results point to additional model form inadequacies that need to be accounted for.
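    A hedged sketch of forward-propagating parameter uncertainty through an SGS closure, here using a Smagorinsky-type eddy viscosity and an assumed Gaussian posterior for the coefficient; the paper's calibrated model and posterior densities are not reproduced.

    ```python
    import numpy as np

    def smagorinsky_nu_t(c_s, delta, strain_rate_mag):
        """Eddy viscosity of the Smagorinsky closure: nu_t = (C_s * Delta)^2 * |S|."""
        return (c_s * delta) ** 2 * strain_rate_mag

    # Hypothetical posterior for C_s at a given filter width (e.g. from a calibration)
    rng = np.random.default_rng(1)
    c_s_samples = rng.normal(loc=0.16, scale=0.02, size=1000)

    # Forward-propagate the parameter uncertainty through the closure
    delta, strain = 0.05, 40.0            # filter width [m], |S| [1/s] (illustrative)
    nu_t = smagorinsky_nu_t(c_s_samples, delta, strain)
    print(np.percentile(nu_t, [5, 50, 95]))   # uncertainty band on the eddy viscosity
    ```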

  11. BayeSED: A GENERAL APPROACH TO FITTING THE SPECTRAL ENERGY DISTRIBUTION OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Yunkun; Han, Zhanwen, E-mail: hanyk@ynao.ac.cn, E-mail: zhanwenhan@ynao.ac.cn

    2014-11-01

    We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual and Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.

  12. A gridded hourly rainfall dataset for the UK applied to a national physically-based modelling system

    NASA Astrophysics Data System (ADS)

    Lewis, Elizabeth; Blenkinsop, Stephen; Quinn, Niall; Freer, Jim; Coxon, Gemma; Woods, Ross; Bates, Paul; Fowler, Hayley

    2016-04-01

    An hourly gridded rainfall product has great potential for use in many hydrological applications that require high temporal resolution meteorological data. One important example of this is flood risk management, with flooding in the UK highly dependent on sub-daily rainfall intensities amongst other factors. Knowledge of sub-daily rainfall intensities is therefore critical to designing hydraulic structures or flood defences to appropriate levels of service. Sub-daily rainfall rates are also essential inputs for flood forecasting, allowing for estimates of peak flows and stage for flood warning and response. In addition, an hourly gridded rainfall dataset has significant potential for practical applications such as better representation of extremes and pluvial flash flooding, validation of high resolution climate models and improving the representation of sub-daily rainfall in weather generators. A new 1km gridded hourly rainfall dataset for the UK has been created by disaggregating the daily Gridded Estimates of Areal Rainfall (CEH-GEAR) dataset using comprehensively quality-controlled hourly rain gauge data from over 1300 observation stations across the country. Quality control measures include identification of frequent tips, daily accumulations and dry spells, comparison of daily totals against the CEH-GEAR daily dataset, and nearest neighbour checks. The quality control procedure was validated against historic extreme rainfall events and the UKCP09 5km daily rainfall dataset. General use of the dataset has been demonstrated by testing the sensitivity of a physically-based hydrological modelling system for Great Britain to the distribution and rates of rainfall and potential evapotranspiration. Of the sensitivity tests undertaken, the largest improvements in model performance were seen when an hourly gridded rainfall dataset was combined with potential evapotranspiration disaggregated to hourly intervals, with 61% of catchments showing an increase in NSE between observed and simulated streamflows as a result of more realistic sub-daily meteorological forcing.
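    A minimal sketch of the disaggregation idea (scaling a daily gridded total by the hourly profile of a nearby quality-controlled gauge); the nearest-gauge choice, the uniform fallback, and the numbers are assumptions, not the CEH-GEAR-hourly algorithm.

    ```python
    import numpy as np

    def disaggregate_daily(daily_total, gauge_hourly):
        """Split a daily gridded rainfall total (mm) into 24 hourly values using
        the temporal profile of a nearby hourly rain gauge."""
        gauge_hourly = np.asarray(gauge_hourly, dtype=float)
        gauge_total = gauge_hourly.sum()
        if gauge_total <= 0.0:
            return np.full(24, daily_total / 24.0)   # fall back to a uniform split
        return daily_total * gauge_hourly / gauge_total

    # Hypothetical gauge record: dry morning, heavy late-afternoon convection
    gauge = np.zeros(24)
    gauge[15:19] = [1.2, 4.0, 2.5, 0.8]
    hourly = disaggregate_daily(daily_total=12.0, gauge_hourly=gauge)
    print(hourly.sum())   # 12.0 mm, preserved by construction
    ```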

  13. DirtyGrid I: 3D Dust Radiative Transfer Modeling of Spectral Energy Distributions of Dusty Stellar Populations

    NASA Astrophysics Data System (ADS)

    Law, Ka-Hei; Gordon, Karl D.; Misselt, Karl A.

    2018-06-01

    Understanding the properties of stellar populations and interstellar dust has important implications for galaxy evolution. In normal star-forming galaxies, stars and the interstellar medium dominate the radiation from ultraviolet (UV) to infrared (IR). In particular, interstellar dust absorbs and scatters UV and optical light, re-emitting the absorbed energy in the IR. This is a strongly nonlinear process that makes independent studies of the UV-optical and IR susceptible to large uncertainties and degeneracies. Over the years, UV to IR spectral energy distribution (SED) fitting utilizing varying approximations has revealed important results on the stellar and dust properties of galaxies. Yet the approximations limit the fidelity of the derived properties. Sufficient computing power is now available that it is possible to remove these approximations and map out the landscape of galaxy SEDs using full dust radiative transfer. This improves upon previous work by directly connecting the UV, optical, and IR through dust grain physics. We present the DIRTYGrid, a grid of radiative transfer models of SEDs of dusty stellar populations in galactic environments designed to span the full range of physical parameters of galaxies. Using the stellar and gas radiation input from the stellar population synthesis model PEGASE, our radiative transfer model DIRTY self-consistently computes the UV to far-IR/sub-mm SEDs for each set of parameters in our grid. DIRTY computes the dust absorption, scattering, and emission from the local radiation field and a dust grain model, thereby physically connecting the UV-optical to the IR. We describe the computational method and explain the choices of parameters in DIRTYGrid. The computation took millions of CPU hours on supercomputers, and the SEDs produced are an invaluable tool for fitting multi-wavelength data sets. We provide the complete set of SEDs in an online table.

  14. Validation of Land-Surface Mosaic Heterogeneity in the GEOS DAS

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Molod, Andrea; Houser, Paul R.; Schubert, Siegfried

    1999-01-01

    The Mosaic Land-surface Model (LSM) has been included in the current GEOS Data Assimilation System (DAS). The LSM uses a more advanced representation of physical processes than previous versions of the GEOS DAS, including the representation of sub-grid heterogeneity of the land-surface through the Mosaic approach. As a first approximation, Mosaic assumes that all similar surface types within a grid-cell can be lumped together as a single 'tile'. Within one GCM grid-cell, there might be 1 - 5 different tiles or surface types. All tiles are subjected to the grid-scale forcing (radiation, air temperature and specific humidity, and precipitation), and the sub-grid variability is a function of the tile characteristics. In this paper, we validate the LSM sub-grid scale variability (tiles) using a variety of surface observing stations from the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) Program. One of the primary goals of SGP ARM is to study the variability of atmospheric radiation within a GCM grid-cell. Enough surface data has been collected by ARM to extend this goal to sub-grid variability of the land-surface energy and water budgets. The time period of this study is the Summer of 1998 (June 1 - September 1). The ARM site data consists of surface meteorology, energy flux (eddy correlation and Bowen ratio), and soil water observations spread over an area similar to the size of a GCM grid-cell. Various ARM stations are described as wheat and alfalfa crops, pasture and range land. The LSM tiles considered at the grid space (2° x 2.5°) nearest the ARM site include grassland, deciduous forests, bare soil and dwarf trees. Surface energy and water balances for each tile type are compared with observations. Furthermore, we will discuss the land-surface sub-grid variability of both the ARM observations and the DAS.
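    For illustration, a Mosaic-style area-weighted aggregation of tile fluxes to a grid-cell mean; the tile fractions and flux values are hypothetical.

    ```python
    def gridcell_mean_flux(tile_fractions, tile_fluxes):
        """Area-weighted grid-cell mean of a surface flux (e.g. latent heat, W/m2)
        computed from Mosaic-style tiles that each see the same grid-scale forcing."""
        assert abs(sum(tile_fractions) - 1.0) < 1e-6
        return sum(f * q for f, q in zip(tile_fractions, tile_fluxes))

    # Hypothetical tile make-up of one grid cell near the ARM SGP site
    fractions = [0.55, 0.25, 0.15, 0.05]          # grassland, forest, bare soil, dwarf trees
    latent_heat = [310.0, 260.0, 120.0, 180.0]    # W/m2 per tile
    print(gridcell_mean_flux(fractions, latent_heat))
    ```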

  15. X-ray Reflected Spectra from Accretion Disk Models. III. A Complete Grid of Ionized Reflection Calculations

    NASA Technical Reports Server (NTRS)

    Garcia, J.; Dauser, T.; Reynolds, C. S.; Kallman, T. R.; McClintock, J. E.; Wilms, J.; Eikmann, W.

    2013-01-01

    We present a new and complete library of synthetic spectra for modeling the component of emission that is reflected from an illuminated accretion disk. The spectra were computed using an updated version of our code xillver that incorporates new routines and a richer atomic data base. We offer in the form of a table model an extensive grid of reflection models that cover a wide range of parameters. Each individual model is characterized by the photon index Gamma of the illuminating radiation, the ionization parameter xi at the surface of the disk (i.e., the ratio of the X-ray flux to the gas density), and the iron abundance A_Fe relative to the solar value. The ranges of the parameters covered are: 1.2 <= Gamma <= 3.4, 1 <= xi <= 10^4, and 0.5 <= A_Fe <= 10. These ranges capture the physical conditions typically inferred from observations of active galactic nuclei, and also stellar-mass black holes in the hard state. This library is intended for use when the thermal disk flux is faint compared to the incident power-law flux. The models are expected to provide an accurate description of the Fe K emission line, which is the crucial spectral feature used to measure black hole spin. A total of 720 reflection spectra are provided in a single FITS file suitable for the analysis of X-ray observations via the atable model in xspec. Detailed comparisons with previous reflection models illustrate the improvements incorporated in this version of xillver.

  16. The importance of topography controlled sub-grid process heterogeneity in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.

    2015-12-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge based model constraints reduces model uncertainty, and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 % respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer function based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.
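    A sketch of the overall performance measure referred to above, the Euclidean distance to the optimal model in the space of hydrological-signature scores; the scaling of each score to [0, 1] and the example values are illustrative assumptions.

    ```python
    import numpy as np

    def euclidean_distance_to_optimum(signature_scores):
        """Overall performance measure: Euclidean distance, in the space of
        per-signature performance metrics scaled to [0, 1] (1 = perfect),
        from the point where every signature is reproduced perfectly."""
        scores = np.asarray(signature_scores, dtype=float)
        return float(np.sqrt(np.sum((1.0 - scores) ** 2)))

    # Hypothetical scores for, e.g., runoff ratio, flow duration curve, low flows
    base_case   = [0.85, 0.70, 0.55]
    constrained = [0.86, 0.74, 0.68]   # improved low-flow statistics
    print(euclidean_distance_to_optimum(base_case),
          euclidean_distance_to_optimum(constrained))
    ```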

  17. The importance of topography-controlled sub-grid process heterogeneity and semi-quantitative prior constraints in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-03-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  18. Use of upscaled elevation and surface roughness data in two-dimensional surface water models

    USGS Publications Warehouse

    Hughes, J.D.; Decker, J.D.; Langevin, C.D.

    2011-01-01

    In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy, reducing model run-times, and how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
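    A hedged sketch contrasting cell-block averaging with cell-face averaging of a high-resolution elevation grid; the arithmetic mean over the fine cells adjacent to a face is an assumed operator for illustration, not necessarily the one used by the authors.

    ```python
    import numpy as np

    def block_average(z_fine, factor):
        """Cell-block average: mean elevation over each factor x factor block."""
        ny, nx = z_fine.shape
        return z_fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

    def face_average_east(z_fine, factor):
        """Cell-face average along the eastern face of each coarse cell: mean of the
        fine cells in the column adjacent to that face (one value per coarse cell)."""
        ny, nx = z_fine.shape
        east_cols = z_fine[:, factor - 1::factor]          # last fine column of each block
        return east_cols.reshape(ny // factor, factor, -1).mean(axis=1)

    # Hypothetical 8x8 elevation field with a narrow low channel along one column
    z = np.full((8, 8), 5.0)
    z[:, 3] = 1.0
    print(block_average(z, 4))        # the channel is smeared into the block mean
    print(face_average_east(z, 4))    # the low channel elevation survives at the interface
    ```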

  19. The power of structural modeling of sub-grid scales - application to astrophysical plasmas

    NASA Astrophysics Data System (ADS)

    Georgiev Vlaykov, Dimitar; Grete, Philipp

    2015-08-01

    In numerous astrophysical phenomena the dynamical range can span tens of orders of magnitude. This implies more than billions of degrees of freedom and precludes direct numerical simulations from ever being a realistic possibility. A physical model is necessary to capture the unresolved physics occurring at the sub-grid scales (SGS). Structural modeling is a powerful concept which renders itself applicable to various physical systems. It stems from the idea of capturing the structure of the SGS terms in the evolution equations based on the scale-separation mechanism and independently of the underlying physics. It originates in the hydrodynamics field of large-eddy simulations. We apply it to the study of astrophysical MHD. Here, we present a non-linear SGS model for compressible MHD turbulence. The model is validated a priori at the tensorial, vectorial and scalar levels against a set of high-resolution simulations of stochastically forced homogeneous isotropic turbulence in a periodic box. The parameter space spans 2 decades in sonic Mach numbers (0.2 - 20) and approximately one decade in magnetic Mach number (~1-8). This covers the super-Alfvenic sub-, trans-, and hyper-sonic regimes, with a range of plasma beta from 0.05 to 25. The Reynolds number is of the order of 10^3. At the tensor level, the model components correlate well with the turbulence ones, at the level of 0.8 and above. Vectorially, the alignment with the true SGS terms is encouraging, with more than 50% of the model within 30° of the data. At the scalar level we look at the dynamics of the SGS energy and cross-helicity. The corresponding SGS flux terms have median correlations of ~0.8. Physically, the model represents well the two directions of the energy cascade. In comparison, traditional functional models exhibit poor local correlations with the data already at the scalar level. Vectorially, they are indifferent to the anisotropy of the SGS terms. They often struggle to represent the energy backscatter from small to large scales as well as the turbulent dynamo mechanism. Overall, the new model surpasses the traditional ones in all tests by a large margin.

  20. PVWatts Version 1 Technical Reference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2013-10-01

    The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
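    A much-simplified, hypothetical sketch of a PVWatts-style calculation chain (plane-of-array irradiance to DC power with a temperature coefficient, then a lumped derate to AC); the coefficients here are illustrative only, and the actual sub-models and assumptions are those documented in the technical reference.

    ```python
    def pvwatts_like_power(poa_irradiance, cell_temp, dc_rating_kw,
                           gamma=-0.005, derate=0.77, t_ref=25.0):
        """Very simplified PVWatts-style estimate of AC power (kW) from
        plane-of-array irradiance (W/m2) and cell temperature (deg C).
        gamma is a temperature coefficient of power; derate lumps the inverter
        and other system losses into one factor (values are illustrative)."""
        dc_power = dc_rating_kw * (poa_irradiance / 1000.0) * (1.0 + gamma * (cell_temp - t_ref))
        return max(0.0, dc_power * derate)

    # One hypothetical hour: bright, warm conditions for a 4 kW (DC) system
    print(round(pvwatts_like_power(poa_irradiance=850.0, cell_temp=45.0,
                                   dc_rating_kw=4.0), 2), "kW AC")
    ```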

  1. X-RAY REFLECTED SPECTRA FROM ACCRETION DISK MODELS. III. A COMPLETE GRID OF IONIZED REFLECTION CALCULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, J.; McClintock, J. E.; Dauser, T.

    2013-05-10

    We present a new and complete library of synthetic spectra for modeling the component of emission that is reflected from an illuminated accretion disk. The spectra were computed using an updated version of our code XILLVER that incorporates new routines and a richer atomic database. We offer in the form of a table model an extensive grid of reflection models that cover a wide range of parameters. Each individual model is characterized by the photon index Gamma of the illuminating radiation, the ionization parameter xi at the surface of the disk (i.e., the ratio of the X-ray flux to the gas density), and the iron abundance A_Fe relative to the solar value. The ranges of the parameters covered are 1.2 <= Gamma <= 3.4, 1 <= xi <= 10^4, and 0.5 <= A_Fe <= 10. These ranges capture the physical conditions typically inferred from observations of active galactic nuclei, and also stellar-mass black holes in the hard state. This library is intended for use when the thermal disk flux is faint compared to the incident power-law flux. The models are expected to provide an accurate description of the Fe K emission line, which is the crucial spectral feature used to measure black hole spin. A total of 720 reflection spectra are provided in a single FITS file (http://hea-www.cfa.harvard.edu/~javier/xillver/) suitable for the analysis of X-ray observations via the atable model in XSPEC. Detailed comparisons with previous reflection models illustrate the improvements incorporated in this version of XILLVER.

  2. Towards improved parameterization of a macroscale hydrologic model in a discontinuous permafrost boreal forest ecosystem

    DOE PAGES

    Endalamaw, Abraham; Bolton, W. Robert; Young-Robertson, Jessica M.; ...

    2017-09-14

    Modeling hydrological processes in the Alaskan sub-arctic is challenging because of the extreme spatial heterogeneity in soil properties and vegetation communities. Nevertheless, modeling and predicting hydrological processes is critical in this region due to its vulnerability to the effects of climate change. Coarse-spatial-resolution datasets used in land surface modeling pose a new challenge in simulating the spatially distributed and basin-integrated processes since these datasets do not adequately represent the small-scale hydrological, thermal, and ecological heterogeneity. The goal of this study is to improve the prediction capacity of mesoscale to large-scale hydrological models by introducing a small-scale parameterization scheme, which better represents the spatial heterogeneity of soil properties and vegetation cover in the Alaskan sub-arctic. The small-scale parameterization schemes are derived from observations and a sub-grid parameterization method in the two contrasting sub-basins of the Caribou Poker Creek Research Watershed (CPCRW) in Interior Alaska: one nearly permafrost-free (LowP) sub-basin and one permafrost-dominated (HighP) sub-basin. The sub-grid parameterization method used in the small-scale parameterization scheme is derived from the watershed topography. We found that observed soil thermal and hydraulic properties – including the distribution of permafrost and vegetation cover heterogeneity – are better represented in the sub-grid parameterization method than the coarse-resolution datasets. Parameters derived from the coarse-resolution datasets and from the sub-grid parameterization method are implemented into the variable infiltration capacity (VIC) mesoscale hydrological model to simulate runoff, evapotranspiration (ET), and soil moisture in the two sub-basins of the CPCRW. Simulated hydrographs based on the small-scale parameterization capture most of the peak and low flows, with similar accuracy in both sub-basins, compared to simulated hydrographs based on the coarse-resolution datasets. On average, the small-scale parameterization scheme improves the total runoff simulation by up to 50 % in the LowP sub-basin and by up to 10 % in the HighP sub-basin from the large-scale parameterization. This study shows that the proposed sub-grid parameterization method can be used to improve the performance of mesoscale hydrological models in the Alaskan sub-arctic watersheds.

  4. Black hole feeding and feedback: the physics inside the `sub-grid'

    NASA Astrophysics Data System (ADS)

    Negri, A.; Volonteri, M.

    2017-05-01

    Black holes (BHs) are believed to be a key ingredient of galaxy formation. However, the galaxy-BH interplay is challenging to study due to the large dynamical range and complex physics involved. As a consequence, hydrodynamical cosmological simulations normally adopt sub-grid models to track the unresolved physical processes, in particular BH accretion; usually the spatial scale where the BH dominates the hydrodynamical processes (the Bondi radius) is unresolved, and an approximate Bondi-Hoyle accretion rate is used to estimate the growth of the BH. By comparing hydrodynamical simulations at different resolutions (300, 30, 3 pc) using a Bondi-Hoyle approximation to sub-parsec runs with non-parametrized accretion, our aim is to probe how well an approximated Bondi accretion is able to capture the BH accretion physics and the subsequent feedback on the galaxy. We analyse an isolated galaxy simulation that includes cooling, star formation, Type Ia and Type II supernovae, BH accretion and active galactic nuclei feedback (radiation pressure, Compton heating/cooling) where mass, momentum and energy are deposited in the interstellar medium through conical winds. We find that on average the approximated Bondi formalism can lead to both over- and underestimations of the BH growth, depending on resolution and on how the variables entering into the Bondi-Hoyle formalism are calculated.
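    For reference, the Bondi-Hoyle(-Lyttleton) rate on which such sub-grid accretion prescriptions are typically built has the form below; the dimensionless boost factor α and the Eddington cap are common, model-dependent additions and are not necessarily the exact choices made in this paper.

    ```latex
    \dot{M}_{\rm BHL} \;=\; \alpha\,
    \frac{4\pi G^{2} M_{\rm BH}^{2}\,\bar{\rho}}
         {\left(\bar{c}_{s}^{2} + \bar{v}^{2}\right)^{3/2}},
    \qquad
    \dot{M}_{\rm BH} \;=\; \min\!\left(\dot{M}_{\rm BHL},\,\dot{M}_{\rm Edd}\right)
    ```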

  5. PVWatts Version 5 Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dobos, A. P.

    2014-09-01

    The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.

  6. MAFIA Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiland, T.; Bartsch, M.; Becker, U.

    1997-02-01

    MAFIA Version 4.0 is an almost completely new version of the general-purpose electromagnetic simulator that has been known for 13 years. The major improvements concern the new graphical user interface based on state-of-the-art technology as well as a series of new solvers for new physics problems. MAFIA now covers heat distribution, electro-quasistatics, S-parameters in the frequency domain, particle beam tracking in linear accelerators, acoustics and even elastodynamics. The solvers that were available in earlier versions have also been improved and/or extended, for example the complex eigenmode solver and the 2D-3D coupled PIC solvers. Time domain solvers have new waveguide boundary conditions with an extremely low reflection even near the cutoff frequency, concentrated elements are available, as well as a variety of signal processing options. Probably the most valuable addition is the recursive sub-grid capability that enables modeling of very small details in large structures. © 1997 American Institute of Physics.

  7. Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble

    NASA Astrophysics Data System (ADS)

    Jankov, I.

    2017-12-01

    It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized, and ensemble members were produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and Mellor-Yamada-Nakanishi-Niino (MYNN) Planetary Boundary Layer (PBL) schemes. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity and to perturbing soil moisture at the initial time. First, iterative testing was conducted to assess the initial performance of several configuration settings (e.g., a variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day time period was run and more robust statistics were gathered. SKEB and SPPT were included in additional retrospective tests to assess the impact of using all three stochastic approaches to address model uncertainty. Results from the stochastic perturbation testing were compared to a baseline multi-physics control ensemble. For probabilistic forecast performance, the Model Evaluation Tools (MET) verification package was used.
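    A minimal sketch of the SPP idea: a spatially smooth, temporally correlated random pattern that multiplies a physics parameter. The AR(1)/Gaussian-smoothing recipe and all numbers below are assumptions for illustration, not the HRRR implementation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spp_pattern(shape, length_scale, tau, n_steps, sigma, rng):
        """Generate a time series of smooth 2-D perturbation patterns r(x, y, t)
        with spatial de-correlation set by `length_scale` (grid points) and
        temporal de-correlation time `tau` (steps), via an AR(1) update."""
        phi = np.exp(-1.0 / tau)
        r = gaussian_filter(rng.standard_normal(shape), length_scale)
        r *= sigma / r.std()
        patterns = [r]
        for _ in range(n_steps - 1):
            noise = gaussian_filter(rng.standard_normal(shape), length_scale)
            noise *= sigma / noise.std()
            r = phi * r + np.sqrt(1.0 - phi ** 2) * noise
            patterns.append(r)
        return np.stack(patterns)

    rng = np.random.default_rng(7)
    r = spp_pattern(shape=(60, 80), length_scale=8, tau=20, n_steps=10, sigma=0.3, rng=rng)
    mixing_length_base = 50.0                    # hypothetical PBL parameter value [m]
    perturbed = mixing_length_base * (1.0 + r)   # multiplicative SPP-style perturbation
    ```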

  8. A Study of the Ozone Formation by Ensemble Back Trajectory-process Analysis Using the Eta-CMAQ Forecast Model over the Northeastern U.S. During the 2004 ICARTT Period

    EPA Science Inventory

    The integrated process rates (IPR) estimated by the Eta-CMAQ model at grid cells along the trajectory of the air mass transport path were analyzed to quantitatively investigate the relative importance of physical and chemical processes for O3 formation and evolution ov...

  9. Influence of grid resolution, parcel size and drag models on bubbling fluidized bed simulation

    DOE PAGES

    Lu, Liqiang; Konan, Arthur; Benyahia, Sofiane

    2017-06-02

    In this paper, a bubbling fluidized bed is simulated with different numerical parameters, such as grid resolution and parcel size. We also examined the effect of using two homogeneous drag correlations and a heterogeneous drag correlation based on the energy minimization method. A fast and reliable bubble detection algorithm was developed based on connected component labeling. The radial and axial solids volume fraction profiles are compared with experimental data and previous simulation results. These results show a significant influence of drag models on bubble size and voidage distributions and a much weaker dependence on numerical parameters. With a heterogeneous drag model that accounts for sub-scale structures, the void fraction in the bubbling fluidized bed can be well captured with a coarse grid and large computational parcels. Refining the CFD grid and reducing the parcel size can improve the simulation results, but with a large increase in computational cost.
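    A hedged sketch of bubble detection by connected-component labeling on a voidage field; the 0.7 threshold, the scipy-based implementation, and the cell size are assumptions, not the authors' specific algorithm.

    ```python
    import numpy as np
    from scipy import ndimage

    def detect_bubbles(void_fraction, threshold=0.7, cell_area=1.0e-6):
        """Label connected regions where the gas volume fraction exceeds `threshold`
        and return an equivalent circular diameter (m) for each detected bubble."""
        bubbles = void_fraction > threshold
        labels, n = ndimage.label(bubbles)
        sizes = ndimage.sum(bubbles, labels, index=range(1, n + 1))  # cells per bubble
        areas = np.asarray(sizes) * cell_area
        return np.sqrt(4.0 * areas / np.pi)

    # Hypothetical voidage field with one roughly circular bubble
    vf = np.full((100, 100), 0.45)
    yy, xx = np.mgrid[0:100, 0:100]
    vf[(yy - 50) ** 2 + (xx - 50) ** 2 < 15 ** 2] = 0.85
    print(detect_bubbles(vf))   # one bubble, diameter ~0.03 m for 1 mm2 cells
    ```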

  10. Wave Resource Characterization Using an Unstructured Grid Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Wei-Cheng; Yang, Zhaoqing; Wang, Taiping

    This paper presents a modeling study conducted on the central Oregon coast for wave resource characterization using the unstructured-grid SWAN model coupled with a nested-grid WWIII model. The flexibility of models of various spatial resolutions and the effects of open-boundary conditions simulated by a nested-grid WWIII model with different physics packages were evaluated. The model results demonstrate the advantage of the unstructured-grid modeling approach for flexible model resolution and good model skill in simulating the six wave resource parameters recommended by the International Electrotechnical Commission in comparison to the observed data in Year 2009 at National Data Buoy Center Buoy 46050. Notably, spectral analysis indicates that the ST4 physics package improves upon the model skill of the ST2 physics package for predicting wave power density for large waves, which is important for wave resource assessment, device load calculation, and risk management. In addition, bivariate distributions show the simulated sea state of maximum occurrence with the ST4 physics package matched the observed data better than that with the ST2 physics package. This study demonstrated that the unstructured-grid wave modeling approach, driven by the nested-grid regional WWIII outputs with the ST4 physics package, can efficiently provide accurate wave hindcasts to support wave resource characterization. Our study also suggests that wind effects need to be considered if the dimension of the model domain is greater than approximately 100 km, or O(10^2 km).
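    One of the six IEC wave resource parameters, the omnidirectional wave power density, can be estimated from the significant wave height and energy period with the deep-water expression sketched below; the deep-water assumption is noted in the comment and the example sea state is illustrative.

    ```python
    import math

    def wave_power_density(hm0, te, rho=1025.0, g=9.81):
        """Deep-water omnidirectional wave power per unit crest length (W/m):
        J = rho * g^2 * Hm0^2 * Te / (64 * pi)."""
        return rho * g ** 2 * hm0 ** 2 * te / (64.0 * math.pi)

    # Example sea state: Hm0 = 2.5 m, Te = 9 s
    print(round(wave_power_density(2.5, 9.0) / 1000.0, 1), "kW/m")
    ```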

  11. Simulating large-scale crop yield by using perturbed-parameter ensemble method

    NASA Astrophysics Data System (ADS)

    Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.

    2010-12-01

    Toshichika Iizumi, Masayuki Yokozawa, Gen Sakurai, Motoki Nishimori Agro-Meteorology Division, National Institute for Agro-Environmental Sciences, Japan Abstract One of the pressing issues for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, when developing a process-based large-scale crop model at the scale of general circulation models (GCMs), roughly 100 km in latitude and longitude, researchers face strong spatial heterogeneity in the available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter-perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We called the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and their uncertainty given the data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e. the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed good correspondence with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries. On a grid scale, the correspondence remains high in most grids regardless of the country. However, the model showed comparatively low reproducibility in sloping areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. Because local climate conditions vary widely in such complex terrain, the GCM grid-scale weather inputs are likely one of the major sources of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty in the crop-model parameter values associated with local crop-production aspects; (2) the method can explicitly account for this parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid scale yield, accounting for the nonlinear response of crop yield to weather and management; (4) the method is therefore appropriate for aggregating the simulated sub-grid scale yields to a grid-scale yield, which may explain the high performance of the model in capturing inter-annual variation of yield.
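
    The perturbed-parameter ensemble step described above (draw parameter sets from the posterior PDF, run the crop model for each, and summarize with the ensemble median) can be sketched as follows; the function names are assumptions and run_crop_model stands in for PRYSBI itself.

        import numpy as np

        def perturbed_parameter_ensemble(posterior_samples, run_crop_model, years,
                                         n_members=1500, seed=0):
            """Monte Carlo yield ensemble from posterior parameter samples.

            posterior_samples : (n_samples, 17) array of calibrated parameter draws
            run_crop_model    : function (params, years) -> array of annual yields
            """
            rng = np.random.default_rng(seed)
            idx = rng.integers(0, len(posterior_samples), size=n_members)
            ensemble = np.array([run_crop_model(posterior_samples[i], years) for i in idx])
            return np.median(ensemble, axis=0), ensemble    # ensemble median per year, full ensemble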

  12. Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model

    NASA Astrophysics Data System (ADS)

    O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.

    2015-12-01

    Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.

  13. New Boundary Constraints for Elliptic Systems used in Grid Generation Problems

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper discusses new boundary constraints for elliptic partial differential equations as used in grid generation problems in generalized curvilinear coordinate systems. These constraints, based on the principle of local conservation of thermal energy in the vicinity of the boundaries, are derived using Green's Theorem. They uniquely determine the so-called decay parameters in the source terms of these elliptic systems. These constraints are designed for boundary-clustered grids where large gradients in physical quantities need to be resolved adequately. It is observed that the present formulation also works satisfactorily for mild clustering. Therefore, a closure for the decay parameter specification in elliptic grid generation problems has been provided, resulting in a fully automated elliptic grid generation technique. Thus, there is no need for a parametric study of these decay parameters, since the new constraints fix them uniquely. It is also shown that for Neumann-type boundary conditions, these boundary constraints uniquely determine the solution to the internal elliptic problem, thus eliminating the non-uniqueness of the solution of an internal Neumann boundary value grid generation problem.
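
    For context, the decay parameters referred to above are the exponential coefficients in the control functions of the classical Thompson-type elliptic grid generation system; a standard textbook form (shown here only as background, not the paper's own constraint equations) is

        \xi_{xx} + \xi_{yy} = P(\xi,\eta), \qquad \eta_{xx} + \eta_{yy} = Q(\xi,\eta),

        P(\xi,\eta) = -\sum_i a_i\,\operatorname{sgn}(\xi-\xi_i)\,e^{-c_i|\xi-\xi_i|}
                      -\sum_j b_j\,\operatorname{sgn}(\xi-\xi_j)\,e^{-d_j\sqrt{(\xi-\xi_j)^2+(\eta-\eta_j)^2}},

    with an analogous expression for Q(\xi,\eta). The c_i and d_j are the decay parameters that the new boundary constraints determine uniquely.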

  14. Method for contact resistivity measurements on photovoltaic cells and cell adapted for such measurement

    NASA Technical Reports Server (NTRS)

    Burger, Dale R. (Inventor)

    1986-01-01

    A method is disclosed for scribing at least three grid contacts of a photovoltaic cell to electrically isolate them from the grid contact pattern used to collect solar current generated by the cell, and using the scribed segments for determining parameters of the cell by a combination of contact end resistance (CER) measurements using a minimum of three equally or unequally spaced lines, and transmission line model (TLM) measurements using a minimum of four unequally spaced lines. TLM measurements may be used to determine the sheet resistance under the contact, R_sk, while CER measurements are used to determine the contact resistivity, ρ_c, from a nomograph of contact resistivity as a function of contact end resistance and sheet resistivity under the contact. In some cases, such as the case of silicon photovoltaic cells, the sheet resistivity under the contact may be assumed to be equal to the known sheet resistance, R_s, of the semiconductor material, thereby obviating the need for TLM measurements to determine R_sk.
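
    The nomograph-based CER step is specific to the patent, but the TLM portion follows the standard transmission-line-model line fit, R(d) = (R_sh/W) d + 2 R_c. A hedged Python sketch of that standard extraction (the numbers in the example call are illustrative, not measurements from the patent):

        import numpy as np

        def tlm_fit(gaps_cm, resistances_ohm, width_cm):
            """Standard TLM extraction from R(d) = (R_sh/W)*d + 2*R_c.

            Returns sheet resistance R_sh (ohm/sq), contact resistance R_c (ohm),
            and transfer length L_T (cm).
            """
            slope, intercept = np.polyfit(gaps_cm, resistances_ohm, 1)
            r_sh = slope * width_cm          # ohms per square
            r_c = intercept / 2.0            # one contact's resistance
            l_t = r_c * width_cm / r_sh      # half the magnitude of the x-axis intercept
            return r_sh, r_c, l_t

        print(tlm_fit([0.01, 0.02, 0.04, 0.08], [1.2, 1.9, 3.3, 6.1], width_cm=0.5))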

  15. The natural emergence of the correlation between H2 and star formation rate surface densities in galaxy simulations

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Bovino, Stefano; Capelo, Pedro R.; Volonteri, Marta; Silk, Joseph

    2018-03-01

    In this study, we present a suite of high-resolution numerical simulations of an isolated galaxy to test a sub-grid framework to consistently follow the formation and dissociation of H2 with non-equilibrium chemistry. The latter is solved via the package KROME, coupled to the mesh-less hydrodynamic code GIZMO. We include the effect of star formation (SF), modelled with a physically motivated prescription independent of H2, supernova feedback and mass-losses from low-mass stars, extragalactic and local stellar radiation, and dust and H2 shielding, to investigate the emergence of the observed correlation between H2 and SF rate surface densities. We present two different sub-grid models and compare them with on-the-fly radiative transfer (RT) calculations, to assess the main differences and limits of the different approaches. We also discuss a sub-grid clumping factor model to enhance the H2 formation, consistent with our SF prescription, which is crucial, at the achieved resolution, to reproduce the correlation with H2. We find that both sub-grid models perform very well relative to the RT simulation, giving comparable results, with moderate differences, but at much lower computational cost. We also find that, while the Kennicutt-Schmidt relation for the total gas is not strongly affected by the different ingredients included in the simulations, the H2-based counterpart is much more sensitive, because of the crucial role played by the dissociating radiative flux and the gas shielding.

  16. A Control Algorithm for Chaotic Physical Systems

    DTIC Science & Technology

    1991-10-01

    revision expands the grid to cover the entire area of any attractor that is present. 5 Map Selection The final choices of the state-space mapping process... interval h?; overrange R0; control parameter interval AkO and range [kbro, khigh]; iteration depth. * State-space mapping: 1. Set up grid by expanding

  17. Subgrid Modeling Geomorphological and Ecological Processes in Salt Marsh Evolution

    NASA Astrophysics Data System (ADS)

    Shi, F.; Kirby, J. T., Jr.; Wu, G.; Abdolali, A.; Deb, M.

    2016-12-01

    Numerically modeling the long-term evolution of salt marshes is challenging because it requires an extensive use of computational resources. Due to the presence of narrow tidal creeks, variations of salt marsh topography can be significant over spatial length scales on the order of a meter. With the growing availability of high-resolution bathymetry measurements, like LiDAR-derived DEM data, it is increasingly desirable to run a high-resolution model in a large domain and for a long period of time to get trends of sedimentation patterns, morphological change and marsh evolution. However, high spatial resolution poses a big challenge in both computational time and memory storage when simulating a salt marsh with dimensions of up to O(100 km^2) with a small time step. In this study, we have developed a so-called Pre-storage, Sub-grid Model (PSM, Wu et al., 2015) for simulating flooding and draining processes in salt marshes. The simulation of Brokenbridge salt marsh, Delaware, shows that, with the combination of the sub-grid model and the pre-storage method, over 2 orders of magnitude computational speed-up can be achieved with minimal loss of model accuracy. We recently extended PSM to include a sediment transport component and models for biomass growth and sedimentation in the sub-grid model framework. The sediment transport model is formulated based on a newly derived sub-grid sediment concentration equation following Defina's (2000) area-averaging procedure. Suspended sediment transport is modeled by the advection-diffusion equation at the coarse-grid level, but the local erosion and sedimentation rates are integrated at the sub-grid level. The morphological model is based on the existing morphological model in NearCoM (Shi et al., 2013), extended to include organic production from the biomass model. The vegetation biomass is predicted by a simple logistic equation model proposed by Marani et al. (2010). The biomass component is loosely coupled with the hydrodynamic and sedimentation models owing to the different time scales of the physical and ecological processes. The coupled model is being applied to Delaware marsh evolution in response to rising sea level and changing sediment supplies.

  18. Interplay Between Energy-Market Dynamics and Physical Stability of a Smart Power Grid

    NASA Astrophysics Data System (ADS)

    Picozzi, Sergio; Mammoli, Andrea; Sorrentino, Francesco

    2013-03-01

    A smart power grid is being envisioned for the future which, among other features, should enable users to play the dual role of consumers as well as producers and traders of energy, thanks to emerging renewable energy production and energy storage technologies. As a complex dynamical system, any power grid is subject to physical instabilities. With existing grids, such instabilities tend to be caused by natural disasters, human errors, or weather-related peaks in demand. In this work we analyze the impact, upon the stability of a smart grid, of the energy-market dynamics arising from users' ability to buy energy from and sell energy to other users. The stability analysis of the resulting dynamical system is performed assuming different proposed models for this market of the future, and the corresponding stability regions in parameter space are identified. We test our theoretical findings by comparing them with data collected from some existing prototype systems.

  19. HiPEP Ion Optics System Evaluation Using Gridlets

    NASA Technical Reports Server (NTRS)

    Williams, John D.; Farnell, Cody C.; Laufer, D. Mark; Martinez, Rafael A.

    2004-01-01

    Experimental measurements are presented for sub-scale ion optics systems comprised of 7 and 19 aperture pairs with geometrical features that are similar to the HiPEP ion optics system. Effects of hole diameter and grid-to-grid spacing are presented as functions of applied voltage and beamlet current. Recommendations are made for the beamlet current range where the ion optics system can be safely operated without experiencing direct impingement of high energy ions on the accelerator grid surface. Measurements are also presented of the accelerator grid voltage where beam plasma electrons backstream through the ion optics system. Results of numerical simulations obtained with the ffx code are compared to both the impingement limit and backstreaming measurements. An emphasis is placed on identifying differences between measurements and simulation predictions to highlight areas where more research is needed. Relatively large effects are observed in simulations when the discharge chamber plasma properties and ion optics geometry are varied. Parameters investigated using simulations include the applied voltages, grid spacing, hole-to-hole spacing, doubles-to-singles ratio, plasma potential, and electron temperature; and estimates are provided for the sensitivity of impingement limits on these parameters.

  20. Variation in aerosol nucleation and growth in coal-fired power plant plumes due to background aerosol, meteorology and emissions: sensitivity analysis and parameterization.

    NASA Astrophysics Data System (ADS)

    Stevens, R. G.; Lonsdale, C. L.; Brock, C. A.; Reed, M. K.; Crawford, J. H.; Holloway, J. S.; Ryerson, T. B.; Huey, L. G.; Nowak, J. B.; Pierce, J. R.

    2012-04-01

    New-particle formation in the plumes of coal-fired power plants and other anthropogenic sulphur sources may be an important source of particles in the atmosphere. It remains unclear, however, how best to reproduce this formation in global and regional aerosol models with grid-box lengths of tens of kilometres and larger. The predictive power of these models is thus limited by the resultant uncertainties in aerosol size distributions. In this presentation, we focus on sub-grid sulphate aerosol processes within coal-fired power plant plumes: the sub-grid oxidation of SO2 with condensation of H2SO4 onto newly formed and pre-existing particles. Based on the results of the System for Atmospheric Modelling (SAM), a Large-Eddy Simulation/Cloud-Resolving Model (LES/CRM) with online TwO Moment Aerosol Sectional (TOMAS) microphysics, we develop a computationally efficient, but physically based, parameterization that predicts the characteristics of aerosol formed within coal-fired power plant plumes based on parameters commonly available in global and regional-scale models. Given large-scale mean meteorological parameters, emissions from the power plant, the mean background condensation sink, and the desired distance from the source, the parameterization predicts the fraction of the emitted SO2 that is oxidized to H2SO4, the fraction of that H2SO4 that forms new particles instead of condensing onto preexisting particles, the median diameter of the newly formed particles, and the number of newly formed particles per kilogram of SO2 emitted. We perform a sensitivity analysis of these characteristics of the aerosol size distribution to the meteorological parameters, the condensation sink, and the emissions. In general, new-particle formation and growth are greatly reduced during polluted conditions due to the large preexisting aerosol surface area for H2SO4 condensation and particle coagulation. The new-particle formation and growth rates are also a strong function of the amount of sunlight and NOx, since both control OH concentrations. Decreases in NOx emissions without simultaneous decreases in SO2 emissions increase new-particle formation and growth due to increased oxidation of SO2. The parameterization we describe here should allow for more accurate predictions of aerosol size distributions and a greater confidence in the effects of aerosols in climate and health studies.

  1. ON JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY CONCEPTUAL FRAMEWORK FOR MODEL EVALUATION

    EPA Science Inventory

    The general situation (but exemplified in urban areas) where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing grid-based air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...

  2. Evaluation of a vortex-based subgrid stress model using DNS databases

    NASA Technical Reports Server (NTRS)

    Misra, Ashish; Lund, Thomas S.

    1996-01-01

    The performance of a SubGrid Stress (SGS) model for Large-Eddy Simulation (LES) developed by Misra & Pullin (1996) is studied for forced and decaying isotropic turbulence on a 32^3 grid. The physical viability of the model assumptions is tested using DNS databases. The results from LES of forced turbulence at Taylor Reynolds number R_λ ≈ 90 are compared with filtered DNS fields. Probability density functions (pdfs) of the subgrid energy transfer, total dissipation, and the stretch of the subgrid vorticity by the resolved velocity-gradient tensor show reasonable agreement with the DNS data. The model is also tested in LES of decaying isotropic turbulence, where it correctly predicts the decay rate and energy spectra measured by Comte-Bellot & Corrsin (1971).

  3. A GRID OF THREE-DIMENSIONAL STELLAR ATMOSPHERE MODELS OF SOLAR METALLICITY. I. GENERAL PROPERTIES, GRANULATION, AND ATMOSPHERIC EXPANSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trampedach, Regner; Asplund, Martin; Collet, Remo

    2013-05-20

    Present grids of stellar atmosphere models are the workhorses in interpreting stellar observations and determining their fundamental parameters. These models rely on greatly simplified models of convection, however, lending less predictive power to such models of late-type stars. We present a grid of improved and more reliable stellar atmosphere models of late-type stars, based on deep, three-dimensional (3D), convective, stellar atmosphere simulations. This grid is to be used in general for interpreting observations and improving stellar and asteroseismic modeling. We solve the Navier-Stokes equations in 3D, concurrently with the radiative transfer equation, for a range of atmospheric parameters covering most of stellar evolution with convection at the surface. We emphasize the use of the best available atomic physics for quantitative predictions and comparisons with observations. We present granulation size, convective expansion of the acoustic cavity, and asymptotic adiabat as functions of atmospheric parameters.

  4. Assessment of dynamic closure for premixed combustion large eddy simulation

    NASA Astrophysics Data System (ADS)

    Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan

    2015-09-01

    Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width or be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically by LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. The detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insights and reasoning are provided to explain the observed behaviour.

  5. Use of In-Situ and Remotely Sensed Snow Observations for the National Water Model in Both an Analysis and Calibration Framework.

    NASA Astrophysics Data System (ADS)

    Karsten, L. R.; Gochis, D.; Dugger, A. L.; McCreight, J. L.; Barlage, M. J.; Fall, G. M.; Olheiser, C.

    2017-12-01

    Since version 1.0 of the National Water Model (NWM) went operational in Summer 2016, several upgrades to the model have occurred to improve hydrologic prediction for the continental United States. Version 1.1 of the NWM (Spring 2017) includes upgrades to parameter datasets impacting land surface hydrologic processes. These parameter datasets were upgraded using an automated calibration workflow that utilizes the Dynamically Dimensioned Search (DDS) algorithm to adjust parameter values using observed streamflow. In addition, these upgrades took advantage of various observations collected for snow analysis, in particular: in-situ SNOTEL observations in the Western US; volunteer in-situ observations across the entire US; gamma-derived snow water equivalent (SWE) observations courtesy of the NWS NOAA Corps program; gridded snow depth and SWE products from the Jet Propulsion Laboratory (JPL) Airborne Snow Observatory (ASO); gridded remotely sensed satellite-based snow products (MODIS, AMSR2, VIIRS, ATMS); and gridded SWE from the NWS Snow Data Assimilation System (SNODAS). This study explores the use of these observations to quantify NWM error and improvements from version 1.0 to version 1.1, along with subsequent work since then. In addition, this study explores the use of snow observations within the automated calibration workflow. Gridded parameter fields impacting the accumulation and ablation of snow states in the NWM were adjusted and calibrated using gridded remotely sensed snow states, SNODAS products, and in-situ snow observations. This calibration adjustment took place over various ecological regions in snow-dominated parts of the US for a retrospective period to capture a variety of climatological conditions. Specifically, the latest calibrated parameters impacting streamflow were held constant and only parameters impacting snow physics were tuned using snow observations and analysis. The adjusted parameter datasets were then used to force the model over an independent period for analysis against both snow and streamflow observations to see if improvements took place. The goal of this work is to further improve snow physics in the NWM, along with identifying areas where further work will take place in the future, such as data assimilation or further forcing improvements.
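
    For readers unfamiliar with DDS, a minimal sketch of a DDS-style greedy search loop (in the spirit of Tolson and Shoemaker's algorithm) is given below; the objective function, bounds, and simple clipping at the bounds are illustrative choices, and this is not the NWM calibration workflow itself.

        import numpy as np

        def dds_calibrate(objective, x0, lower, upper, max_iter=500, r=0.2, seed=0):
            """Greedily perturb a shrinking random subset of parameters.

            objective : function(params) -> error to minimize (e.g. streamflow or SWE error)
            """
            rng = np.random.default_rng(seed)
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            x_best = np.asarray(x0, dtype=float)
            f_best = objective(x_best)
            n = len(x_best)
            for i in range(1, max_iter + 1):
                # probability of perturbing each parameter decreases with iteration
                p = 1.0 - np.log(i) / np.log(max_iter)
                mask = rng.random(n) < p
                if not mask.any():
                    mask[rng.integers(n)] = True
                x_new = x_best.copy()
                step = rng.normal(0.0, r * (upper - lower))
                x_new[mask] += step[mask]
                x_new = np.clip(x_new, lower, upper)    # simplified bound handling
                f_new = objective(x_new)
                if f_new <= f_best:                     # greedy acceptance
                    x_best, f_best = x_new, f_new
            return x_best, f_best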

  6. THE MASS-LOSS RETURN FROM EVOLVED STARS TO THE LARGE MAGELLANIC CLOUD. VI. LUMINOSITIES AND MASS-LOSS RATES ON POPULATION SCALES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riebel, D.; Meixner, M.; Srinivasan, S.

    We present results from the first application of the Grid of Red Supergiant and Asymptotic Giant Branch ModelS (GRAMS) model grid to the entire evolved stellar population of the Large Magellanic Cloud (LMC). GRAMS is a pre-computed grid of 80,843 radiative transfer models of evolved stars and circumstellar dust shells composed of either silicate or carbonaceous dust. We fit GRAMS models to ~30,000 asymptotic giant branch (AGB) and red supergiant (RSG) stars in the LMC, using 12 bands of photometry from the optical to the mid-infrared. Our published data set consists of thousands of evolved stars with individually determined evolutionary parameters such as luminosity and mass-loss rate. The GRAMS grid has a greater than 80% accuracy rate discriminating between oxygen- and carbon-rich chemistry. The global dust injection rate to the interstellar medium (ISM) of the LMC from RSGs and AGB stars is on the order of 2.1 × 10^-5 M_Sun yr^-1, equivalent to a total mass injection rate (including the gas) into the ISM of ~6 × 10^-3 M_Sun yr^-1. Carbon stars inject two and a half times as much dust into the ISM as do O-rich AGB stars, but the same amount of mass. We determine a bolometric correction factor for C-rich AGB stars in the K_s band as a function of J - K_s color, BC_Ks = -0.40(J - K_s)^2 + 1.83(J - K_s) + 1.29. We determine several IR color proxies for the dust mass-loss rate (Mdot_d) from C-rich AGB stars, such as log Mdot_d = -18.90/((K_s - [8.0]) + 3.37) - 5.93. We find that a larger fraction of AGB stars exhibiting the 'long-secondary period' phenomenon are more O-rich than stars dominated by radial pulsations, and AGB stars without detectable mass loss do not appear on either the first-overtone or fundamental-mode pulsation sequences.
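
    The two fitted relations quoted above are straightforward to evaluate; a small Python check (the magnitudes in the example calls are illustrative, not LMC measurements):

        import numpy as np

        def bc_ks(j_minus_ks):
            """Bolometric correction in the K_s band for C-rich AGB stars (record's fit)."""
            x = np.asarray(j_minus_ks, dtype=float)
            return -0.40 * x**2 + 1.83 * x + 1.29

        def log_dust_mlr(ks, irac8):
            """log10 dust mass-loss rate proxy from the K_s - [8.0] color (record's fit)."""
            return -18.90 / ((ks - irac8) + 3.37) - 5.93

        print(bc_ks(1.6))              # BC_Ks for J - K_s = 1.6
        print(log_dust_mlr(9.8, 8.2))  # log Mdot_d for K_s = 9.8, [8.0] = 8.2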

  7. Using a Virtual Experiment to Analyze Infiltration Process from Point to Grid-cell Size Scale

    NASA Astrophysics Data System (ADS)

    Barrios, M. I.

    2013-12-01

    Hydrological science requires a consistent theoretical corpus describing the relationships between dominant physical processes at different spatial and temporal scales. However, the strong spatial heterogeneities and non-linearities of these processes make the development of multiscale conceptualizations difficult. Therefore, understanding scaling is a key issue for advancing this science. This work is focused on the use of virtual experiments to address the scaling of vertical infiltration from a physically based model at the point scale to a simplified, physically meaningful modeling approach at the grid-cell scale. Numerical simulations have the advantage of dealing with a wide range of boundary and initial conditions, compared with field experimentation. The aim of the work was to show the utility of numerical simulations in discovering relationships between the hydrological parameters at both scales, and to use this synthetic experience as a medium for teaching the complex nature of this hydrological process. The Green-Ampt model was used to represent vertical infiltration at the point scale, and a conceptual storage model was employed to simulate the infiltration process at the grid-cell scale. Lognormal and beta probability distribution functions were assumed to represent the heterogeneity of soil hydraulic parameters at the point scale. The linkages between point-scale parameters and grid-cell scale parameters were established by inverse simulations based on the mass balance equation and the averaging of the flow at the point scale. Results have shown numerical stability issues for particular conditions, revealed the complex nature of the non-linear relationships between the models' parameters at both scales, and indicate that the parameterization of point-scale processes at the coarser scale is governed by the amplification of non-linear effects. The findings of these simulations have been used by the students to identify potential research questions on scale issues. Moreover, the implementation of this virtual lab improved their ability to understand the rationale of these processes and how to transfer the mathematical models to computational representations.
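
    As background for the point-scale side of the virtual experiment, a minimal Green-Ampt sketch under continuous ponding is shown below; the parameter values are illustrative and the fixed-point solution of the cumulative-infiltration equation is one common choice, not necessarily the scheme used in the study.

        import numpy as np

        def green_ampt_cumulative(t_hr, K=0.65, psi=11.0, d_theta=0.35, tol=1e-8):
            """Cumulative infiltration F (cm) from F = K*t + psi*d_theta*ln(1 + F/(psi*d_theta)).

            K (cm/hr) saturated conductivity, psi (cm) wetting-front suction head,
            d_theta moisture deficit; illustrative values, solved by fixed-point iteration.
            """
            s = psi * d_theta
            F = max(K * t_hr, 1e-6)                      # starting guess
            for _ in range(200):
                F_new = K * t_hr + s * np.log(1.0 + F / s)
                if abs(F_new - F) < tol:
                    break
                F = F_new
            return F_new

        def infiltration_rate(F, K=0.65, psi=11.0, d_theta=0.35):
            """Infiltration capacity f = K * (1 + psi*d_theta/F)."""
            return K * (1.0 + psi * d_theta / F)

        F2 = green_ampt_cumulative(2.0)                  # after 2 hours of ponding
        print(F2, infiltration_rate(F2))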

  8. CIELO-A GIS integrated model for climatic and water balance simulation in islands environments

    NASA Astrophysics Data System (ADS)

    Azevedo, E. B.; Pereira, L. S.

    2003-04-01

    The model CIELO (acronym for "Clima Insular à Escala Local") is a physically based model that simulates the climatic variables of an island using data from a single synoptic reference meteorological station. The reference station "knows" its position in the orographic and dynamic regime context. The domain of computation is a GIS raster grid parameterised with a digital elevation model (DEM). The grid is oriented following the direction of the air-mass circulation through a specific algorithm named the rotational terrain model (RTM). The model consists of two main sub-models. One, relative to the advective component simulation, assumes the Foehn effect to reproduce the dynamic and thermodynamic processes occurring when an air mass moves over the island's orographic obstacle. This makes it possible to simulate the air temperature, air humidity, cloudiness and precipitation as influenced by the orography along the air displacement. The second concerns the radiative component as affected by clouds of orographic origin and by the shadow produced by the relief. The initial-state parameters are computed starting from the reference meteorological station across the DEM transect down to sea level on the windward side. Then, starting from sea level, the model computes the local-scale meteorological parameters according to the direction of the air displacement, which is adjusted with the RTM. The air pressure, temperature and humidity are directly calculated for each cell in the computational grid, while several algorithms are used to compute the cloudiness, net radiation, evapotranspiration, and precipitation. The model presented in this paper has been calibrated and validated using data from several meteorological stations and a larger number of rainfall stations located at various elevations in the Azores Islands.

  9. Laser Induced Aluminum Surface Breakdown Model

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Liu, Jiwen; Zhang, Sijun; Wang, Ten-See (Technical Monitor)

    2002-01-01

    Laser-powered propulsion systems involve complex fluid dynamics, thermodynamics and radiative transfer processes. Based on an unstructured-grid, pressure-based computational aerothermodynamics platform, several sub-models describing such underlying physics as laser ray tracing and focusing, thermal non-equilibrium, plasma radiation and air spark ignition have been developed. This proposed work shall extend the numerical platform and existing sub-models to include the aluminum wall surface Inverse Bremsstrahlung (IB) effect, from which surface ablation and free-electron generation can be initiated without relying on the air spark ignition sub-model. The following tasks will be performed to accomplish the research objectives.

  10. “Transference Ratios” to Predict Total Oxidized Sulfur and Nitrogen Deposition – Part II, Modeling Results

    EPA Science Inventory

    The current study examines predictions of transference ratios and related modeled parameters for oxidized sulfur and oxidized nitrogen using five years (2002-2006) of 12-km grid cell-specific annual estimates from EPA’s Community Air Quality Model (CMAQ) for five selected sub-re...

  11. Aspects on HTS applications in confined power grids

    NASA Astrophysics Data System (ADS)

    Arndt, T.; Grundmann, J.; Kuhnert, A.; Kummeth, P.; Nick, W.; Oomen, M.; Schacherer, C.; Schmidt, W.

    2014-12-01

    In an increasing number of electric power grids, the share of distributed energy generation is increasing as well. The grids have to cope with a considerable change of power flow, which has an impact on the optimum topology of the grids and sub-grids (high-voltage, medium-voltage and low-voltage sub-grids) and the size of quasi-autonomous grid sections. Furthermore, the stability of a grid is influenced by its size. Thus special benefits of HTS applications in the power grid might become most visible in confined power grids.

  12. Coarse Grid CFD for underresolved simulation

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Viellieber, Mathias O.; Himmel, Steffen R.

    2010-11-01

    CFD simulation of the complete reactor core of a nuclear power plant requires exceedingly large computational resources, so this brute-force approach has not been pursued yet. The traditional approach is 1D subchannel analysis employing calibrated transport models. Coarse Grid CFD is an attractive alternative technique based on strongly under-resolved CFD and the inviscid Euler equations. Obviously, using inviscid equations and coarse grids does not resolve all the physics, requiring additional volumetric source terms modelling viscosity and other sub-grid effects. The source terms are implemented via correlations derived from fully resolved representative simulations, which can be tabulated or computed on the fly. The technique is demonstrated for a Carnot diffusor and a wire-wrap fuel assembly [1]. [1] Himmel, S.R., PhD thesis, Stuttgart University, Germany, 2009, http://bibliothek.fzk.de/zb/berichte/FZKA7468.pdf

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid spacings and time steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
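
    For context, the interface conditions that such a generalized Robin coupling combines are the usual continuity of temperature and heat flux across the material interface; a generic mixed form with free weights (shown here as background, not CHAMP's optimized condition) reads

        T_1 = T_2, \qquad k_1 \frac{\partial T_1}{\partial n} = k_2 \frac{\partial T_2}{\partial n},

        \alpha\, T_1 + \beta\, k_1 \frac{\partial T_1}{\partial n} \;=\; \alpha\, T_2 + \beta\, k_2 \frac{\partial T_2}{\partial n},

    where n is the interface normal; CHAMP selects the weights (alpha, beta) from a local stability and optimization analysis rather than fixing them a priori.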

  14. Using Unsupervised Learning to Unlock the Potential of Hydrologic Similarity

    NASA Astrophysics Data System (ADS)

    Chaney, N.; Newman, A. J.

    2017-12-01

    By clustering environmental data into representative hydrologic response units (HRUs), hydrologic similarity aims to harness the covariance between a system's physical environment and its hydrologic response to create reduced-order models. This is the primary approach through which sub-grid hydrologic processes are represented in large-scale models (e.g., Earth System Models). Although the possibilities of hydrologic similarity are extensive, its practical implementations have been limited to 1-D bins of oversimplified metrics of hydrologic response (e.g., topographic index); this is a missed opportunity. In this presentation we will show how unsupervised learning is unlocking the potential of hydrologic similarity; clustering methods enable generalized frameworks to effectively and efficiently harness the petabytes of global environmental data to robustly characterize sub-grid heterogeneity in large-scale models. To illustrate the potential that unsupervised learning has for advancing hydrologic similarity, we introduce a hierarchical clustering algorithm (HCA) that clusters very high resolution (30-100 m) elevation, soil, climate, and land cover data to assemble a domain's representative HRUs. These HRUs are then used to parameterize the sub-grid heterogeneity in land surface models; for this study we use the GFDL LM4 model, the land component of the GFDL Earth System Model. To explore HCA and its impacts on the hydrologic system we use a 1/4-degree grid cell in southeastern California as a test site. HCA is used to construct an ensemble of 9 different HRU configurations, each with a different number of HRUs; for each ensemble member LM4 is run between 2002 and 2014 with a 26-year spinup. The analysis of the ensemble of model simulations shows that: 1) clustering the high-dimensional environmental data space leads to a robust representation of the role of the physical environment in the coupled water, energy, and carbon cycles at a relatively low number of HRUs; 2) the reduced-order model with around 300 HRUs effectively reproduces the fully distributed model simulation (30 m) at less than 1/1000 of the computational expense; 3) assigning each grid cell of the fully distributed grid to an HRU via HCA enables novel visualization methods for large-scale models, which has significant implications for how these models are applied and evaluated. We will conclude by outlining the potential that this work has within operational prediction systems, including numerical weather prediction, Earth System models, and Early Warning systems.
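
    A minimal sketch of the clustering idea (flat agglomerative clustering of standardized fine-grid fields into a fixed number of HRUs) is given below; the field names, single-level clustering, and HRU count are simplifications of the hierarchical HCA described in the record.

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import AgglomerativeClustering

        def build_hrus(elevation, soil, climate, landcover, n_hrus=300):
            """Cluster fine-grid environmental fields within one coarse cell into HRUs."""
            features = np.column_stack([f.ravel() for f in (elevation, soil, climate, landcover)])
            z = StandardScaler().fit_transform(features)        # put fields on a common scale
            labels = AgglomerativeClustering(n_clusters=n_hrus).fit_predict(z)
            hru_means = np.array([features[labels == k].mean(axis=0) for k in range(n_hrus)])
            return labels.reshape(elevation.shape), hru_means   # HRU map and per-HRU mean features

        # illustrative random fields on a 40x40 fine grid, clustered into 20 HRUs
        rng = np.random.default_rng(0)
        fields = [rng.random((40, 40)) for _ in range(4)]
        labels, means = build_hrus(*fields, n_hrus=20)
        print(labels.shape, means.shape)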

  15. The added value of dynamical downscaling in a climate change scenario simulation:A case study for European Alps and East Asia

    NASA Astrophysics Data System (ADS)

    Im, Eun-Soon; Coppola, Erika; Giorgi, Filippo

    2010-05-01

    Since anthropogenic climate change is an important factor for future human life all over the planet and its effects are not globally uniform, climate information at regional or local scales becomes increasingly important for an accurate assessment of the potential impact of climate change on societies and ecosystems. High-resolution information at a scale fine enough to resolve complex geographical features can be a critical factor for a successful linkage between climate models and impact assessment studies. However, the scale mismatch between them remains a major problem. One method for overcoming the resolution limitations of global climate models and for adding regional detail to coarse-grid global projections is dynamical downscaling by means of a regional climate model. In this study, the ECHAM5/MPI-OM (1.875 degree) A1B scenario simulation has been dynamically downscaled using two different approaches within the framework of the RegCM3 modeling system. First, a mosaic-type parameterization of subgrid-scale topography and land use (Sub-BATS) is applied over the European Alpine region. The Sub-BATS system is composed of 15 km coarse-grid cells and 3 km sub-grid cells. Second, we developed the RegCM3 one-way double-nested system, with the mother domain encompassing the eastern regions of Asia at 60 km grid spacing and the nested domain covering the Korean Peninsula at 20 km grid spacing. By comparing the regional climate model output and the driving global model ECHAM5/MPI-OM output, it is possible to estimate the added value of physically based dynamical downscaling when, for example, impact studies at the hydrological scale are performed.

  16. Characteristics of sub-daily precipitation extremes in observed data and regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Beranová, Romana; Kyselý, Jan; Hanel, Martin

    2018-04-01

    The study compares characteristics of observed sub-daily precipitation extremes in the Czech Republic with those simulated by the Hadley Centre Regional Model version 3 (HadRM3) and Rossby Centre Regional Atmospheric Model version 4 (RCA4) regional climate models (RCMs) driven by reanalyses, and examines diurnal cycles of hourly precipitation and their dependence on intensity and surface temperature. The observed warm-season (May-September) maxima of short-duration (1, 2 and 3 h) amounts show one diurnal peak in the afternoon, which is simulated reasonably well by RCA4, although the peak occurs too early in the model. HadRM3 provides an unrealistic diurnal cycle with a nighttime peak and an afternoon minimum coinciding with the observed maximum for all three ensemble members, which suggests that convection is not captured realistically. Distorted relationships of the diurnal cycles of hourly precipitation to daily maximum temperature in HadRM3 provide further evidence that the underlying physical mechanisms are misrepresented in this RCM. Goodness-of-fit tests indicate that the generalised extreme value distribution is an applicable model for both observed and RCM-simulated precipitation maxima. However, the RCMs are not able to capture the range of the shape parameter estimates of distributions of short-duration precipitation maxima realistically, leading to either too many (nearly all for HadRM3) or too few (RCA4) grid boxes in which the shape parameter corresponds to a heavy tail. This means that the distributions of maxima of sub-daily amounts are distorted in the RCM-simulated data and do not match reality well. Therefore, projected changes of sub-daily precipitation extremes in climate change scenarios based on RCMs not resolving convection need to be interpreted with caution.
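
    The GEV fitting and heavy-tail check described above can be sketched with scipy; note the sign convention used by scipy, and that the sample below is synthetic, not the observed Czech maxima.

        import numpy as np
        from scipy.stats import genextreme

        def fit_gev_shape(maxima):
            """Fit a GEV to warm-season precipitation maxima and flag heavy tails.

            scipy's genextreme uses c = -xi, so a negative c corresponds to a heavy
            (Frechet-type) upper tail in the usual hydrological convention.
            """
            c, loc, scale = genextreme.fit(maxima)
            return {"shape_xi": -c, "loc": loc, "scale": scale, "heavy_tail": c < 0}

        # synthetic 1-h maxima (mm/h), illustrative only
        sample = genextreme.rvs(-0.15, loc=12.0, scale=4.0, size=40, random_state=1)
        print(fit_gev_shape(sample))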

  17. Quantifying the impact of sub-grid surface wind variability on sea salt and dust emissions in CAM5

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Zhao, Chun; Wan, Hui; Qian, Yun; Easter, Richard C.; Ghan, Steven J.; Sakaguchi, Koichi; Liu, Xiaohong

    2016-02-01

    This paper evaluates the impact of sub-grid variability of surface wind on sea salt and dust emissions in the Community Atmosphere Model version 5 (CAM5). The basic strategy is to calculate emission fluxes multiple times, using different wind speed samples of a Weibull probability distribution derived from model-predicted grid-box mean quantities. In order to derive the Weibull distribution, the sub-grid standard deviation of surface wind speed is estimated by taking into account four mechanisms: turbulence under neutral and stable conditions, dry convective eddies, moist convective eddies over the ocean, and air motions induced by mesoscale systems and fine-scale topography over land. The contributions of turbulence and dry convective eddy are parameterized using schemes from the literature. Wind variabilities caused by moist convective eddies and fine-scale topography are estimated using empirical relationships derived from an operational weather analysis data set at 15 km resolution. The estimated sub-grid standard deviations of surface wind speed agree well with reference results derived from 1 year of global weather analysis at 15 km resolution and from two regional model simulations with 3 km grid spacing. The wind-distribution-based emission calculations are implemented in CAM5. In terms of computational cost, the increase in total simulation time turns out to be less than 3 %. Simulations at 2° resolution indicate that sub-grid wind variability has relatively small impacts (about 7 % increase) on the global annual mean emission of sea salt aerosols, but considerable influence on the emission of dust. Among the considered mechanisms, dry convective eddies and mesoscale flows associated with topography are major causes of dust emission enhancement. With all four mechanisms included and without additional adjustment of uncertain parameters in the model, the simulated global and annual mean dust emission increases by about 50 % compared to the default model. By tuning the globally constant dust emission scale factor, the global annual mean dust emission, aerosol optical depth, and top-of-atmosphere radiative fluxes can be adjusted to the level of the default model, but the frequency distribution of dust emission changes, with more contribution from weaker wind events and less contribution from stronger wind events. In Africa and Asia, the overall frequencies of occurrence of dust emissions increase, and the seasonal variations are enhanced, while the geographical patterns of the emission frequency show little change.
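
    A minimal sketch of the wind-distribution-based emission averaging is given below; matching the Weibull parameters to the grid-box mean and sub-grid standard deviation by moments is an assumption about the derivation, and the power-law flux is only an illustrative stand-in for CAM5's sea salt and dust schemes.

        import numpy as np
        from scipy.special import gamma
        from scipy.optimize import brentq

        def weibull_from_moments(mean_u, std_u):
            """Recover Weibull shape k and scale c from the mean and standard deviation."""
            cv2 = (std_u / mean_u) ** 2
            f = lambda k: gamma(1.0 + 2.0 / k) / gamma(1.0 + 1.0 / k) ** 2 - 1.0 - cv2
            k = brentq(f, 0.5, 20.0)                 # solve for the shape parameter
            c = mean_u / gamma(1.0 + 1.0 / k)        # scale from the mean
            return k, c

        def mean_emission_flux(mean_u, std_u, flux_fn, n=2000, seed=0):
            """Average a wind-dependent emission flux over sampled sub-grid winds."""
            k, c = weibull_from_moments(mean_u, std_u)
            u = c * np.random.default_rng(seed).weibull(k, size=n)
            return flux_fn(u).mean()

        flux = lambda u: u ** 3.41                   # illustrative power-law flux only
        print(mean_emission_flux(7.0, 2.5, flux), flux(7.0))   # sub-grid average vs grid-mean wind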

  18. Quantifying the impact of sub-grid surface wind variability on sea salt and dust emissions in CAM5

    DOE PAGES

    Zhang, Kai; Zhao, Chun; Wan, Hui; ...

    2016-02-12

    This paper evaluates the impact of sub-grid variability of surface wind on sea salt and dust emissions in the Community Atmosphere Model version 5 (CAM5). The basic strategy is to calculate emission fluxes multiple times, using different wind speed samples of a Weibull probability distribution derived from model-predicted grid-box mean quantities. In order to derive the Weibull distribution, the sub-grid standard deviation of surface wind speed is estimated by taking into account four mechanisms: turbulence under neutral and stable conditions, dry convective eddies, moist convective eddies over the ocean, and air motions induced by mesoscale systems and fine-scale topography over land. The contributions of turbulence and dry convective eddy are parameterized using schemes from the literature. Wind variabilities caused by moist convective eddies and fine-scale topography are estimated using empirical relationships derived from an operational weather analysis data set at 15 km resolution. The estimated sub-grid standard deviations of surface wind speed agree well with reference results derived from 1 year of global weather analysis at 15 km resolution and from two regional model simulations with 3 km grid spacing. The wind-distribution-based emission calculations are implemented in CAM5. In terms of computational cost, the increase in total simulation time turns out to be less than 3 %. Simulations at 2° resolution indicate that sub-grid wind variability has relatively small impacts (about 7 % increase) on the global annual mean emission of sea salt aerosols, but considerable influence on the emission of dust. Among the considered mechanisms, dry convective eddies and mesoscale flows associated with topography are major causes of dust emission enhancement. With all four mechanisms included and without additional adjustment of uncertain parameters in the model, the simulated global and annual mean dust emission increases by about 50 % compared to the default model. By tuning the globally constant dust emission scale factor, the global annual mean dust emission, aerosol optical depth, and top-of-atmosphere radiative fluxes can be adjusted to the level of the default model, but the frequency distribution of dust emission changes, with more contribution from weaker wind events and less contribution from stronger wind events. Lastly, in Africa and Asia, the overall frequencies of occurrence of dust emissions increase, and the seasonal variations are enhanced, while the geographical patterns of the emission frequency show little change.

  19. Quantifying the impact of sub-grid surface wind variability on sea salt and dust emissions in CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Zhao, Chun; Wan, Hui

    This paper evaluates the impact of sub-grid variability of surface wind on sea salt and dust emissions in the Community Atmosphere Model version 5 (CAM5). The basic strategy is to calculate emission fluxes multiple times, using different wind speed samples of a Weibull probability distribution derived from model-predicted grid-box mean quantities. In order to derive the Weibull distribution, the sub-grid standard deviation of surface wind speed is estimated by taking into account four mechanisms: turbulence under neutral and stable conditions, dry convective eddies, moist convective eddies over the ocean, and air motions induced by mesoscale systems and fine-scale topography over land. The contributions of turbulence and dry convective eddy are parameterized using schemes from the literature. Wind variabilities caused by moist convective eddies and fine-scale topography are estimated using empirical relationships derived from an operational weather analysis data set at 15 km resolution. The estimated sub-grid standard deviations of surface wind speed agree well with reference results derived from 1 year of global weather analysis at 15 km resolution and from two regional model simulations with 3 km grid spacing. The wind-distribution-based emission calculations are implemented in CAM5. In terms of computational cost, the increase in total simulation time turns out to be less than 3 %. Simulations at 2° resolution indicate that sub-grid wind variability has relatively small impacts (about 7 % increase) on the global annual mean emission of sea salt aerosols, but considerable influence on the emission of dust. Among the considered mechanisms, dry convective eddies and mesoscale flows associated with topography are major causes of dust emission enhancement. With all four mechanisms included and without additional adjustment of uncertain parameters in the model, the simulated global and annual mean dust emission increases by about 50 % compared to the default model. By tuning the globally constant dust emission scale factor, the global annual mean dust emission, aerosol optical depth, and top-of-atmosphere radiative fluxes can be adjusted to the level of the default model, but the frequency distribution of dust emission changes, with more contribution from weaker wind events and less contribution from stronger wind events. Lastly, in Africa and Asia, the overall frequencies of occurrence of dust emissions increase, and the seasonal variations are enhanced, while the geographical patterns of the emission frequency show little change.

  20. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  1. A New Stellar Atmosphere Grid and Comparisons with HST /STIS CALSPEC Flux Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bohlin, Ralph C.; Fleming, Scott W.; Gordon, Karl D.

    The Space Telescope Imaging Spectrograph has measured the spectral energy distributions for several stars of types O, B, A, F, and G. These absolute fluxes from the CALSPEC database are fit with a new spectral grid computed from the ATLAS-APOGEE ATLAS9 model atmosphere database using a chi-square minimization technique in four parameters. The quality of the fits is compared for the complete LTE grids by Castelli and Kurucz (CK04) and our new comprehensive LTE grid (BOSZ). For the cooler stars, the fits with the MARCS LTE grid are also evaluated, while the hottest stars are also fit with the NLTE Lanz and Hubeny OB star grids. Unfortunately, these NLTE models do not transition smoothly in the infrared to agree with our new BOSZ LTE grid at the NLTE lower limit of T_eff = 15,000 K. The new BOSZ grid is available via the Space Telescope Institute MAST archive and has a much more finely sampled IR wavelength scale than CK04, which will facilitate the modeling of stars observed by the James Webb Space Telescope. Our result for the angular diameter of Sirius agrees with the ground-based interferometric value.
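
    A minimal sketch of a grid-based chi-square fit of model spectra to an observed flux distribution is shown below; the per-model free normalization and simple minimum search are simplifications, not the four-parameter fitting procedure used for the CALSPEC comparison.

        import numpy as np

        def best_grid_model(obs_flux, obs_err, model_fluxes, model_params):
            """Pick the grid model minimizing chi-square against an observed SED.

            model_fluxes : grid spectra resampled to the observed wavelengths (n_models, n_wave)
            model_params : parameter tuples per model, e.g. (Teff, logg, [M/H], E(B-V))
            """
            w = 1.0 / obs_err**2
            # free per-model normalization (plays the role of an angular-diameter-like scale)
            scale = (model_fluxes * obs_flux * w).sum(axis=1) / (model_fluxes**2 * w).sum(axis=1)
            resid = obs_flux - scale[:, None] * model_fluxes
            chi2 = (resid**2 * w).sum(axis=1)
            i = int(np.argmin(chi2))
            return model_params[i], scale[i], chi2[i]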

  2. Extending a multi-scale parameter regionalization (MPR) method by introducing parameter constrained optimization and flexible transfer functions

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2015-04-01

    A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model specialized for alpine environments, but it has been applied to a wide range of basins all over the world (see Kling et al., 2014 for an overview). Within the methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer functions (TFs) and subsequent averaging methods, whereby only the TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalisation approach was extended in order to allow for a more meta-heuristic handling of the transfer functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter estimation scheme: the constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and is related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints, as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer functions: spline-based functions enable arbitrary forms of transfer functions. This is important since in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer function itself. The contribution presents the results of, and experiences with, the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi:10.1029/2008WR007327. Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi:10.1080/02626667.2014.959956.
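
    A minimal sketch of the MPR idea (apply a transfer function to sub-grid soil information, then upscale to the model grid, calibrating only the TF hyper-parameters) follows; the linear transfer function and arithmetic-mean upscaling operator are illustrative choices, not the specific functions used for COSERO.

        import numpy as np

        def regionalize_parameter(sand, clay, hyper, coarse_shape, block):
            """Transfer function on the fine grid, then upscaling to the model grid.

            hyper = (a, b, c) are the only calibrated quantities (the TF hyper-parameters).
            """
            a, b, c = hyper
            fine_param = a + b * sand + c * clay          # transfer function at sub-grid scale
            ny, nx = coarse_shape
            coarse = fine_param.reshape(ny, block, nx, block).mean(axis=(1, 3))  # upscaling operator
            return coarse

        # illustrative soil texture fields: a 4x4 model grid, 10x10 sub-grid cells each
        rng = np.random.default_rng(0)
        sand = rng.uniform(0.1, 0.8, size=(40, 40))
        clay = rng.uniform(0.05, 0.5, size=(40, 40))
        print(regionalize_parameter(sand, clay, (0.05, 0.4, -0.2), (4, 4), 10).shape)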

  3. Downscaling Aerosols and the Impact of Neglected Subgrid Processes on Direct Aerosol Radiative Forcing for a Representative Global Climate Model Grid Spacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Qian, Yun; Fast, Jerome D.

    2011-07-13

    Recent improvements to many global climate models include detailed, prognostic aerosol calculations intended to better reproduce the observed climate. However, the trace gas and aerosol fields are treated at the grid-cell scale with no attempt to account for sub-grid impacts on the aerosol fields. This paper begins to quantify the error introduced by the neglected sub-grid variability for the shortwave aerosol radiative forcing for a representative climate model grid spacing of 75 km. An analysis of the value added in downscaling aerosol fields is also presented to give context to the WRF-Chem simulations used for the sub-grid analysis. We found that 1) the impact of neglected sub-grid variability on the aerosol radiative forcing is strongest in regions of complex topography and complicated flow patterns, and 2) scale-induced differences in emissions contribute strongly to the impact of neglected sub-grid processes on the aerosol radiative forcing. These two effects together, when simulated at 75 km vs. 3 km in WRF-Chem, result in an average daytime mean bias of over 30% in top-of-atmosphere shortwave aerosol radiative forcing for a large percentage of central Mexico during the MILAGRO field campaign.
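    A short sketch of where the neglected sub-grid variability error comes from: applying a nonlinear forcing relation to the grid-mean aerosol field differs from averaging the forcing computed on the fine-scale field. The forcing curve and aerosol statistics below are illustrative placeholders, not the WRF-Chem radiation scheme.

      # Sketch of the sub-grid variability error: forcing of the mean AOD
      # versus the mean of the forcing over ~3 km cells inside a ~75 km cell.
      import numpy as np

      def forcing(aod):
          # hypothetical nonlinear shortwave aerosol forcing curve (W m-2)
          return -25.0 * (1.0 - np.exp(-2.0 * aod))

      rng = np.random.default_rng(1)
      aod_fine = rng.lognormal(np.log(0.15), 0.6, size=(25, 25))  # ~3 km cells

      f_of_mean = forcing(aod_fine.mean())   # coarse-grid (~75 km) estimate
      mean_of_f = forcing(aod_fine).mean()   # sub-grid-resolving estimate

      bias_pct = 100.0 * (f_of_mean - mean_of_f) / abs(mean_of_f)
      print(f"coarse {f_of_mean:.2f} vs fine {mean_of_f:.2f} W m-2 "
            f"({bias_pct:+.1f}%)")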

  4. Simulating pre-galactic metal enrichment for JWST deep-field observations

    NASA Astrophysics Data System (ADS)

    Jaacks, Jason

    2017-08-01

    We propose to create a new suite of mesoscale cosmological volume simulations with custom-built sub-grid physics in which we independently track the contributions from Population III and Population II star formation to the total metals in the interstellar medium (ISM) of the first galaxies, and in the diffuse IGM at an epoch prior to reionization. These simulations will fill a gap in our simulation knowledge about chemical enrichment in the pre-reionization universe, which is a crucial need given the impending observational push into this epoch with near-future ground- and space-based telescopes. This project is the natural extension of our successful Cycle 24 theory proposal (HST-AR-14569.001-A; PI Jaacks), in which we developed a new Pop III star formation sub-grid model that is currently being utilized to study the baseline metal enrichment of pre-reionization systems.

  5. MAFIA Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiland, T.; Bartsch, M.; Becker, U.

    1997-02-01

    MAFIA Version 4.0 is an almost completely new version of the general-purpose electromagnetic simulator that has been known for 13 years. The major improvements concern the new graphical user interface based on state-of-the-art technology as well as a series of new solvers for new physics problems. MAFIA now covers heat distribution, electro-quasistatics, S-parameters in the frequency domain, particle beam tracking in linear accelerators, acoustics and even elastodynamics. The solvers that were available in earlier versions have also been improved and/or extended, for example the complex eigenmode solver and the 2D/3D coupled PIC solvers. Time domain solvers have new waveguide boundary conditions with extremely low reflection even near the cutoff frequency, concentrated elements are available, as well as a variety of signal processing options. Probably the most valuable addition is the recursive sub-grid capability, which enables modeling of very small details in large structures.

  6. Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Giriamaji, Sharath S.

    2008-01-01

    Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k(epsilon) model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f(sub k) is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f(sub k) varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulations of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.

  7. Improvements in sub-grid, microphysics averages using quadrature based approaches

    NASA Astrophysics Data System (ADS)

    Chowdhary, K.; Debusschere, B.; Larson, V. E.

    2013-12-01

    Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. the autoconversion rate, essentially interpreting the cloud microphysics quantities as random variables in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, of the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
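    A sketch of the deterministic-sampling idea under stated assumptions: the grid-box mean of a Kessler-type autoconversion rate, A(qc) = k·max(qc − qcrit, 0), is computed for an assumed lognormal sub-grid distribution of cloud water using Gauss-Hermite quadrature and compared against plain Monte Carlo. The parameter values are illustrative, and this is not the implementation used by the authors.

      # Grid-box mean of a Kessler-type autoconversion rate for an assumed
      # lognormal sub-grid distribution of cloud water qc, via Gauss-Hermite
      # quadrature (deterministic sampling points) versus Monte Carlo.
      import numpy as np

      k, qcrit = 1e-3, 5e-4                  # s^-1, kg/kg (illustrative values)
      autoconv = lambda qc: k * np.maximum(qc - qcrit, 0.0)

      mu, sigma = np.log(6e-4), 0.5          # assumed lognormal parameters of qc

      # Gauss-Hermite quadrature: E[f(qc)] with qc = exp(mu + sigma*Z), Z ~ N(0,1)
      x, w = np.polynomial.hermite.hermgauss(16)
      qc_nodes = np.exp(mu + sigma * np.sqrt(2.0) * x)
      mean_quad = np.sum(w * autoconv(qc_nodes)) / np.sqrt(np.pi)

      # Monte Carlo reference
      rng = np.random.default_rng(2)
      mean_mc = autoconv(rng.lognormal(mu, sigma, size=100_000)).mean()

      print(f"quadrature {mean_quad:.3e}   Monte Carlo {mean_mc:.3e}")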

  8. Domain decomposition by the advancing-partition method for parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Banihashemi, legal representative, Soheila (Inventor); Pirzadeh, Shahyar Z. (Inventor)

    2012-01-01

    In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which a front comprises only faces of new cells which intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local-to-global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local-to-global mapping.

  9. Photoionization Modeling of Oxygen K Absorption in the Interstellar Medium: The Chandra Grating Spectra of XTE J1817-330

    NASA Technical Reports Server (NTRS)

    Gatuzz, E.; Garcia, J.; Mendoza, C.; Kallman, T. R.; Witthoeft, M.; Lohfink, A.; Bautista, M. A.; Palmeri, P.; Quinet, P.

    2013-01-01

    We present detailed analyses of oxygen K absorption in the interstellar medium (ISM) using four high-resolution Chandra spectra toward the low-mass X-ray binary XTE J1817-330. The 11-25 Angstrom broadband is described with a simple absorption model that takes into account the pile-up effect and results in an estimate of the hydrogen column density. The oxygen K-edge region (21-25 Angstroms) is fitted with the physical warmabs model, which is based on a photoionization model grid generated with the xstar code with the most up-to-date atomic database. This approach allows a benchmark of the atomic data, which involves wavelength shifts of both the K lines and photoionization cross sections in order to fit the observed spectra accurately. As a result we obtain a column density of N(sub H) = 1.38 +/- 0.01 × 10(exp 21) cm(exp -2); an ionization parameter of log xi = -2.70 +/- 0.023; an oxygen abundance of A(sub O) = 0.689 (+0.015/-0.010); and ionization fractions of O(sub I)/O = 0.911, O(sub II)/O = 0.077, and O(sub III)/O = 0.012 that are in good agreement with results from previous studies. Since the oxygen abundance in warmabs is given relative to the solar standard of Grevesse & Sauval, a rescaling with the revision by Asplund et al. yields A(sub O) = 0.952 (+0.020/-0.013), a value close to solar that reinforces the new standard. We identify several atomic absorption lines (K(alpha), K(beta), and K(gamma) in O(sub I) and O(sub II), and K(alpha) in O(sub III), O(sub VI), and O(sub VII)), the last two probably residing in the neighborhood of the source rather than in the ISM. This is the first firm detection of oxygen K resonances with principal quantum numbers n greater than 2 associated with ISM cold absorption.
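    The abundance rescaling quoted above follows directly from the ratio of the assumed solar oxygen abundances. Taking log epsilon_O = 8.83 for Grevesse & Sauval and log epsilon_O = 8.69 for the Asplund et al. revision (values assumed here for illustration, not stated in the record), the conversion is

      A_{\mathrm{O}}^{\mathrm{Asplund}}
        \simeq A_{\mathrm{O}}^{\mathrm{GS}} \times 10^{\,8.83 - 8.69}
        \approx 0.689 \times 1.38
        \approx 0.95,

    consistent with the rescaled value of 0.952 reported above.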

  10. A stable and accurate partitioned algorithm for conjugate heat transfer

    NASA Astrophysics Data System (ADS)

    Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2017-09-01

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
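    For context, the physical matching conditions at a CHT interface are continuity of temperature and of normal heat flux, T_1 = T_2 and k_1 ∂_n T_1 = k_2 ∂_n T_2. A generic generalized Robin (mixed) combination of these two conditions, of the kind described above, can be written as

      \alpha\, T_1 + k_1 \frac{\partial T_1}{\partial n}
        \;=\; \alpha\, T_2 + k_2 \frac{\partial T_2}{\partial n}
        \qquad \text{on the interface,}

    where alpha is a tunable coupling weight; the specific weights used by the CHAMP scheme come from its local stability analysis and are not reproduced here.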

  11. A stable and accurate partitioned algorithm for conjugate heat transfer

    DOE PAGES

    Meng, F.; Banks, J. W.; Henshaw, W. D.; ...

    2017-04-25

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  12. Moving overlapping grids with adaptive mesh refinement for high-speed reactive and non-reactive flow

    NASA Astrophysics Data System (ADS)

    Henshaw, William D.; Schwendeman, Donald W.

    2006-08-01

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows that demonstrate the use and accuracy of the numerical approach.

  13. A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration

    2017-11-01

    An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.
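    A minimal sketch of the virtual-interrogation-window idea on a synthetic field, assuming a simple box filter: statistics "on" the windows come from the window means (resolved scale), statistics "within" the windows from the residual variance (sub-grid scale). The window sizes and the synthetic field are illustrative, not the PLIF/PIV data.

      # Box-filter a high-resolution 2-D scalar field with a virtual
      # interrogation window of side n_pix and split the total variance into
      # a resolved part (variance of window means) and a sub-grid part
      # (mean of within-window variances).
      import numpy as np

      def window_statistics(field, n_pix):
          ny, nx = (s // n_pix for s in field.shape)
          blocks = field[:ny * n_pix, :nx * n_pix].reshape(ny, n_pix, nx, n_pix)
          resolved_var = blocks.mean(axis=(1, 3)).var()   # on the windows
          subgrid_var = blocks.var(axis=(1, 3)).mean()    # within the windows
          return resolved_var, subgrid_var

      rng = np.random.default_rng(3)
      field = rng.normal(size=(512, 512)).cumsum(axis=0).cumsum(axis=1)

      for n_pix in (8, 16, 32, 64):
          resolved, subgrid = window_statistics(field, n_pix)
          print(f"window {n_pix:3d} px  resolved var {resolved:12.1f}"
                f"  sub-grid var {subgrid:10.1f}")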

  14. Thresholds for Shifting Visually Perceived Eye Level Due to Incremental Pitches

    NASA Technical Reports Server (NTRS)

    Scott, Donald M.; Welch, Robert; Cohen, M. M.; Hill, Cyndi

    2001-01-01

    Visually perceived eye level (VPEL) was judged by subjects as they viewed a luminous grid pattern that was pitched by 2 or 5 deg increments between -20 deg and +20 deg. Subjects were dark adapted for 20 min and indicated VPEL by directing the beam of a laser pointer to the rear wall of a 1.25 m cubic pitch box that rotated about a horizontal axis midpoint on the rear wall. Data were analyzed by ANOVA and the Tukey HSD procedure. Results showed a 10.0 deg threshold for pitches P(sub i) above the reference pitch P(sub 0), and a -10.3 deg threshold for pitches P(sub i) below the reference pitch P(sub 0). Threshold data for pitches P(sub i) < P(sub 0) suggest an asymmetric threshold for VPEL below and above physical eye level.

  15. Biological and physical controls on O2/Ar, Ar and pCO2 variability at the Western Antarctic Peninsula and in the Drake Passage

    NASA Astrophysics Data System (ADS)

    Eveleth, R.; Cassar, N.; Doney, S. C.; Munro, D. R.; Sweeney, C.

    2017-05-01

    Using simultaneous sub-kilometer resolution underway measurements of surface O2/Ar, total O2 and pCO2 from annual austral summer surveys in 2012, 2013 and 2014, we explore the impacts of biological and physical processes on the O2 and pCO2 system spatial and interannual variability at the Western Antarctic Peninsula (WAP). In the WAP, mean O2/Ar supersaturation was (7.6±9.1)% and mean pCO2 supersaturation was (-28±22)%. We see substantial spatial variability in O2 and pCO2, including sub-mesoscale/mesoscale variability with decorrelation length scales of 4.5 km, consistent with the regional Rossby radius. This variability is embedded within onshore-offshore gradients. O2 in the LTER grid region is driven primarily by biological processes, as seen by the median ratio of the magnitude of biological oxygen (O2/Ar) to physical oxygen (Ar) supersaturation anomalies (%) relative to atmospheric equilibrium (2.6); however, physical processes have a more pronounced influence in the southern onshore region of the grid, where we see active sea-ice melting. Total O2 measurements should be interpreted with caution in regions of significant sea-ice formation and melt and glacial meltwater input. pCO2 undersaturation predominantly reflects biological processes in the LTER grid. In contrast, we compare these results to the Drake Passage, where gas supersaturations vary by smaller magnitudes and decorrelate at length scales of 12 km, in line with latitudinal changes in the regional Rossby radius. Here biological processes induce smaller O2/Ar supersaturations (mean (0.14±1.3)%) and pCO2 undersaturations (mean (-2.8±3.9)%) than in the WAP, and pressure changes, bubble and gas exchange fluxes drive stable Ar supersaturations.
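    A sketch of one common way to obtain such a decorrelation length scale from an along-track series: compute the autocorrelation function and take the first lag at which it drops below 1/e. The synthetic track, the 0.5 km sampling interval and the 1/e convention are assumptions for illustration; the estimator used in the study may differ.

      # Estimate a decorrelation length scale as the first lag at which the
      # autocorrelation of an along-track series falls below 1/e.
      import numpy as np

      def decorrelation_length(signal, dx_km):
          s = signal - signal.mean()
          acf = np.correlate(s, s, mode="full")[s.size - 1:]
          acf /= acf[0]
          below = np.nonzero(acf < 1.0 / np.e)[0]
          return below[0] * dx_km if below.size else np.nan

      # synthetic O2/Ar-like track: smoothed white noise, sampled every 0.5 km
      rng = np.random.default_rng(4)
      kernel = np.exp(-np.arange(-30, 31) ** 2 / (2 * 10.0 ** 2))
      track = np.convolve(rng.normal(size=2000), kernel / kernel.sum(),
                          mode="same")
      print(f"decorrelation length ~ {decorrelation_length(track, 0.5):.1f} km")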

  16. Data for polarization in charmless B{yields}{phi}K*: A signal for new physics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Das, Prasanta Kumar; Yang, K.-C.

    2005-05-01

    The recent observations of sizable transverse fractions of B{yields}{phi}K* may hint at the existence of new physics. We analyze all possible new-physics four-quark operators and find that two classes of new-physics operators could offer resolutions to the B{yields}{phi}K* polarization anomaly. The operators in the first class have structures (1-{gamma}{sub 5})x(1-{gamma}{sub 5}), {sigma}(1-{gamma}{sub 5})x{sigma}(1-{gamma}{sub 5}), and in the second class (1+{gamma}{sub 5})x(1+{gamma}{sub 5}), {sigma}(1+{gamma}{sub 5})x{sigma}(1+{gamma}{sub 5}). For each class, the new-physics effects can be lumped into a single parameter. Two possible experimental results of polarization phases, arg(A{sub perpendicular})-arg(A{sub parallel}){approx_equal}{pi} or 0, originating from the phase ambiguity in the data, could be separately accounted for by our two new-physics scenarios: the first (second) scenario with the first (second) class of new-physics operators. The consistency between the data and our new-physics analysis suggests a small new-physics weak phase, together with a large(r) strong phase. We obtain sizable transverse fractions {lambda}{sub parallel{sub parallel}}+{lambda}{sub perpendicular{sub perpendicular}}{approx_equal}{lambda}{sub 00}, in accordance with the observations. We find {lambda}{sub parallel{sub parallel}}{approx_equal}0.8{lambda}{sub perpendicular{sub perpendicular}} in the first scenario but {lambda}{sub parallel{sub parallel}} > or approx. {lambda}{sub perpendicular{sub perpendicular}} in the second scenario. We discuss the impact of the new-physics weak phase on observations.

  17. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Some examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  18. Absolute continuum intensity diagnostics of a novel large coaxial gridded hollow cathode argon plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Ruilin; Yuan, Chengxun, E-mail: yuancx@hit.edu.cn, E-mail: zhouzx@hit.edu.cn; Jia, Jieshu

    2016-08-15

    This paper reports a novel coaxial gridded hollow discharge operated at low pressure (20 Pa–80 Pa) in an argon atmosphere. A homogeneous hollow discharge was observed under different conditions, and the excitation mechanism and the discharge parameters for the hollow cathode plasma were examined at length. An optical emission spectrometry (OES) method, with a special focus on the absolute continuum intensity method, was employed to measure the plasma parameters. The Langmuir probe measurement (LPM) was used to verify the OES results. Both provided electron density values (n{sub e}) on the order of 10{sup 16} m{sup −3} for different plasma settings. Taken together, the results show that the OES method is an effective approach to diagnosing similar plasmas, especially when the LPM is difficult to operate.

  19. Challenges for MSSM Higgs searches at hadron colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carena, Marcela S.; /Fermilab; Menon, A.

    2007-04-01

    In this article we analyze the impact of B-physics and Higgs physics at LEP on standard and non-standard Higgs boson searches at the Tevatron and the LHC, within the framework of minimal flavor violating supersymmetric models. The B-physics constraints we consider come from the experimental measurements of the rare B-decays b {yields} s{gamma} and B{sub u} {yields} {tau}{nu} and the experimental limit on the B{sub s} {yields} {mu}{sup +}{mu}{sup -} branching ratio. We show that these constraints are severe for large values of the trilinear soft breaking parameter A{sub t}, rendering the non-standard Higgs searches at hadron colliders less promising. On the contrary, these bounds are relaxed for small values of A{sub t} and large values of the Higgsino mass parameter {mu}, enhancing the prospects for the direct detection of non-standard Higgs bosons at both colliders. We also consider the available ATLAS and CMS projected sensitivities in the standard model Higgs search channels, and we discuss the LHC's ability to probe the whole MSSM parameter space. In addition we also consider the expected Tevatron collider sensitivities in the standard model Higgs h {yields} b{bar b} channel to show that it may be able to find 3 {sigma} evidence in the B-physics allowed regions for small or moderate values of the stop mixing parameter.

  20. Sub-grid drag model for immersed vertical cylinders in fluidized beds

    DOE PAGES

    Verma, Vikrant; Li, Tingwen; Dietiker, Jean -Francois; ...

    2017-01-03

    Immersed vertical cylinders are often used as heat exchangers in gas-solid fluidized beds. Computational Fluid Dynamics (CFD) simulations are computationally expensive for large scale systems with bundles of cylinders. Therefore, sub-grid models are required to facilitate simulations on a coarse grid, where internal cylinders are treated as a porous medium. The influence of cylinders on the gas-solid flow tends to enhance segregation and affect the gas-solid drag. A correction to the gas-solid drag must be modeled using a suitable sub-grid constitutive relationship. In the past, Sarkar et al. developed a sub-grid drag model for horizontal cylinder arrays based on 2D simulations. However, the effect of a vertical cylinder arrangement was not considered due to computational complexities. In this study, highly resolved 3D simulations with vertical cylinders were performed in small periodic domains. These simulations were filtered to construct a sub-grid drag model which can then be implemented in coarse-grid simulations. The gas-solid drag was filtered for different solids fractions, and a significant reduction in drag was identified when compared with simulations without cylinders and simulations with horizontal cylinders. Slip velocities significantly increase when vertical cylinders are present. Lastly, the vertical suspension drag due to vertical cylinders is insignificant; however, substantial horizontal suspension drag is observed, which is consistent with the findings for horizontal cylinders.
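    A sketch of the filtering step used to construct such a sub-grid drag correction, under stated assumptions: a placeholder homogeneous drag law and synthetic fine-grid fields stand in for the resolved simulations, and the correction is the ratio of the filtered drag to the drag law evaluated at the filtered (coarse) quantities, binned by filtered solids fraction.

      # Construct a filtered drag correction H = filtered drag / drag law at
      # filtered quantities, binned by filtered solids fraction. The drag law
      # and the "fine-grid" fields are synthetic stand-ins.
      import numpy as np

      def drag_law(eps_s, slip):
          # placeholder homogeneous gas-solid drag coefficient
          return 3.7 * eps_s * (1.0 - eps_s) * np.abs(slip)

      def filter_blocks(a, n):
          return a.reshape(a.shape[0] // n, n, a.shape[1] // n, n).mean(axis=(1, 3))

      rng = np.random.default_rng(5)
      eps_s = np.clip(rng.normal(0.25, 0.10, (256, 256)), 0.0, 0.6)  # solids fraction
      slip = rng.normal(1.0, 0.3, (256, 256))                        # slip velocity

      n = 16                                       # filter size in fine cells
      H = filter_blocks(drag_law(eps_s, slip), n) / drag_law(
          filter_blocks(eps_s, n), filter_blocks(slip, n))

      eps_f = filter_blocks(eps_s, n).ravel()
      bins = np.linspace(0.05, 0.45, 9)
      idx = np.digitize(eps_f, bins)
      for b in range(1, len(bins)):
          sel = idx == b
          if sel.any():
              print(f"eps_s {bins[b-1]:.2f}-{bins[b]:.2f}:  "
                    f"H = {H.ravel()[sel].mean():.3f}")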

  1. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE PAGES

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel; ...

    2016-10-20

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. The scale dependence of resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  2. Are atmospheric updrafts a key to unlocking climate forcing and sensitivity?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donner, Leo J.; O'Brien, Travis A.; Rieger, Daniel

    Both climate forcing and climate sensitivity persist as stubborn uncertainties limiting the extent to which climate models can provide actionable scientific scenarios for climate change. A key, explicit control on cloud–aerosol interactions, the largest uncertainty in climate forcing, is the vertical velocity of cloud-scale updrafts. Model-based studies of climate sensitivity indicate that convective entrainment, which is closely related to updraft speeds, is an important control on climate sensitivity. Updraft vertical velocities also drive many physical processes essential to numerical weather prediction. Vertical velocities and their role in atmospheric physical processes have been given very limited attention in models for climate and numerical weather prediction. The relevant physical scales range down to tens of meters and are thus frequently sub-grid and require parameterization. Many state-of-science convection parameterizations provide mass fluxes without specifying vertical velocities. The scale dependence of resolved vertical velocities in climate models may capture this behavior, but it has not been accounted for when parameterizing cloud and precipitation processes in current models. New observations of convective vertical velocities offer a potentially promising path toward developing process-level cloud models and parameterizations for climate and numerical weather prediction. Taking account of the scale dependence of resolved vertical velocities offers a path to matching cloud-scale physical processes and their driving dynamics more realistically, with a prospect of reduced uncertainty in both climate forcing and sensitivity.

  3. Flexible hydrological modeling - Disaggregation from lumped catchment scale to higher spatial resolutions

    NASA Astrophysics Data System (ADS)

    Tran, Quoc Quan; Willems, Patrick; Pannemans, Bart; Blanckaert, Joris; Pereira, Fernando; Nossent, Jiri; Cauwenberghs, Kris; Vansteenkiste, Thomas

    2015-04-01

    Based on an international literature review of model structures of existing rainfall-runoff and hydrological models, a generalized model structure is proposed. It consists of different types of meteorological components, storage components, splitting components and routing components. They can be spatially organized in a lumped way, or on a grid, spatially interlinked by source-to-sink or grid-to-grid (cell-to-cell) routing. The grid size of the model can be chosen depending on the application. The user can select/change the spatial resolution depending on the needs and/or the evaluation of the accuracy of the model results, or use different spatial resolutions in parallel for different applications. Major research questions addressed during the study are: How can we assure consistent results of the model at any spatial detail? How can we avoid strong or sudden changes in model parameters and corresponding simulation results when one moves from one level of spatial detail to another? How can we limit the problem of overparameterization/equifinality when we move from the lumped model to the spatially distributed model? The proposed approach is a step-wise one, where first the lumped conceptual model is calibrated using a systematic, data-based approach, followed by a disaggregation step where the lumped parameters are disaggregated based on spatial catchment characteristics (topography, land use, soil characteristics). In this way, disaggregation can be done down to any spatial scale, and consistently among scales. Only a few additional calibration parameters are introduced to scale the absolute spatial differences in model parameters, while keeping the relative differences as obtained from the spatial catchment characteristics. After calibration of the spatial model, the accuracies of the lumped and spatial models were compared for peak, low and cumulative runoff totals and sub-flows (at downstream and internal gauging stations). For the distributed models, additional validation of the spatial results was done for the groundwater head values at observation wells. To ensure that the lumped model can produce results as accurate as, or close to, those of the spatially distributed models regardless of the number of parameters and implemented physical processes, it was checked whether the structure of the lumped models had to be adjusted. The concept has been implemented in a PCRaster-Python platform and tested for two Belgian case studies (catchments of the rivers Dijle and Grote Nete). So far, use is made of existing model structures (NAM, PDM, VHM and HBV). Acknowledgement: These results were obtained within the scope of research activities for the Flemish Environment Agency (VMM) - division Operational Water Management on "Next Generation hydrological modeling", in cooperation with IMDC consultants, and for Flanders Hydraulics Research (Waterbouwkundig Laboratorium) on "Effect of climate change on the hydrological regime of navigable watercourses in Belgium".
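    A minimal sketch of the disaggregation step described above, assuming a hypothetical soil-depth map as the spatial catchment characteristic and a single scaling exponent as the additional calibration parameter: the relative spatial differences follow the characteristic, while renormalization preserves the calibrated lumped value as the grid mean. Illustrative only; not the PCRaster-Python implementation.

      # Disaggregate a calibrated lumped parameter to a grid: relative
      # differences follow a catchment characteristic raised to a scaling
      # exponent, and the field is renormalized so its mean reproduces the
      # lumped value.
      import numpy as np

      def disaggregate(lumped_value, characteristic, exponent):
          pattern = characteristic ** exponent     # relative spatial differences
          pattern /= pattern.mean()                # preserve the catchment mean
          return lumped_value * pattern

      rng = np.random.default_rng(6)
      soil_depth = rng.uniform(0.3, 2.0, size=(40, 40))  # hypothetical map

      storage_max_lumped = 120.0                   # calibrated lumped parameter
      for beta in (0.5, 1.0, 2.0):                 # additional scaling parameter
          field = disaggregate(storage_max_lumped, soil_depth, beta)
          print(f"beta={beta}: mean={field.mean():.1f}"
                f"  min={field.min():.1f}  max={field.max():.1f}")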

  4. Modeling the near-ultraviolet band of GK stars. III. Dependence on abundance pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Short, C. Ian; Campbell, Eamonn A., E-mail: ishort@ap.smu.ca

    2013-06-01

    We extend the grid of non-LTE (NLTE) models presented in Paper II to explore variations in abundance pattern in two ways: (1) the adoption of the Asplund et al. (GASS10) abundances, (2) for stars of metallicity, [M/H], of –0.5, the adoption of a non-solar enhancement of α-elements by +0.3 dex. Moreover, our grid of synthetic spectral energy distributions (SEDs) is interpolated to a finer numerical resolution in both T {sub eff} (ΔT {sub eff} = 25 K) and log g (Δlog g = 0.25). We compare the values of T {sub eff} and log g inferred from fitting LTE and NLTE SEDs to observed SEDs throughout the entire visible band, and in an ad hoc 'blue' band. We compare our spectrophotometrically derived T {sub eff} values to a variety of T {sub eff} calibrations, including more empirical ones, drawn from the literature. For stars of solar metallicity, we find that the adoption of the GASS10 abundances lowers the inferred T {sub eff} value by 25-50 K for late-type giants, and NLTE models computed with the GASS10 abundances give T {sub eff} results that are marginally in better agreement with other T {sub eff} calibrations. For stars of [M/H] = –0.5 there is marginal evidence that adoption of α-enhancement further lowers the derived T {sub eff} value by 50 K. Stellar parameters inferred from fitting NLTE models to SEDs are more dependent than LTE models on the wavelength region being fitted, and we find that the effect depends on how heavily line blanketed the fitting region is, whether the fitting region is to the blue of the Wien peak of the star's SED, or both.
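    A sketch of the SED-fitting step, reduced to a one-parameter (T {sub eff} only) toy problem with blackbody models standing in for the real synthetic-spectra grid: chi-square is minimized over the grid while the flux normalization (the angular-diameter-like scale factor) is solved analytically at each grid point.

      # Chi-square fit of an observed SED against a finely sampled T_eff grid,
      # with the best flux scale solved in closed form at each grid point.
      # Blackbodies stand in for the real model grid; data are synthetic.
      import numpy as np

      h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(wav_m, teff):
          return (2 * h * c**2 / wav_m**5) / (np.exp(h * c / (wav_m * kB * teff)) - 1.0)

      wav = np.linspace(0.3e-6, 2.5e-6, 200)
      teff_grid = np.arange(5000.0, 9000.0, 25.0)          # 25 K sampling, as above

      rng = np.random.default_rng(7)                       # synthetic "observation"
      obs = planck(wav, 7000.0) * 1e-16 * (1 + 0.02 * rng.normal(size=wav.size))
      err = 0.02 * obs

      chi2 = []
      for teff in teff_grid:
          model = planck(wav, teff)
          scale = np.sum(obs * model / err**2) / np.sum(model**2 / err**2)
          chi2.append(np.sum(((obs - scale * model) / err) ** 2))

      print(f"best-fit T_eff = {teff_grid[int(np.argmin(chi2))]:.0f} K")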

  5. Numerical weather prediction model tuning via ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.

    2011-12-01

    This paper discusses a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and it seems very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model, which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an atmospheric general circulation model based ensemble prediction system show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of a global top-end NWP model tuning exercise are presented.
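    A stripped-down sketch of the EPPES idea, with a toy forecast model and an ad hoc Gaussian-likelihood update standing in for the actual algorithm: each member runs with parameter values drawn from a proposal distribution, members are scored against verifying observations, and the proposal is re-weighted toward the better-performing values.

      # Toy EPPES-like loop: draw member parameters from a proposal, score
      # members against observations, and update the proposal accordingly.
      import numpy as np

      rng = np.random.default_rng(8)
      true_param = 1.7                                      # "unknown" closure parameter
      forecast = lambda p: p * np.sin(np.linspace(0.0, 6.0, 50))   # toy model
      obs = forecast(true_param) + 0.1 * rng.normal(size=50)       # verifying obs

      mu, sigma = 0.5, 1.0                                  # initial proposal
      for cycle in range(20):                               # forecast cycles
          params = rng.normal(mu, sigma, size=30)           # 30-member ensemble
          rmse = np.array([np.sqrt(np.mean((forecast(p) - obs) ** 2))
                           for p in params])
          weights = np.exp(-0.5 * (rmse / 0.1) ** 2)        # likelihood weights
          weights /= weights.sum()
          mu = np.sum(weights * params)                     # feed back the merits
          sigma = max(np.sqrt(np.sum(weights * (params - mu) ** 2)), 0.05)

      print(f"estimated parameter {mu:.2f} (truth {true_param})")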

  6. Does topological information matter for power grid vulnerability?

    PubMed

    Ouyang, Min; Yang, Kun

    2014-12-01

    Power grids, which play an important role in supporting the economy of a region as well as the life of its citizens, could be attacked by terrorists or enemies to damage the region. Depending on the level of power grid information collected by the terrorists, their attack strategies might differ. This paper groups power grid information into four levels: no information, purely topological information (PTI), topological information with generator and load nodes (GLNI), and full information (including component physical properties and flow parameter information), and then identifies possible attack strategies for each information level. Analyzing and comparing power grid vulnerability under these attack strategies from both the terrorists' and the utility companies' points of view gives rise to an approach to quantify the relative values of three types of information: PTI, GLNI, and component parameter information (CPI). This approach can provide information regarding the extent to which topological information matters for power system vulnerability decisions. Taking several test systems as examples, results show that for small attacks with p ≤ 0.1, CPI matters the most; when taking attack cost into consideration and assuming that the terrorists use the optimum cost-efficient attack intensity, CPI has the largest cost-based information value.

  7. Does topological information matter for power grid vulnerability?

    NASA Astrophysics Data System (ADS)

    Ouyang, Min; Yang, Kun

    2014-12-01

    Power grids, which play an important role in supporting the economy of a region as well as the life of its citizens, could be attacked by terrorists or enemies to damage the region. Depending on the level of power grid information collected by the terrorists, their attack strategies might differ. This paper groups power grid information into four levels: no information, purely topological information (PTI), topological information with generator and load nodes (GLNI), and full information (including component physical properties and flow parameter information), and then identifies possible attack strategies for each information level. Analyzing and comparing power grid vulnerability under these attack strategies from both the terrorists' and the utility companies' points of view gives rise to an approach to quantify the relative values of three types of information: PTI, GLNI, and component parameter information (CPI). This approach can provide information regarding the extent to which topological information matters for power system vulnerability decisions. Taking several test systems as examples, results show that for small attacks with p ≤ 0.1, CPI matters the most; when taking attack cost into consideration and assuming that the terrorists use the optimum cost-efficient attack intensity, CPI has the largest cost-based information value.

  8. Grid generation and adaptation via Monge-Kantorovich optimization in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Delzanno, Gian Luca; Chacon, Luis; Finn, John M.

    2008-11-01

    In a recent paper [1], Monge-Kantorovich (MK) optimization was proposed as a method of grid generation/adaptation in two dimensions (2D). The method is based on the minimization of the L2 norm of grid point displacement, constrained to producing a given positive-definite cell volume distribution (equidistribution constraint). The procedure gives rise to the Monge-Ampère (MA) equation: a single, non-linear scalar equation with no free parameters. The MA equation was solved in Ref. [1] with a Jacobian-free Newton-Krylov technique and several challenging test cases were presented in square domains in 2D. Here, we extend the work of Ref. [1]. We first formulate the MK approach in physical domains with curved boundary elements and in 3D. We then show the results of applying it to these more general cases. We show that MK optimization produces optimal grids in which the constraint is satisfied numerically to truncation error. [1] G. L. Delzanno, L. Chacón, J. M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, submitted to Journal of Computational Physics (2008).
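    In this formulation the adapted grid is commonly written as the gradient of a potential, x' = ∇φ(x), and the equidistribution constraint then takes the Monge-Ampère form (a generic statement under that assumption; the paper's exact notation may differ):

      \det\!\big(\nabla\nabla\phi(\mathbf{x})\big)
        \;=\; \frac{\rho_{\mathrm{old}}(\mathbf{x})}{\rho_{\mathrm{new}}\!\big(\nabla\phi(\mathbf{x})\big)},

    i.e. a single nonlinear scalar equation for the potential with no free parameters, as stated above, where the densities ρ prescribe the old and desired cell-volume distributions.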

  9. Inference of turbulence parameters from a ROMS simulation using the k-ε closure scheme

    NASA Astrophysics Data System (ADS)

    Thyng, Kristen M.; Riley, James J.; Thomson, Jim

    2013-12-01

    Comparisons between high resolution turbulence data from Admiralty Inlet, WA (USA), and a 65-meter horizontal grid resolution simulation using the hydrostatic ocean modelling code, Regional Ocean Modeling System (ROMS), show that the model's k-ε turbulence closure scheme performs reasonably well. Turbulent dissipation rates and Reynolds stresses agree within a factor of two, on average. Turbulent kinetic energy (TKE) also agrees within a factor of two, but only for motions within the observed inertial sub-range of frequencies (i.e., classic approximately isotropic turbulence). TKE spectra from the observations indicate that there is significant energy at lower frequencies than the inertial sub-range; these scales are not captured by the model closure scheme nor the model grid resolution. To account for scales not present in the model, the inertial sub-range is extrapolated to lower frequencies and then integrated to obtain an inferred, diagnostic total TKE, with improved agreement with the observed total TKE. The realistic behavior of the dissipation rate and Reynolds stress, combined with the adjusted total TKE, imply that ROMS simulations can be used to understand and predict spatial and temporal variations in turbulence. The results are suggested for application to siting tidal current turbines.
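    A sketch of the diagnostic-TKE idea with synthetic numbers: an observed -5/3 inertial sub-range is extrapolated to lower frequencies that the closure scheme and grid resolution do not represent, and the spectrum is integrated in closed form to infer a total TKE. The spectral level and band edges are assumptions for illustration.

      # Infer a "total" TKE by extrapolating a -5/3 inertial sub-range,
      # S(f) = S0 * f**(-5/3), to lower frequencies and integrating.
      S0 = 1e-3                      # spectral level at 1 Hz, m^2 s^-2 / Hz
      f_low_obs, f_high = 0.1, 2.0   # Hz, observed inertial sub-range
      f_extrap = 0.01                # Hz, lower limit of the extrapolation

      def band_tke(f1, f2):
          # closed-form integral of S0 * f**(-5/3) between f1 and f2
          return 1.5 * S0 * (f1 ** (-2.0 / 3.0) - f2 ** (-2.0 / 3.0))

      tke_subrange = band_tke(f_low_obs, f_high)   # what the closure "sees"
      tke_inferred = band_tke(f_extrap, f_high)    # extrapolated total
      print(f"inertial-range TKE {tke_subrange:.4f} m^2/s^2, "
            f"inferred total {tke_inferred:.4f} m^2/s^2")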

  10. Sub-grid scale combustion models for large eddy simulation of unsteady premixed flame propagation around obstacles.

    PubMed

    Di Sarli, Valeria; Di Benedetto, Almerinda; Russo, Gennaro

    2010-08-15

    In this work, an assessment of different sub-grid scale (sgs) combustion models proposed for large eddy simulation (LES) of steady turbulent premixed combustion (Colin et al., Phys. Fluids 12 (2000) 1843-1863; Flohr and Pitsch, Proc. CTR Summer Program, 2000, pp. 61-82; Kim and Menon, Combust. Sci. Technol. 160 (2000) 119-150; Charlette et al., Combust. Flame 131 (2002) 159-180; Pitsch and Duchamp de Lageneste, Proc. Combust. Inst. 29 (2002) 2001-2008) was performed to identify the model that best predicts unsteady flame propagation in gas explosions. Numerical results were compared to the experimental data of Patel et al. (Proc. Combust. Inst. 29 (2002) 1849-1854) for a premixed deflagrating flame in a vented chamber in the presence of three sequential obstacles. It is found that all sgs combustion models are able to reproduce the experiment qualitatively in terms of the steps of flame acceleration and deceleration around each obstacle and the shape of the propagating flame. Without adjusting any constants or parameters, the sgs model of Charlette et al. also provides satisfactory quantitative predictions for flame speed and pressure peak. Conversely, the sgs combustion models other than that of Charlette et al. give correct predictions only after an ad hoc tuning of constants and parameters.

  11. Assessment of zero-equation SGS models for simulating indoor environment

    NASA Astrophysics Data System (ADS)

    Taghinia, Javad; Rahman, Md Mizanur; Tse, Tim K. T.

    2016-12-01

    The understanding of air-flow in enclosed spaces plays a key role in designing ventilation systems and indoor environments. The computational fluid dynamics aspects dictate that large eddy simulation (LES) offers a subtle means to analyze complex flows with recirculation and streamline curvature effects, providing more robust and accurate details than those of Reynolds-averaged Navier-Stokes simulations. This work assesses the performance of two zero-equation sub-grid scale models: the Rahman-Agarwal-Siikonen-Taghinia (RAST) model with a single grid-filter and the dynamic Smagorinsky model with grid-filter and test-filter scales. This in turn allows a cross-comparison of the effect of two different LES methods in simulating indoor air-flows with forced and mixed (natural + forced) convection. A better performance against experiments is indicated with the RAST model in wall-bounded non-equilibrium indoor air-flows; this is due to its sensitivity toward both the shear and vorticity parameters.

  12. Stochastic analysis of experimentally determined physical parameters of HPMC:NiCl{sub 2} polymer composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thejas, Urs G.; Somashekar, R., E-mail: rs@physics.uni-mysore.ac.in; Sangappa, Y.

    A stochastic approach to explain the variation of physical parameters in polymer composites is discussed in this study. We have given a statistical model to derive the characteristic variation of physical parameters as a function of dopant concentration. Results of X-ray diffraction and conductivity studies have been used to validate this function, which can be extended to any physical parameter and polymer composite. For this study we have considered polymer composites of HPMC doped with various concentrations of nickel chloride.

  13. Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Moroz, I.; Palmer, T.

    2015-12-01

    It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub-grid scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
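    A sketch contrasting the two perturbed-parameter schemes described above, assuming a Gaussian parameter uncertainty and an AR(1) process for the stochastically varying case; the numbers are illustrative and this is not the ECMWF implementation.

      # (a) fixed perturbed parameter: one draw per member, constant in time;
      # (b) stochastically varying: the parameter evolves as an AR(1) process
      #     around its mean during the forecast.
      import numpy as np

      rng = np.random.default_rng(9)
      n_members, n_steps = 5, 48
      mean, spread = 1.0, 0.2      # assumed parameter uncertainty (e.g. from EPPES)

      fixed = np.repeat(rng.normal(mean, spread, n_members)[:, None],
                        n_steps, axis=1)

      phi = 0.95                   # AR(1) memory
      varying = np.empty((n_members, n_steps))
      varying[:, 0] = rng.normal(mean, spread, n_members)
      for t in range(1, n_steps):
          innovation = rng.normal(0.0, spread * np.sqrt(1 - phi ** 2), n_members)
          varying[:, t] = mean + phi * (varying[:, t - 1] - mean) + innovation

      print("fixed   per-member std:", fixed.std(axis=1).round(3))    # zeros
      print("varying per-member std:", varying.std(axis=1).round(3))  # ~spread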

  14. NWP model forecast skill optimization via closure parameter variations

    NASA Astrophysics Data System (ADS)

    Järvinen, H.; Ollinaho, P.; Laine, M.; Solonen, A.; Haario, H.

    2012-04-01

    We present results of a novel approach to tune the predictive skill of numerical weather prediction (NWP) models. These models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. The current practice is to specify the numerical parameter values manually, based on expert knowledge. We recently developed a concept and method (QJRMS 2011) for on-line estimation of the NWP model parameters via closure parameter variations. The method, called EPPES ("Ensemble prediction and parameter estimation system"), utilizes the ensemble prediction infrastructure for parameter estimation in a very cost-effective way: practically no new computations are introduced. The approach provides an algorithmic decision making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating an ensemble of predictions so that each member uses different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In this presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to improved forecast skill. Second, results with an ensemble prediction system emulator, based on the ECHAM5 atmospheric GCM, show that the model tuning capability of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results of EPPES in the context of the ECMWF forecasting system are presented.

  15. Thin films of aluminum nitride and aluminum gallium nitride for cold cathode applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sowers, A.T.; Christman, J.A.; Bremser, M.D.

    1997-10-01

    Cold cathode structures have been fabricated using AlN and graded AlGaN structures (deposited on n-type 6H-SiC) as the thin film emitting layer. The cathodes consist of an aluminum grid layer separated from the nitride layer by a SiO{sub 2} layer and etched to form arrays of either 1, 3, or 5 {mu}m holes through which the emitting nitride surface is exposed. After fabrication, a hydrogen plasma exposure was employed to activate the cathodes. Cathode devices with 5 {mu}m holes displayed emission for up to 30 min before failing. Maximum emission currents ranged from 10–100 nA and required grid voltages ranging from 20–110 V. The grid currents were typically 1 to 10{sup 4} times the collector currents.

  16. Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods such as finite-difference, and finite-volume for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction is reduced to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be pre-determined once for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to degree of precision five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter will be determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions. The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.

  17. An approach to the parametric design of ion thrusters

    NASA Technical Reports Server (NTRS)

    Wilbur, Paul J.; Beattie, John R.; Hyman, Jay, Jr.

    1988-01-01

    A methodology that can be used to determine which of several physical constraints can limit ion thruster power and thrust, under various design and operating conditions, is presented. The methodology is exercised to demonstrate typical limitations imposed by grid system span-to-gap ratio, intragrid electric field, discharge chamber power per unit beam area, screen grid lifetime, and accelerator grid lifetime constraints. Limitations on power and thrust for a thruster defined by typical discharge chamber and grid system parameters when it is operated at maximum thrust-to-power are discussed. It is pointed out that other operational objectives such as optimization of payload fraction or mission duration can be substituted for the thrust-to-power objective and that the methodology can be used as a tool for mission analysis.

  18. Photochemical grid model performance with varying horizontal grid resolution and sub-grid plume treatment for the Martins Creek near-field SO2 study

    NASA Astrophysics Data System (ADS)

    Baker, Kirk R.; Hawkins, Andy; Kelly, James T.

    2014-12-01

    Near-source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increases. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.
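
    A small sketch of the unpaired evaluation described above, in which the full distributions of predictions and observations are compared quantile by quantile rather than paired in space and time (the arrays below are placeholders, not the study's data):

        import numpy as np

        def unpaired_quantile_compare(model_so2, obs_so2, q=np.linspace(0.05, 0.95, 19)):
            # Compare distributions without pairing in time and space,
            # as in typical permit-style evaluations.
            return np.quantile(model_so2, q), np.quantile(obs_so2, q)

        rng = np.random.default_rng(2)
        model = rng.lognormal(mean=0.0, sigma=1.0, size=8760)   # hourly model estimates (placeholder)
        obs = rng.lognormal(mean=0.1, sigma=0.9, size=8760)     # hourly observations (placeholder)
        mq, oq = unpaired_quantile_compare(model, obs)
        print(np.round(mq / oq, 2))   # ratio of modeled to observed quantiles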

  19. Grid of Supergiant B[e] Models from HDUST Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Carciofi, A. C.

    2012-12-01

    By using the Monte Carlo radiative transfer code HDUST (developed by A. C. Carciofi and J. E. Bjorkman) we have built a grid of models for stars presenting the B[e] phenomenon and a bimodal outflowing envelope. The models are particularly adapted to the study of B[e] supergiants and FS CMa type stars. The adopted physical parameters of the calculated models make the grid well adapted to interpret high angular and high spectral resolution observations, in particular spectro-interferometric data from ESO-VLTI instruments AMBER (near-IR at low and medium spectral resolution) and MIDI (mid-IR at low spectral resolution). The grid models include, for example, a central B star with different effective temperatures, and a gas (hydrogen) and silicate dust circumstellar envelope with a bimodal mass loss, with dust present in the denser equatorial regions. The HDUST grid models were pre-calculated using the high performance parallel computing facility Mésocentre SIGAMM, located at OCA, France.

  20. A Priori Analyses of Three Subgrid-Scale Models for One-Parameter Families of Filters

    NASA Technical Reports Server (NTRS)

    Pruett, C. David; Adams, Nikolaus A.

    1998-01-01

    The decay of isotropic turbulence in a compressible flow is examined by direct numerical simulation (DNS). A priori analyses of the DNS data are then performed to evaluate three subgrid-scale (SGS) models for large-eddy simulation (LES): a generalized Smagorinsky model (M1), a stress-similarity model (M2), and a gradient model (M3). The models exploit one-parameter second- or fourth-order filters of Pade type, which permit the cutoff wavenumber k(sub c) to be tuned independently of the grid increment (delta)x. The modeled (M) and exact (E) SGS-stresses are compared component-wise by correlation coefficients of the form C(E,M) computed over the entire three-dimensional fields. In general, M1 correlates poorly against exact stresses (C < 0.2), M3 correlates moderately well (C approx. 0.6), and M2 correlates remarkably well (0.8 < C < 1.0). Specifically, correlations C(E, M2) are high provided the grid and test filters are of the same order. Moreover, the highest correlations (C approx.= 1.0) result whenever the grid and test filters are identical (in both order and cutoff). Finally, present results reveal the exact SGS stresses obtained by grid filters of differing orders to be only moderately well correlated. Thus, in LES the model should not be specified independently of the filter.
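
    The component-wise correlation C(E, M) can be computed over the full three-dimensional fields as in the short sketch below; the random arrays stand in for one exact and one modeled SGS stress component:

        import numpy as np

        def sgs_correlation(exact, modeled):
            # Correlation coefficient C(E, M) between one exact and one modeled
            # SGS stress component, computed over the whole 3-D field.
            e = exact - exact.mean()
            m = modeled - modeled.mean()
            return float((e * m).sum() / np.sqrt((e * e).sum() * (m * m).sum()))

        rng = np.random.default_rng(3)
        tau_exact = rng.normal(size=(64, 64, 64))               # placeholder field
        tau_model = 0.8 * tau_exact + 0.2 * rng.normal(size=(64, 64, 64))
        print(sgs_correlation(tau_exact, tau_model))            # close to 0.97 here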

  1. B{sub K}-parameter from N{sub f}=2 twisted mass lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantinou, M.; Panagopoulos, H.; Skouroupathis, A.

    2011-01-01

    We present an unquenched N{sub f}=2 lattice computation of the B{sub K} parameter which controls K{sup 0}-K{sup 0} oscillations. A partially quenched setup is employed with two maximally twisted dynamical (sea) light Wilson quarks, and valence quarks of both the maximally twisted and the Osterwalder-Seiler variety. Suitable combinations of these two kinds of valence quarks lead to a lattice definition of the B{sub K} parameter which is both multiplicatively renormalizable and O(a) improved. Employing the nonperturbative RI-MOM scheme, in the continuum limit and at the physical value of the pion mass we get B{sub K}{sup RGI}=0.729{+-}0.030, a number well in line with the existing quenched and unquenched determinations.

  2. Comparative analysis of existing models for power-grid synchronization

    NASA Astrophysics Data System (ADS)

    Nishikawa, Takashi; Motter, Adilson E.

    2015-01-01

    The dynamics of power-grid networks is becoming an increasingly active area of research within the physics and network science communities. The results from such studies are typically insightful and illustrative, but are often based on simplifying assumptions that can be either difficult to assess or not fully justified for realistic applications. Here we perform a comprehensive comparative analysis of three leading models recently used to study synchronization dynamics in power-grid networks—a fundamental problem of practical significance given that frequency synchronization of all power generators in the same interconnection is a necessary condition for a power grid to operate. We show that each of these models can be derived from first principles within a common framework based on the classical model of a generator, thereby clarifying all assumptions involved. This framework allows us to view power grids as complex networks of coupled second-order phase oscillators with both forcing and damping terms. Using simple illustrative examples, test systems, and real power-grid datasets, we study the inherent frequencies of the oscillators as well as their coupling structure, comparing across the different models. We demonstrate, in particular, that if the network structure is not homogeneous, generators with identical parameters need to be modeled as non-identical oscillators in general. We also discuss an approach to estimate the required (dynamical) system parameters that are unavailable in typical power-grid datasets, their use for computing the constants of each of the three models, and an open-source MATLAB toolbox that we provide for these computations.
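
    The common framework described above treats the grid as a network of coupled second-order phase oscillators with forcing and damping terms. The sketch below integrates such a model with a simple explicit scheme; the network, injections, and coefficients are arbitrary placeholders rather than a real power-grid dataset, and the parameter conventions of the three compared formulations are not reproduced.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 10                                        # oscillators (generators/loads)
        K = 5.0 * (rng.random((n, n)) < 0.4)          # placeholder coupling strengths
        K = np.triu(K, 1); K = K + K.T                # symmetric, no self-coupling
        P = rng.normal(size=n); P -= P.mean()         # net power injections, balanced
        D = 0.5                                       # damping coefficient

        theta = rng.uniform(-0.1, 0.1, n)             # phase angles
        omega = np.zeros(n)                           # frequency deviations
        dt, steps = 1e-3, 20000

        for _ in range(steps):
            # theta_i'' = P_i - D*theta_i' - sum_j K_ij * sin(theta_i - theta_j)
            coupling = (K * np.sin(theta[:, None] - theta[None, :])).sum(axis=1)
            domega = P - D * omega - coupling
            omega += dt * domega
            theta += dt * omega

        print("max frequency deviation:", np.abs(omega).max())  # small if synchronized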

  3. Evolution of aerosol downwind of a major highway

    NASA Astrophysics Data System (ADS)

    Liggio, J.; Staebler, R. M.; Brook, J.; Li, S.; Vlasenko, A. L.; Sjostedt, S. J.; Gordon, M.; Makar, P.; Mihele, C.; Evans, G. J.; Jeong, C.; Wentzell, J. J.; Lu, G.; Lee, P.

    2010-12-01

    Primary aerosol from traffic emissions can have a considerable impact on local and regional scale air quality. In order to assess the effect of these emissions and of future emissions scenarios, air quality models are required which utilize emissions representative of real world conditions. Often, the emissions processing systems which provide emissions input for the air quality models rely on laboratory testing of individual vehicles under non-ambient conditions. However, on the sub-grid scale particle evolution may lead to changes in the primary emitted size distribution and gas-particle partitioning that are not properly considered when the emissions are ‘instantly mixed’ within the grid volume. The effect of this modeling convention on model results is not well understood. In particular, changes in organic gas/particle partitioning may result in particle evaporation or condensation onto pre-existing aerosol. The result is a change in the particle distribution and/or an increase in the organic mass available for subsequent gas-phase oxidation. These effects may be missing from air-quality models, and a careful analysis of field data is necessary to quantify their impact. A study of the sub-grid evolution of aerosols (FEVER; Fast Evolution of Vehicle Emissions from Roadways) was conducted in the Toronto area in the summer of 2010. The study included mobile measurements of particle size distributions with a Fast mobility particle sizer (FMPS), aerosol composition with an Aerodyne aerosol mass spectrometer (AMS), black carbon (SP2, PA, LII), VOCs (PTR-MS) and other trace gases. The mobile laboratory was used to measure the concentration gradient of the emissions at perpendicular distances from the highway as well as the physical and chemical evolution of the aerosol. Stationary sites at perpendicular distances and upwind from the highway also monitored the particle size distribution. In addition, sonic anemometers mounted on the mobile lab provided measurements of turbulent dispersion as a function of distance from the highway, and a traffic camera was used to determine traffic density, composition and speed. These measurements differ from previous studies in that turbulence is measured under realistic conditions and hence the relationship of the aerosol evolution to atmospheric stability and mixing will also be quantified. Preliminary results suggest that aerosol size and composition do change on the sub-grid scale, and sub-grid scale parameterizations of turbulence and particle chemistry should be included in models to accurately represent these effects.

  4. B decays in an asymmetric left-right model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, Mariana; Hayreter, Alper; Turan, Ismail

    2010-08-01

    Motivated by recently observed disagreements with the standard model predictions in B decays, we study b{yields}d, s transitions in an asymmetric class of SU(2){sub L}xSU(2){sub R}xU(1){sub B-L} models, with a simple one-parameter structure of the right-handed mixing matrix for the quarks, which obeys the constraints from kaon physics. We use experimental constraints on the branching ratios of b{yields}s{gamma}, b{yields}ce{nu}{sub e}, and B{sub d,s}{sup 0}-B{sub d,s}{sup 0} mixing to restrict the parameters of the model: g{sub R}/g{sub L}, M{sub W{sub 2}}, M{sub H}{sup {+-}}, tan{beta} as well as the elements of the right-handed quark mixing matrix V{sub CKM}{sup R}. We present a comparison with the more commonly used (manifest) left-right symmetric model. Our analysis exposes the parameters most sensitive to b transitions and reveals a large parameter space where left- and right-handed quarks mix differently, opening the possibility of observing marked differences in behavior between the standard model and the left-right model.

  5. An overview of controls research on the NASA Langley Research Center grid

    NASA Technical Reports Server (NTRS)

    Montgomery, Raymond C.

    1987-01-01

    The NASA Langley Research Center has assembled a flexible grid on which control systems research can be accomplished on a two-dimensional structure that has many physically distributed sensors and actuators. The grid is a rectangular planar structure that is suspended by two cables attached to one edge so that out of plane vibrations are normal to gravity. There are six torque wheel actuators mounted to it so that torque is produced in the grid plane. Also, there are six rate gyros mounted to sense angular motion in the grid plane and eight accelerometers that measure linear acceleration normal to the grid plane. All components can be relocated to meet specific control system test requirements. Digital, analog, and hybrid control systems capability is provided in the apparatus. To date, research on this grid has been conducted in the areas of system and parameter identification, model estimation, distributed modal control, hierarchical adaptive control, and advanced redundancy management algorithms. The presentation overviews each technique and presents the most significant results generated for each area.

  6. CFD modeling of turbulent mixing through vertical pressure tube type boiling water reactor fuel rod bundles with spacer-grids

    NASA Astrophysics Data System (ADS)

    Verma, Shashi Kant; Sinha, S. L.; Chandraker, D. K.

    2018-05-01

    Numerical simulation has been carried out to study the natural mixing of a tracer (passive scalar), describing the development of turbulent diffusion in an injected sub-channel and, afterwards, cross-mixing between adjacent sub-channels. In this investigation, a post-benchmark evaluation of inter-subchannel mixing was initiated to test the ability of state-of-the-art Computational Fluid Dynamics (CFD) codes to numerically predict the important turbulence parameters downstream of a ring-type spacer grid in a rod bundle. A three-dimensional CFD tool (STAR-CCM+) was used to model the single-phase flow through a 30° segment, or 1/12th of the cross section, of a 54-rod bundle with a ring-shaped spacer grid. Polyhedrons were used to discretize the computational domain, along with prismatic cells near the walls, with an overall mesh count of 5.2 M cell volumes. The Reynolds Stress Model (RSM) was tested, because it accounts for turbulence anisotropy, to assess its capability in predicting the velocities as well as the mass fraction of potassium nitrate measured in the experiment. Line probes were located at different positions in the subchannels to characterize the progress of the mixing along the flow direction, and the degree of cross-mixing was assessed using the quantity of tracer arriving in the neighbouring sub-channels. The predicted dimensionless mixing scalar along the length, however, was in good agreement with the measurements downstream of the spacers.

  7. Hawking-Moss instanton in nonlinear massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Ying-li; Saito, Ryo; Sasaki, Misao, E-mail: yingli@yukawa.kyoto-u.ac.jp, E-mail: rsaito@yukawa.kyoto-u.ac.jp, E-mail: misao@yukawa.kyoto-u.ac.jp

    2013-02-01

    As a first step toward understanding a landscape of vacua in a theory of non-linear massive gravity, we consider a landscape of a single scalar field and study tunneling between a pair of adjacent vacua. We study the Hawking-Moss (HM) instanton that sits at a local maximum of the potential, and evaluate the dependence of the tunneling rate on the parameters of the theory. It is found that, for the same physical HM Hubble parameter H{sub HM} and depending on the values of the parameters α{sub 3} and α{sub 4} in the action (2.2), the corresponding tunneling rate can be either enhanced or suppressed when compared to the one in the context of General Relativity (GR). Furthermore, we find a constraint on the ratio of the physical Hubble parameter to the fiducial one, which constrains the form of the potential. This result is in sharp contrast to GR, where there is no bound on the minimum value of the potential.
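
    For reference, the comparison baseline is the standard General Relativity Hawking-Moss result, which takes the schematic form below (quoted here only as the usual GR expression, not the massive-gravity rate derived in the paper):

        \Gamma_{\rm HM} \propto e^{-B}, \qquad
        B_{\rm GR} = 24\pi^{2} M_{\rm Pl}^{4}
        \left[\frac{1}{V(\phi_{\rm false})} - \frac{1}{V(\phi_{\rm top})}\right],

    so that in GR the rate is controlled entirely by the potential at the false vacuum and at the local maximum, with no dependence on extra graviton-potential parameters.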

  8. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size - A coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.

  9. The Herschel view of GAS in Protoplanetary Systems (GASPS). First comparisons with a large grid of models

    NASA Astrophysics Data System (ADS)

    Pinte, C.; Woitke, P.; Ménard, F.; Duchêne, G.; Kamp, I.; Meeus, G.; Mathews, G.; Howard, C. D.; Grady, C. A.; Thi, W.-F.; Tilling, I.; Augereau, J.-C.; Dent, W. R. F.; Alacid, J. M.; Andrews, S.; Ardila, D. R.; Aresu, G.; Barrado, D.; Brittain, S.; Ciardi, D. R.; Danchi, W.; Eiroa, C.; Fedele, D.; de Gregorio-Monsalvo, I.; Heras, A.; Huelamo, N.; Krivov, A.; Lebreton, J.; Liseau, R.; Martin-Zaïdi, C.; Mendigutía, I.; Montesinos, B.; Mora, A.; Morales-Calderon, M.; Nomura, H.; Pantin, E.; Pascucci, I.; Phillips, N.; Podio, L.; Poelman, D. R.; Ramsay, S.; Riaz, B.; Rice, K.; Riviere-Marichalar, P.; Roberge, A.; Sandell, G.; Solano, E.; Vandenbussche, B.; Walker, H.; Williams, J. P.; White, G. J.; Wright, G.

    2010-07-01

    The Herschel GASPS key program is a survey of the gas phase of protoplanetary discs, targeting 240 objects which cover a large range of ages, spectral types, and disc properties. To interpret this large quantity of data and initiate self-consistent analyses of the gas and dust properties of protoplanetary discs, we have combined the capabilities of the radiative transfer code MCFOST with the gas thermal balance and chemistry code ProDiMo to compute a grid of ≈300 000 disc models (DENT). We present a comparison of the first Herschel/GASPS line and continuum data with the predictions from the DENT grid of models. Our objective is to test some of the main trends already identified in the DENT grid, as well as to define better empirical diagnostics to estimate the total gas mass of protoplanetary discs. Photospheric UV radiation appears to be the dominant gas-heating mechanism for Herbig stars, whereas UV excess and/or X-ray emission dominates for T Tauri stars. The DENT grid reveals the complexity in the analysis of far-IR lines and the difficulty of inverting these observations into physical quantities. The combination of Herschel line observations with continuum data and/or with rotational lines in the (sub-)millimetre regime, in particular CO lines, is required for a detailed characterisation of the physical and chemical properties of circumstellar discs. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

  10. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.

    Purpose: A medical linear accelerator mounted cone beam CT (CBCT) scanner provides useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASG) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing grids with higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had a ratio of 21:1, frequency 36 lp/cm, and nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction parameters, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t{sub cup} for nonuniformity and contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. Results: The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved compared to no corrections on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding additional software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid is used. Conclusions: The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.
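
    A schematic sketch of an angle-dependent gain correction of the kind described above, in which flood-field gain maps acquired at several gantry angles are interpolated to the current projection angle and divided out. The data layout, calibration set, and linear interpolation below are assumptions for illustration, not the authors' algorithm.

        import numpy as np

        def correct_projection(raw, angle, calib_angles, gain_maps, eps=1e-6):
            # Interpolate between the two nearest calibration angles and divide the
            # projection by the resulting gain map, suppressing flex-dependent
            # grid-line (ring) artifacts.
            order = np.argsort(np.abs(calib_angles - angle))[:2]
            a0, a1 = calib_angles[order[0]], calib_angles[order[1]]
            w = 0.0 if a0 == a1 else (angle - a0) / (a1 - a0)
            gain = (1.0 - w) * gain_maps[order[0]] + w * gain_maps[order[1]]
            return raw / np.clip(gain, eps, None)

        rng = np.random.default_rng(5)
        calib_angles = np.arange(0.0, 360.0, 10.0)                    # placeholder calibration set
        gain_maps = 1.0 + 0.05 * rng.normal(size=(calib_angles.size, 256, 256))
        projection = rng.random((256, 256))
        corrected = correct_projection(projection, 37.0, calib_angles, gain_maps)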

  11. Navier-Stokes simulation of rotor-body flowfield in hover using overset grids

    NASA Technical Reports Server (NTRS)

    Srinivasan, G. R.; Ahmad, J. U.

    1993-01-01

    A free-wake Navier-Stokes numerical scheme and multiple Chimera overset grids have been utilized for calculating the quasi-steady hovering flowfield of a Boeing-360 rotor mounted on an axisymmetric whirl-tower. The entire geometry of this rotor-body configuration is gridded-up with eleven different overset grids. The composite grid has 1.3 million grid points for the entire flow domain. The numerical results, obtained using coarse grids and a rigid rotor assumption, show a thrust value that is within 5% of the experimental value at a flow condition of M(sub tip) = 0.63, Theta(sub c) = 8 deg, and Re = 2.5 x 10(exp 6). The numerical method thus demonstrates the feasibility of using a multi-block scheme for calculating the flowfields of complex configurations consisting of rotating and non-rotating components.

  12. Integrating Unified Gravity Wave Physics into the NOAA Next Generation Global Prediction System

    NASA Astrophysics Data System (ADS)

    Alpert, J. C.; Yudin, V.; Fuller-Rowell, T. J.; Akmaev, R. A.

    2017-12-01

    The Unified Gravity Wave Physics (UGWP) project for the Next Generation Global Prediction System (NGGPS) is a NOAA collaborative effort between the National Centers for Environmental Prediction (NCEP), Environmental Modeling Center (EMC) and the University of Colorado, Cooperative Institute for Research in Environmental Sciences (CU-CIRES) to support upgrades and improvements of GW dynamics (resolved scales) and physics (sub-grid scales) in the NOAA Environmental Modeling System (NEMS)†. As envisioned, the global climate, weather and space weather models of NEMS will substantially improve their predictions and forecasts with the resolution-sensitive (scale-aware) formulations planned under the UGWP framework for both orographic and non-stationary waves. In particular, the planned improvements for the Global Forecast System (GFS) model of NEMS are: calibration of model physics for higher vertical and horizontal resolution and an extended vertical range of simulations, upgrades to GW schemes, including the turbulent heating and eddy mixing due to wave dissipation and breaking, and representation of the internally-generated QBO. The main priority of the UGWP project is unified parameterization of orographic and non-orographic GW effects including momentum deposition in the middle atmosphere and turbulent heating and eddies due to wave dissipation and breaking. The latter effects are not currently represented in NOAA atmosphere models. The team has tested and evaluated four candidate GW solvers integrating the selected GW schemes into the NGGPS model. Our current work and planned activity are to implement the UGWP schemes in the first available GFS/FV3 (open FV3) configuration including the adapted GFDL modification for sub-grid orography in GFS. Initial global model results will be shown for the operational and research GFS configuration for spectral and FV3 dynamical cores. †http://www.emc.ncep.noaa.gov/index.php?branch=NEMS

  13. The swiss army knife of job submission tools: grid-control

    NASA Astrophysics Data System (ADS)

    Stober, F.; Fischer, M.; Schleper, P.; Stadie, H.; Garbers, C.; Lange, J.; Kovalchuk, N.

    2017-10-01

    grid-control is a lightweight and highly portable open source submission tool that supports all common workflows in high energy physics (HEP). It has been used by a sizeable number of HEP analyses to process tasks that sometimes consist of up to 100k jobs. grid-control is built around a powerful plugin and configuration system, that allows users to easily specify all aspects of the desired workflow. Job submission to a wide range of local or remote batch systems or grid middleware is supported. Tasks can be conveniently specified through the parameter space that will be processed, which can consist of any number of variables and data sources with complex dependencies on each other. Dataset information is processed through a configurable pipeline of dataset filters, partition plugins and partition filters. The partition plugins can take the number of files, size of the work units, metadata or combinations thereof into account. All changes to the input datasets or variables are propagated through the processing pipeline and can transparently trigger adjustments to the parameter space and the job submission. While the core functionality is completely experiment independent, full integration with the CMS computing environment is provided by a small set of plugins.

  14. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Jie; Krems, Roman V.; Li, Zhiying

    2015-10-21

    We consider a problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties for another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the −H →−X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C{sub 6}H{sub 5}CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He — C{sub 6}H{sub 6} collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C{sub 6}H{sub 5}CN with He.
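
    A compact sketch of the Gaussian Process step described above: fit the dependence of an observable on unknown potential parameters, then propagate a physically reasonable parameter range into a prediction interval. The training data are synthetic placeholders rather than the classical-trajectory cross sections of the paper, and scikit-learn's GaussianProcessRegressor stands in for the authors' implementation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        rng = np.random.default_rng(6)

        # Synthetic training set: 3 unknown potential parameters -> one observable.
        X_train = rng.uniform(0.0, 1.0, size=(40, 3))
        y_train = (1.0 + 0.5 * X_train[:, 0] - 0.3 * X_train[:, 1] ** 2
                   + 0.1 * np.sin(6.0 * X_train[:, 2]) + 0.01 * rng.normal(size=40))

        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3] * 3),
                                      normalize_y=True)
        gp.fit(X_train, y_train)

        # Vary the parameters within a "physically reasonable" box and collect the
        # spread of predictions as the uncertainty interval on the observable.
        X_query = rng.uniform(0.2, 0.8, size=(2000, 3))
        mean, std = gp.predict(X_query, return_std=True)
        print("prediction interval:", mean.min(), "to", mean.max())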

  15. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantry, Sonny; Petriello, Frank

    We derive a factorization theorem for the Higgs boson transverse momentum (p{sub T}) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m{sub h}>>p{sub T}>>{Lambda}{sub QCD}, where m{sub h} denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p{sub T} scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p{sub T}-scale physics simplifies the implementation of higher order radiative corrections in {alpha}{sub s}(p{sub T}). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p{sub T}/m{sub h} and {Lambda}{sub QCD}/p{sub T} can be systematically derived. We perform multiple consistency checks on our factorization theorem including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p{sub T} resummation.

  16. Testing the skill of numerical hydraulic modeling to simulate spatiotemporal flooding patterns in the Logone floodplain, Cameroon

    NASA Astrophysics Data System (ADS)

    Fernández, Alfonso; Najafi, Mohammad Reza; Durand, Michael; Mark, Bryan G.; Moritz, Mark; Jung, Hahn Chul; Neal, Jeffrey; Shastry, Apoorva; Laborde, Sarah; Phang, Sui Chian; Hamilton, Ian M.; Xiao, Ningchuan

    2016-08-01

    Recent innovations in hydraulic modeling have enabled global simulation of rivers, including simulation of their coupled wetlands and floodplains. Accurate simulations of floodplains using these approaches may imply tremendous advances in global hydrologic studies and in biogeochemical cycling. One such innovation is to explicitly treat sub-grid channels within two-dimensional models, given only remotely sensed data in areas with limited data availability. However, predicting inundated area in floodplains using a sub-grid model has not been rigorously validated. In this study, we applied the LISFLOOD-FP hydraulic model using a sub-grid channel parameterization to simulate inundation dynamics on the Logone River floodplain, in northern Cameroon, from 2001 to 2007. Our goal was to determine whether floodplain dynamics could be simulated with sufficient accuracy to understand human and natural contributions to current and future inundation patterns. Model inputs in this data-sparse region include in situ river discharge, satellite-derived rainfall, and the shuttle radar topography mission (SRTM) floodplain elevation. We found that the model accurately simulated total floodplain inundation, with a Pearson correlation coefficient greater than 0.9, and RMSE less than 700 km2, compared to peak inundation greater than 6000 km2. Predicted discharge downstream of the floodplain matched measurements (Nash-Sutcliffe efficiency of 0.81), and indicated that net flow from the channel to the floodplain was modeled accurately. However, the spatial pattern of inundation was not well simulated, apparently due to uncertainties in SRTM elevations. We evaluated model results at 250, 500 and 1000-m spatial resolutions, and found that results are insensitive to spatial resolution. We also compared the model output against results from a run of LISFLOOD-FP in which the sub-grid channel parameterization was disabled, finding that the sub-grid parameterization simulated more realistic dynamics. These results suggest that analysis of global inundation is feasible using a sub-grid model, but that spatial patterns at sub-kilometer resolutions still need to be adequately predicted.
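
    The skill scores quoted above (Pearson correlation and Nash-Sutcliffe efficiency) can be computed directly from observed and simulated series; a minimal sketch with placeholder discharge data:

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def pearson_r(a, b):
            return float(np.corrcoef(a, b)[0, 1])

        rng = np.random.default_rng(7)
        obs = 1000.0 + 500.0 * np.sin(np.linspace(0, 14 * np.pi, 2557))   # placeholder discharge
        sim = obs + rng.normal(scale=150.0, size=obs.size)                # placeholder simulation
        print("NSE =", round(nash_sutcliffe(obs, sim), 2), " r =", round(pearson_r(obs, sim), 2))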

  17. On the Subgrid-Scale Modeling of Compressible Turbulence

    NASA Technical Reports Server (NTRS)

    Squires, Kyle; Zeman, Otto

    1990-01-01

    A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independent of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for Large-Eddy Simulation is also presented.
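
    In the incompressible limit the model reduces to the Smagorinsky eddy viscosity, nu_t = (C_s * Delta)^2 |S| with |S| = sqrt(2 S_ij S_ij). A minimal sketch of that baseline on a uniform grid follows; the compressibility contribution of the proposed model is not included, and the fields and constant are placeholders.

        import numpy as np

        def smagorinsky_viscosity(u, v, w, dx, cs=0.17):
            # Resolved strain-rate tensor S_ij on a uniform grid with spacing dx.
            du = np.gradient(u, dx); dv = np.gradient(v, dx); dw = np.gradient(w, dx)
            grad = np.array([du, dv, dw])              # grad[i][j] = d(u_i)/d(x_j)
            S = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))
            S_mag = np.sqrt(2.0 * np.sum(S * S, axis=(0, 1)))
            return (cs * dx) ** 2 * S_mag              # nu_t = (C_s * Delta)^2 |S|

        rng = np.random.default_rng(8)
        u, v, w = (rng.normal(size=(32, 32, 32)) for _ in range(3))
        nu_t = smagorinsky_viscosity(u, v, w, dx=2 * np.pi / 32)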

  18. A data-driven approach for retrieving temperatures and abundances in brown dwarf atmospheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Line, Michael R.; Fortney, Jonathan J.; Marley, Mark S.

    2014-09-20

    Brown dwarf spectra contain a wealth of information about their molecular abundances, temperature structure, and gravity. We present a new data-driven retrieval approach, previously used in planetary atmosphere studies, to extract the molecular abundances and temperature structure from brown dwarf spectra. The approach makes few a priori physical assumptions about the state of the atmosphere. The feasibility of the approach is first demonstrated on a synthetic brown dwarf spectrum. Given typical spectral resolutions, wavelength coverage, and noise, property precisions of tens of percent can be obtained for the molecular abundances and tens to hundreds of K on the temperature profile. The technique is then applied to the well-studied brown dwarf, Gl 570D. From this spectral retrieval, the spectroscopic radius is constrained to be 0.75-0.83 R {sub J}, log (g) to be 5.13-5.46, and T {sub eff} to be between 804 and 849 K. Estimates for the range of abundances and allowed temperature profiles are also derived. The results from our retrieval approach are in agreement with the self-consistent grid modeling results of Saumon et al. This new approach will allow us to address issues of compositional differences between brown dwarfs and possibly their formation environments, disequilibrium chemistry, and missing physics in current grid modeling approaches, as well as many other issues.
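
    A generic sketch of the retrieval idea: sample model parameters against an observed spectrum with a simple Metropolis-Hastings loop. The three-parameter forward model below is a deliberately crude placeholder, and the paper's radiative-transfer forward model and retrieval machinery are far more involved.

        import numpy as np

        rng = np.random.default_rng(9)
        wave = np.linspace(1.0, 2.5, 200)                     # wavelength grid (micron)

        def forward_model(theta):
            # Hypothetical "spectrum": continuum level, slope, and one feature depth.
            level, slope, depth = theta
            return level + slope * (wave - 1.0) - depth * np.exp(-0.5 * ((wave - 1.6) / 0.05) ** 2)

        true_theta = np.array([1.0, -0.2, 0.3])
        noise = 0.02
        data = forward_model(true_theta) + rng.normal(scale=noise, size=wave.size)

        def log_post(theta):
            if np.any(np.abs(theta) > 5.0):                   # flat prior box
                return -np.inf
            r = data - forward_model(theta)
            return -0.5 * np.sum(r ** 2) / noise ** 2

        theta, lp, chain = np.zeros(3), -np.inf, []
        for _ in range(20000):                                # Metropolis-Hastings steps
            prop = theta + 0.005 * rng.normal(size=3)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            chain.append(theta.copy())
        print("posterior medians:", np.round(np.median(np.array(chain)[10000:], axis=0), 2))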

  19. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  20. SUBGRID PARAMETERIZATION OF SNOW DISTRIBUTION FOR AN ENERGY AND MASS BALANCE SNOW COVER MODEL. (R824784)

    EPA Science Inventory

    Representation of sub-element scale variability in snow accumulation and ablation is increasingly recognized as important in distributed hydrologic modelling. Representing sub-grid scale variability may be accomplished through numerical integration of a nested grid or through a l...

  1. Evaluation of the Momentum Closure Schemes in MPAS-Ocean

    NASA Astrophysics Data System (ADS)

    Zhao, Shimei; Liu, Yudi; Liu, Wei

    2018-04-01

    In order to compare and evaluate the performances of the Laplacian viscosity closure, the biharmonic viscosity closure, and the Leith closure momentum schemes in the MPAS-Ocean model, a variety of physical quantities, such as the relative reference potential energy (RPE) change, the RPE time change rate (RPETCR), the grid Reynolds number, the root mean square (RMS) of kinetic energy, and the spectra of kinetic energy and enstrophy, are calculated on the basis of results of a 3D baroclinic periodic channel. Results indicate that: 1) The RPETCR demonstrates a saturation phenomenon in baroclinic eddy tests. The critical grid Reynolds number corresponding to RPETCR saturation differs between the three closures: the largest value is in the biharmonic viscosity closure, followed by that in the Laplacian viscosity closure, and that in the Leith closure is the smallest. 2) All three closures can effectively suppress spurious dianeutral mixing by reducing the grid Reynolds number under sub-saturation conditions of the RPETCR, but they can also damage certain physical processes. Generally, the damage to the rotation process is greater than that to the advection process. 3) The dissipation in the biharmonic viscosity closure is strongly dependent on scales. Most dissipation concentrates on small scales, and the energy of small-scale eddies is often transferred to large-scale kinetic energy. The viscous dissipation in the Laplacian viscosity closure is the strongest on various scales, followed by that in the Leith closure. Note that part of the small-scale kinetic energy is also transferred to large-scale kinetic energy in the Leith closure. 4) The characteristic length scale L and the dimensionless parameter D in the Leith closure are inherently coupled. The RPETCR is inversely proportional to the product of D and L. When the product of D and L is constant, both the simulated RPETCR and the inhibition of spurious dianeutral mixing are the same in all tests using the Leith closure. The dissipative scale in the Leith closure depends on the parameter L, and the dissipative intensity depends on the parameter D. 5) Although optimal results may not be achieved by using the optimal parameters obtained from the 2D barotropic model in the 3D baroclinic simulation, the total energies are dissipative in all three closures. Dissipation is the strongest in the biharmonic viscosity closure, followed by that in the Leith closure, and that in the Laplacian viscosity closure is the weakest. Mesoscale eddies develop the fastest in the biharmonic viscosity closure after the baroclinic adjustment process finishes, and the kinetic energy reaches its maximum, which is attributed to the smallest dissipation of enstrophy in the biharmonic viscosity closure. Mesoscale eddies develop the slowest, and the kinetic energy peak value is the smallest in the Laplacian viscosity closure. Results in the Leith closure are between that in the biharmonic viscosity closure and the Laplacian viscosity closure.
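
    For orientation, the grid Reynolds number referred to throughout the comparison is Re_grid = U * dx / nu. The sketch below computes it together with one common form of the Leith viscosity, nu proportional to (dx)^3 |grad(vorticity)|; the Leith coefficient, normalization, and the exact MPAS-Ocean formulations are assumptions here, not the model's code.

        import numpy as np

        def grid_reynolds(speed, dx, nu):
            # Re_grid = U * dx / nu : large values indicate under-dissipated grid scales.
            return speed * dx / nu

        def leith_viscosity(vorticity, dx, dy, coeff=1.0):
            # One common Leith form: nu = (coeff * dx)^3 * |grad(vorticity)|
            # (the exact coefficient and normalization in MPAS-Ocean may differ).
            dzdy, dzdx = np.gradient(vorticity, dy, dx)
            return (coeff * dx) ** 3 * np.sqrt(dzdx ** 2 + dzdy ** 2)

        rng = np.random.default_rng(10)
        dx = dy = 10.0e3                                  # 10 km grid (placeholder)
        vort = 1e-5 * rng.normal(size=(100, 100))         # placeholder relative vorticity field
        nu_leith = leith_viscosity(vort, dx, dy)
        print("median grid Reynolds number:",
              np.median(grid_reynolds(0.5, dx, np.clip(nu_leith, 1e-2, None))))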

  2. Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components in a climate model to simulate realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has varying grid size or simulations have to be run with different resolutions, scale-aware parameterization is desirable so that we do not have to tune model parameters tailored to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamic core where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity in high latitudes and increases it in low latitudes. This is due both to the smaller grid size in high latitudes and the larger grid size in low latitudes in the longitude-latitude grid setting of CESM, as well as to variations in the stability of the atmosphere. The single column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it in low latitudes. The current CESM1 simulation suffers from both a Pacific double-ITCZ precipitation bias and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may have the capability to alleviate such biases in a more uniform and physical way. A multiple-plume mass flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. We show that MJO simulation is sensitive to entrainment rate specification. We found that shallow plumes can generate and sustain the MJO propagation in the model.

  3. IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.

    This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price responsive load scenarios.

  4. Dependence of the source performance on plasma parameters at the BATMAN test facility

    NASA Astrophysics Data System (ADS)

    Wimmer, C.; Fantz, U.

    2015-04-01

    The investigation of the dependence of the source performance (high jH-, low je) for optimum Cs conditions on the plasma parameters at the BATMAN (Bavarian Test MAchine for Negative hydrogen ions) test facility is desirable in order to find key parameters for the operation of the source as well as to deepen the physical understanding. The most relevant source physics takes place in the extended boundary layer, which is the plasma layer with a thickness of several cm in front of the plasma grid: the production of H-, its transport through the plasma and its extraction, inevitably accompanied by the co-extraction of electrons. Hence, a link of the source performance with the plasma parameters in the extended boundary layer is expected. In order to characterize electron and negative hydrogen ion fluxes in the extended boundary layer, Cavity Ring-Down Spectroscopy and Langmuir probes have been applied for the measurement of the H- density and the determination of the plasma density, the plasma potential and the electron temperature, respectively. The plasma potential is of particular importance as it determines the sheath potential profile at the plasma grid: depending on the plasma grid bias relative to the plasma potential, a transition in the plasma sheath from an electron repelling to an electron attracting sheath takes place, influencing strongly the electron fraction of the bias current and thus the amount of co-extracted electrons. Dependencies of the source performance on the determined plasma parameters are presented for the comparison of two source pressures (0.6 Pa, 0.45 Pa) in hydrogen operation. The higher source pressure of 0.6 Pa is a standard point of operation at BATMAN with external magnets, whereas the lower pressure of 0.45 Pa is closer to the ITER requirements (p ≤ 0.3 Pa).

  5. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved field on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force, and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo model using a spherical harmonics expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference. We also perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification by comparing these simulations, and the role of the small-scale fields on the large-scale fields through the SGS terms in the LES.
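
    A minimal Cartesian sketch of the scale-similarity SGS stress with a Gaussian test filter, tau_ij ~ <u_i u_j> - <u_i><u_j>. The spherical-harmonic filtering of Jekeli (1981) used in Calypso is replaced here by scipy's Gaussian filter on a periodic box, and the velocity fields are placeholders.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def similarity_stress(ui, uj, sigma=2.0):
            # Scale-similarity estimate of one SGS stress component:
            # tau_ij ~ <u_i u_j> - <u_i><u_j>, with <.> a Gaussian filter.
            filt = lambda f: gaussian_filter(f, sigma=sigma, mode="wrap")
            return filt(ui * uj) - filt(ui) * filt(uj)

        rng = np.random.default_rng(11)
        u = rng.normal(size=(64, 64, 64))     # placeholder velocity components
        v = rng.normal(size=(64, 64, 64))
        tau_uv = similarity_stress(u, v)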

  6. A grid of MHD models for stellar mass loss and spin-down rates of solar analogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, O.; Drake, J. J.

    2014-03-01

    Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require a further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B {sup 2}. We also find that the mass loss rates obtained using our grid models drop much faster with the increase in rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength retrieves the well-known Skumanich decline of angular velocity with time, Ω{sub *}∝t {sup –1/2}, if the large-scale poloidal magnetic field scales with rotation rate as B{sub p}∝Ω{sub ⋆}{sup 2}.

  7. Metrics for Assessment of Smart Grid Data Integrity Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annarita Giani; Miles McQueen; Russell Bent

    2012-07-01

    There is an emerging consensus that the nation’s electricity grid is vulnerable to cyber attacks. This vulnerability arises from the increasing reliance on using remote measurements, transmitting them over legacy data networks to system operators who make critical decisions based on available data. Data integrity attacks are a class of cyber attacks that involve a compromise of information that is processed by the grid operator. This information can include meter readings of injected power at remote generators, power flows on transmission lines, and relay states. These data integrity attacks have consequences only when the system operator responds to compromised data by redispatching generation under normal or contingency protocols. These consequences include (a) financial losses from sub-optimal economic dispatch to service loads, (b) robustness/resiliency losses from placing the grid at operating points that are at greater risk from contingencies, and (c) systemic losses resulting from cascading failures induced by poor operational choices. This paper is focused on understanding the connections between grid operational procedures and cyber attacks. We first offer two examples to illustrate how data integrity attacks can cause economic and physical damage by misleading operators into taking inappropriate decisions. We then focus on unobservable data integrity attacks involving power meter data. These are coordinated attacks where the compromised data are consistent with the physics of power flow, and are therefore passed by any bad data detection algorithm. We develop metrics to assess the economic impact of these attacks under re-dispatch decisions using optimal power flow methods. These metrics can be used to prioritize the adoption of appropriate countermeasures including PMU placement, encryption, hardware upgrades, and advanced attack detection algorithms.

  8. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blyth, Taylor S.; Avramova, Maria

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building-block processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD-models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in sub-channel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data-files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  9. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    NASA Astrophysics Data System (ADS)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four corresponding basic building processes, and the corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in sub-channel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  10. Dynamically reconfigurable photovoltaic system

    DOEpatents

    Okandan, Murat; Nielson, Gregory N.

    2016-05-31

    A PV system composed of sub-arrays, each having a group of PV cells that are electrically connected to each other. A power management circuit for each sub-array has a communications interface and serves to connect the sub-array to, or disconnect it from, a programmable power grid. The power grid has bus rows and bus columns. A bus management circuit is positioned at a respective junction of a bus column and a bus row and is programmable through its communication interface to connect or disconnect a power path in the grid. As a result, selected sub-arrays are connected by selected power paths either in parallel, so as to produce a low system voltage, or alternately in series, so as to produce a high system voltage that is greater than the low voltage by at least a factor of ten.
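
    A minimal numerical sketch of the series/parallel reconfiguration described in the claim; the sub-array count and operating-point values below are invented for illustration.

    n_sub = 16       # hypothetical number of sub-arrays connected through the programmable grid
    v_sub = 2.5      # volts per sub-array at its operating point (assumed)
    i_sub = 1.2      # amps per sub-array (assumed)

    v_parallel, i_parallel = v_sub, n_sub * i_sub   # all sub-arrays in parallel: low system voltage
    v_series, i_series = n_sub * v_sub, i_sub       # all sub-arrays in series: high system voltage
    print(v_series / v_parallel)                    # 16x here, i.e. at least the factor of ten in the claim
    print(v_parallel * i_parallel, v_series * i_series)  # delivered power is the same in both configurations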

  11. Dynamically reconfigurable photovoltaic system

    DOEpatents

    Okandan, Murat; Nielson, Gregory N.

    2016-12-27

    A PV system composed of sub-arrays, each having a group of PV cells that are electrically connected to each other. A power management circuit for each sub-array has a communications interface and serves to connect the sub-array to, or disconnect it from, a programmable power grid. The power grid has bus rows and bus columns. A bus management circuit is positioned at a respective junction of a bus column and a bus row and is programmable through its communication interface to connect or disconnect a power path in the grid. As a result, selected sub-arrays are connected by selected power paths either in parallel, so as to produce a low system voltage, or alternately in series, so as to produce a high system voltage that is greater than the low voltage by at least a factor of ten.

  12. Time-Domain Modeling of RF Antennas and Plasma-Surface Interactions

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Smithe, David N.

    2017-10-01

    Recent advances in finite-difference time-domain (FDTD) modeling techniques allow plasma-surface interactions such as sheath formation and sputtering to be modeled concurrently with the physics of antenna near- and far-field behavior and ICRF power flow. Although typical sheath length scales (micrometers) are much smaller than the wavelengths of fast (tens of cm) and slow (millimeter) waves excited by the antenna, sheath behavior near plasma-facing antenna components can be represented by a sub-grid kinetic sheath boundary condition, from which RF-rectified sheath potential variation over the surface is computed as a function of current flow and local plasma parameters near the wall. These local time-varying sheath potentials can then be used, in tandem with particle-in-cell (PIC) models of the edge plasma, to study sputtering effects. Particle strike energies at the wall can be computed more accurately, consistent with their passage through the known potential of the sheath, such that correspondingly increased accuracy of sputtering yields and heat/particle fluxes to antenna surfaces is obtained. The new simulation capabilities enable time-domain modeling of plasma-surface interactions and ICRF physics in realistic experimental configurations at unprecedented spatial resolution. We will present results/animations from high-performance (10k-100k core) FDTD/PIC simulations of Alcator C-Mod antenna operation.

  13. THE DEVELOPMENT OF A 1990 GLOBAL INVENTORY FOR SO(X) AND NO(X) ON A 1(DEGREE) X 1(DEGREE) LATITUDE-LONGITUDE GRID.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VAN HEYST,B.J.

    1999-10-01

    Sulfur and nitrogen oxides emitted to the atmosphere have been linked to the acidification of water bodies and soils and perturbations in the earth's radiation balance. In order to model the global transport and transformation of SO{sub x} and NO{sub x}, detailed spatial and temporal emission inventories are required. Benkovitz et al. (1996) published the development of an inventory of 1985 global emissions of SO{sub x} and NO{sub x} from anthropogenic sources. The inventory was gridded to a 1{degree} x 1{degree} latitude-longitude grid and has served as input to several global modeling studies. There is now a need to provide modelers with an update of this inventory to a more recent year, with a split of the emissions into elevated and low level sources. This paper describes the development of a 1990 update of the SO{sub x} and NO{sub x} global inventories that also includes a breakdown of sources into 17 sector groups. The inventory development starts with a gridded global default EDGAR inventory (Olivier et al., 1996). In countries where more detailed national inventories are available, these are used to replace the emissions for those countries in the global default. The gridded emissions are distributed into two height levels (0-100m and >100m) based on the final plume heights that are estimated to be typical for the various sectors considered. The sources of data as well as some of the methodologies employed to compile and develop the 1990 global inventory for SO{sub x} and NO{sub x} are discussed. The results reported should be considered to be interim since the work is still in progress and additional data sets are expected to become available.
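
    As a structural sketch (assumed workflow, not the actual inventory code): country or point-source emissions are placed onto a 1-degree grid and split into the two plume-height levels by sector.

    import numpy as np

    grid = np.zeros((180, 360, 2))                 # lat x lon x (0-100 m, >100 m)

    def add_source(lat, lon, emission_kt, frac_elevated):
        i, j = int(lat + 90), int(lon + 180)       # 1-degree cell indices
        grid[i, j, 0] += emission_kt * (1 - frac_elevated)
        grid[i, j, 1] += emission_kt * frac_elevated

    add_source(40.4, -79.9, 120.0, 0.8)            # e.g. a power-plant-dominated cell (values invented)
    add_source(48.8, 2.3, 35.0, 0.2)               # e.g. a traffic-dominated cell (values invented)
    print(grid.sum(axis=(0, 1)))                   # totals per height level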

  14. SuperB Simulation Production System

    NASA Astrophysics Data System (ADS)

    Tomassetti, L.; Bianchi, F.; Ciaschini, V.; Corvo, M.; Del Prete, D.; Di Simone, A.; Donvito, G.; Fella, A.; Franchini, P.; Giacomini, F.; Gianoli, A.; Longo, S.; Luitz, S.; Luppi, E.; Manzali, M.; Pardi, S.; Paolini, A.; Perez, A.; Rama, M.; Russo, G.; Santeramo, B.; Stroili, R.

    2012-12-01

    The SuperB asymmetric e+e- collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab^-1 and a peak luminosity of 10^36 cm^-2 s^-1. The SuperB Computing group is working on developing a simulation production framework capable of satisfying the experiment's needs. It provides access to distributed resources in order to support both the detector design definition and its performance evaluation studies. During the last year the framework has evolved in terms of job workflow, Grid service interfaces, and technology adoption. A complete code refactoring and sub-component language porting now permit the framework to sustain distributed production involving resources from two continents and several Grid flavors. In this paper we will report a complete description of the current state of the production system, its evolution and its integration with Grid services; in particular, we will focus on the utilization of new Grid component features such as those in LB and WMS version 3. Results from the last official SuperB production cycle will be reported.

  15. Smaller global and regional carbon emissions from gross land use change when considering sub-grid secondary land cohorts in a global dynamic vegetation model

    NASA Astrophysics Data System (ADS)

    Yue, Chao; Ciais, Philippe; Li, Wei

    2018-02-01

    Several modelling studies reported elevated carbon emissions from historical land use change (ELUC) by including bidirectional transitions on the sub-grid scale (termed gross land use change), dominated by shifting cultivation and other land turnover processes. However, most dynamic global vegetation models (DGVMs) that have implemented gross land use change either do not account for sub-grid secondary lands, or often have only one single secondary land tile over a model grid cell and thus cannot account for various rotation lengths in shifting cultivation and associated secondary forest age dynamics. Therefore, it remains uncertain how realistic the past ELUC estimations are and how estimated ELUC will differ between the two modelling approaches with and without multiple sub-grid secondary land cohorts - in particular secondary forest cohorts. Here we investigated historical ELUC over 1501-2005 by including sub-grid forest age dynamics in a DGVM. We run two simulations, one with no secondary forests (Sageless) and the other with sub-grid secondary forests of six age classes whose demography is driven by historical land use change (Sage). Estimated global ELUC for 1501-2005 is 176 Pg C in Sage compared to 197 Pg C in Sageless. The lower ELUC values in Sage arise mainly from shifting cultivation in the tropics under an assumed constant rotation length of 15 years, being 27 Pg C in Sage in contrast to 46 Pg C in Sageless. Estimated cumulative ELUC values from wood harvest in the Sage simulation (31 Pg C) are however slightly higher than Sageless (27 Pg C) when the model is forced by reconstructed harvested areas because secondary forests targeted in Sage for harvest priority are insufficient to meet the prescribed harvest area, leading to wood harvest being dominated by old primary forests. An alternative approach to quantify wood harvest ELUC, i.e. always harvesting the close-to-mature forests in both Sageless and Sage, yields similar values of 33 Pg C by both simulations. The lower ELUC from shifting cultivation in Sage simulations depends on the predefined forest clearing priority rules in the model and the assumed rotation length. A set of sensitivity model runs over Africa reveal that a longer rotation length over the historical period likely results in higher emissions. Our results highlight that although gross land use change as a former missing emission component is included by a growing number of DGVMs, its contribution to overall ELUC remains uncertain and tends to be overestimated when models ignore sub-grid secondary forests.
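
    A toy calculation of why accounting for secondary-forest cohorts lowers shifting-cultivation emissions: re-clearing a 15-year-old cohort releases only the carbon re-accumulated since the last clearing, whereas a model without cohorts effectively clears much older, higher-biomass forest. The regrowth curve and numbers below are invented for illustration only.

    import math

    def biomass(age_years, b_max=180.0, tau=60.0):
        """Illustrative regrowth curve for carbon stock (tC/ha)."""
        return b_max * (1.0 - math.exp(-age_years / tau))

    area_cleared = 1.0e6   # ha per year under shifting cultivation (hypothetical)
    rotation = 15          # years, as assumed in the Sage experiment
    print("with secondary cohorts   :", area_cleared * biomass(rotation) / 1e6, "MtC/yr")
    print("without secondary cohorts:", area_cleared * biomass(200.0) / 1e6, "MtC/yr")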

  16. Rapid Airplane Parametric Input Design (RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.

    1995-01-01

    RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulation. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are manifested by solving a fourth order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE equation for the lifting components and the fuselage definition equations. User specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool ADIFOR to the grid generation program. The output of ADIFOR is a new source code containing the old code plus expressions for derivatives of specified dependent variables (grid coordinates) with respect to specified independent variables (design parameters). The RAPID methodology and software provide a means of rapidly defining numerical prototypes, grids, and grid sensitivity of a class of airplane configurations. This technology and software is highly useful for CFD research for preliminary design and optimization processes.

  17. Properties of Dense Cores Embedded in Musca Derived from Extinction Maps and {sup 13}CO, C{sup 18}O, and NH{sub 3} Emission Lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machaieie, Dinelsa A.; Vilas-Boas, José W.; Wuensche, Carlos A.

    Using near-infrared data from the Two Micron All Sky Survey catalog and the Near Infrared Color Excess method, we studied the extinction distribution in five dense cores of Musca, which show visual extinction greater than 10 mag and are potential sites of star formation. We analyzed the stability in four of them, fitting their radial extinction profiles with Bonnor–Ebert isothermal spheres, and explored their properties using the J = 1–0 transition of {sup 13}CO and C{sup 18}O and the J = K = 1 transition of NH{sub 3}. One core is not well described by the model. The stability parameter of the fitted cores ranges from 4.5 to 5.7 and suggests that all cores are stable, including Mu13, which harbors one young stellar object (YSO), the IRAS 12322-7023 source. However, the analysis of the physical parameters shows that Mu13 tends to have larger A {sub V}, n {sub c}, and P {sub ext} than the remaining starless cores. The other physical parameters do not show any trend. It is possible that those are the main parameters to explore in active star-forming cores. Mu13 also shows the most intense emission of NH{sub 3}. Its {sup 13}CO and C{sup 18}O lines have double peaks, whose integrated intensity maps suggest that they are due to the superposition of clouds with different radial velocities seen in the line of sight. It is not possible to state whether these clouds are colliding and inducing star formation or are related to a physical process associated with the formation of the YSO.

  18. Combining super-ensembles and statistical emulation to improve a regional climate and vegetation model

    NASA Astrophysics Data System (ADS)

    Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.

    2017-12-01

    Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub-grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using the 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated, and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes was selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
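
    A minimal sketch of a statistical emulator of the kind described, fit to (parameter set, model output) pairs from a perturbed-physics ensemble; it uses scikit-learn's GaussianProcessRegressor, and the parameter counts, outputs, and thresholds are invented stand-ins.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(60, 3))      # 60 ensemble members, 3 perturbed land-surface parameters
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=60)  # stand-in for a regional bias metric

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3] * 3) + WhiteKernel(1e-3),
                                  normalize_y=True).fit(X, y)
    mean, std = gp.predict(rng.uniform(size=(5, 3)), return_std=True)
    print(mean, std)
    # Parameter sets whose emulated output violates observational constraints (e.g. too large a
    # bias) can then be ruled out without running the regional climate model again.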

  19. THE RELATIONSHIP BETWEEN {nu}{sub max} AND AGE t FROM ZAMS TO RGB-TIP FOR LOW-MASS STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Y. K.; Gai, N., E-mail: tyk450@163.com, E-mail: ning.gai@hotmail.com

    2013-07-10

    Stellar age is an important quantity in astrophysics and is useful in many fields, both Galactic and extragalactic. It cannot be determined by direct measurements, but can only be estimated or inferred. We attempt to find a useful indicator of stellar age that is accurate from the zero-age main sequence to the tip of the red giant branch for low-mass stars. Using the Yale Rotation and Evolution Code (YREC), a grid of stellar models has been constructed. Meanwhile, the frequency of maximum oscillation power {nu}{sub max} and the large frequency separation {Delta}{nu} are calculated using the scaling relations. For stars with masses from 0.8 M{sub Sun} to 2.8 M{sub Sun}, we can obtain {nu}{sub max} and the stellar age by combining the scaling relations with the four sets of grid models (YREC, Dotter et al., Marigo et al., and YY isochrones). We find that {nu}{sub max} is tightly correlated with and decreases monotonically with the age of the star from the main sequence to the red giant evolutionary stages. Moreover, we find that the line shapes of the curves in the Age versus {nu}{sub max} diagram, plotted using the four sets of grid models, are consistent for red giants with masses from 1.1 M{sub Sun} to 2.8 M{sub Sun}. For red giants, the differences in the correlation coefficients between Age and {nu}{sub max} for different grid models are minor and can be ignored. Interestingly, we find two peaks that correspond to the subgiants and the bump of red giants in the Age versus {nu}{sub max} diagram. By general linear least squares, we perform a polynomial fit and deduce the relationship between log(Age) and log({nu}{sub max}) in the red giant evolutionary state.
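
    The abstract does not quote the scaling relations it uses; the forms commonly adopted in the asteroseismology literature, given here for reference (with solar reference values of roughly {nu}{sub max,Sun} ≈ 3100 μHz, {Delta}{nu}{sub Sun} ≈ 135 μHz, and T{sub eff,Sun} = 5777 K), are

    \nu_{\max} \simeq \nu_{\max,\odot}\,
      \frac{M/M_{\odot}}{\left(R/R_{\odot}\right)^{2}\sqrt{T_{\rm eff}/T_{{\rm eff},\odot}}},
    \qquad
    \Delta\nu \simeq \Delta\nu_{\odot}\,
      \sqrt{\frac{M/M_{\odot}}{\left(R/R_{\odot}\right)^{3}}}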

  20. Service differentiated and adaptive CSMA/CA over IEEE 802.15.4 for Cyber-Physical Systems.

    PubMed

    Xia, Feng; Li, Jie; Hao, Ruonan; Kong, Xiangjie; Gao, Ruixia

    2013-01-01

    Cyber-Physical Systems (CPS) that collect, exchange, manage information, and coordinate actions are an integral part of the Smart Grid. In addition, Quality of Service (QoS) provisioning in CPS, especially in the wireless sensor/actuator networks, plays an essential role in Smart Grid applications. IEEE 802.15.4, which is one of the most widely used communication protocols in this area, still needs to be improved to meet multiple QoS requirements. This is because IEEE 802.15.4 slotted Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) employs static parameter configuration without supporting differentiated services and network self-adaptivity. To address this issue, this paper proposes a priority-based Service Differentiated and Adaptive CSMA/CA (SDA-CSMA/CA) algorithm to provide differentiated QoS for various Smart Grid applications as well as dynamically initialize backoff exponent according to traffic conditions. Simulation results demonstrate that the proposed SDA-CSMA/CA scheme significantly outperforms the IEEE 802.15.4 slotted CSMA/CA in terms of effective data rate, packet loss rate, and average delay.
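
    The abstract does not give the adaptation rules; purely as an illustration of the idea of priority-differentiated, traffic-adaptive backoff initialization (not the actual SDA-CSMA/CA algorithm), a sketch might look like this:

    def initial_backoff_exponent(priority, collision_rate, be_min=2, be_max=5):
        """priority: 0 = highest (e.g. alarms). A higher observed collision rate -> larger initial BE."""
        base = be_min + priority                            # lower-priority traffic starts with a larger BE
        adaptive = round(collision_rate * (be_max - be_min))
        return min(be_max, base + adaptive)

    print(initial_backoff_exponent(priority=0, collision_rate=0.1))   # latency-critical class
    print(initial_backoff_exponent(priority=2, collision_rate=0.6))   # periodic metering class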

  1. Service Differentiated and Adaptive CSMA/CA over IEEE 802.15.4 for Cyber-Physical Systems

    PubMed Central

    Gao, Ruixia

    2013-01-01

    Cyber-Physical Systems (CPS) that collect, exchange, manage information, and coordinate actions are an integral part of the Smart Grid. In addition, Quality of Service (QoS) provisioning in CPS, especially in the wireless sensor/actuator networks, plays an essential role in Smart Grid applications. IEEE 802.15.4, which is one of the most widely used communication protocols in this area, still needs to be improved to meet multiple QoS requirements. This is because IEEE 802.15.4 slotted Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) employs static parameter configuration without supporting differentiated services and network self-adaptivity. To address this issue, this paper proposes a priority-based Service Differentiated and Adaptive CSMA/CA (SDA-CSMA/CA) algorithm to provide differentiated QoS for various Smart Grid applications as well as dynamically initialize backoff exponent according to traffic conditions. Simulation results demonstrate that the proposed SDA-CSMA/CA scheme significantly outperforms the IEEE 802.15.4 slotted CSMA/CA in terms of effective data rate, packet loss rate, and average delay. PMID:24260021

  2. A multi-resolution approach to electromagnetic modeling.

    NASA Astrophysics Data System (ADS)

    Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu

    2018-04-01

    We present a multi-resolution approach for three-dimensional magnetotelluric forward modeling. Our approach is motivated by the fact that fine grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography, and bathymetry, while a much coarser grid may be adequate at depth where the diffusively propagating electromagnetic fields are much smoother. This is especially true for forward modeling required in regularized inversion, where conductivity variations at depth are generally very smooth. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modeling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of sub-grids, with each sub-grid being a standard Cartesian tensor product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modeling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modeling operators on interfaces between adjacent sub-grids. We considered three ways of handling the interface layers and suggest a preferable one, which results in accuracy similar to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
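
    A minimal data-structure sketch of the discretization described, i.e. a vertical stack of standard staggered sub-grids whose horizontal spacing coarsens with depth; the class name, sizes, and coarsening factor are illustrative assumptions.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SubGrid:
        nx: int                  # horizontal cells in x
        ny: int                  # horizontal cells in y
        dx_m: float              # horizontal spacing (m)
        dz_m: List[float]        # vertical spacings of the layers in this sub-grid (m)

    # near-surface sub-grid is finest; each deeper sub-grid doubles the horizontal step
    stack = [
        SubGrid(nx=256, ny=256, dx_m=250.0,  dz_m=[50.0] * 10),
        SubGrid(nx=128, ny=128, dx_m=500.0,  dz_m=[200.0] * 12),
        SubGrid(nx=64,  ny=64,  dx_m=1000.0, dz_m=[1000.0] * 15),
    ]
    # The forward operator is assembled per sub-grid; special interface stencils couple
    # the electromagnetic fields across the boundary between adjacent sub-grids.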

  3. Electron paramagnetic resonance spectral study of [Mn(acs){sub 2}(2–pic){sub 2}(H{sub 2}O){sub 2}] single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kocakoç, Mehpeyker, E-mail: mkocakoc@cu.edu.tr; Tapramaz, Recep, E-mail: recept@omu.edu.tr

    Acesulfame potassium salt is a synthetic, non-caloric sweetener. It is also chemically important for its capability of acting as a ligand in coordination compounds, because it can bind via the nitrogen and oxygen atoms of its carbonyl and sulfonyl groups and the ring oxygen. Some acesulfame-containing transition metal ion complexes with mixed ligands exhibit solvatochromic and thermochromic properties, and these properties make them physically important. In this work, single crystals of the Mn{sup 2+} mixed-ligand complex [Mn(acs){sub 2}(2-pic){sub 2}(H{sub 2}O){sub 2}] were studied with electron paramagnetic resonance (EPR) spectroscopy. EPR parameters were determined. Zero-field splitting parameters indicated that the complex was highly symmetric. Variable-temperature studies showed no detectable change in the spectra.

  4. An RR Lyrae period shift in terms of the Fourier parameter Phi sub 31

    NASA Technical Reports Server (NTRS)

    Clement, Christine M.; Jankulak, Michael; Simon, Norman R.

    1992-01-01

    The Fourier phase parameter Phi sub 31 has been determined for RRc stars in five globular clusters, NGC 6171, M5, M3, M53, and M15. The results indicate that the RRc stars in a given cluster show a sequence of Phi sub 31 increasing with period, and that the higher the cluster metallicity, the higher the sequence lies in a plot of Phi sub 31 with period. The Phi sub 31 values for the stars in NGC 6171 and M5 presented here are based on observations made with the University of Toronto 0.61 m telescope at Las Campanas, Chile, while those for M3, M53, and M15 are based on published data. A bootstrap procedure has been used to establish the uncertainties in the Fourier parameters. The physical significance of the relationship among Phi sub 31, period, and metallicity is not yet understood. It will need to be tested with hydrodynamic pulsation models computed with new opacities.
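
    For reference, a common way to obtain Phi sub 31 is to fit a cosine Fourier series to the folded light curve, m(t) = A0 + sum_k A_k cos(2*pi*k*t/P + phi_k), and form Phi_31 = phi_3 - 3*phi_1 (mod 2*pi). The sketch below fits synthetic data, so all numerical values are illustrative only.

    import numpy as np
    from scipy.optimize import curve_fit

    def fourier4(t, P, A0, A1, p1, A2, p2, A3, p3, A4, p4):
        w = 2 * np.pi / P
        return (A0 + A1 * np.cos(1 * w * t + p1) + A2 * np.cos(2 * w * t + p2)
                   + A3 * np.cos(3 * w * t + p3) + A4 * np.cos(4 * w * t + p4))

    # t_obs and mag_obs would normally come from the cluster photometry
    rng = np.random.default_rng(2)
    t_obs = np.linspace(0.0, 2.0, 200)
    mag_obs = fourier4(t_obs, 0.35, 15.5, 0.25, 1.0, 0.08, 2.1, 0.03, 4.0, 0.01, 0.5) \
              + 0.01 * rng.normal(size=t_obs.size)

    p0 = [0.35, 15.5, 0.2, 1.0, 0.05, 2.0, 0.02, 4.0, 0.01, 0.5]   # initial guesses
    popt, _ = curve_fit(fourier4, t_obs, mag_obs, p0=p0)
    phi1, phi3 = popt[3], popt[7]
    phi31 = (phi3 - 3 * phi1) % (2 * np.pi)
    print(phi31)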

  5. Physics design of the injector source for ITER neutral beam injector (invited).

    PubMed

    Antoni, V; Agostinetti, P; Aprile, D; Cavenago, M; Chitarin, G; Fonnesu, N; Marconato, N; Pilan, N; Sartori, E; Serianni, G; Veltri, P

    2014-02-01

    Two Neutral Beam Injectors (NBI) are foreseen to provide a substantial fraction of the heating power necessary to ignite thermonuclear fusion reactions in ITER. The development of the NBI system at unprecedented parameters (40 A of negative ion current accelerated up to 1 MV) requires the realization of a full-scale prototype, to be tested and optimized at the Test Facility under construction in Padova (Italy). The beam source is the key component of the system and the design of the multi-grid accelerator is the goal of a multi-national collaborative effort. In particular, beam steering is a challenging aspect, being a tradeoff between the requirements of the optics and real grids with finite thickness and thermo-mechanical constraints due to the cooling needs and the presence of permanent magnets. In the paper, a review of the accelerator physics and an overview of the whole R&D physics program aimed at the development of the injector source are presented.

  6. Multiresolution Iterative Reconstruction in High-Resolution Extremity Cone-Beam CT

    PubMed Central

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H; Stayman, J Webster

    2016-01-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution Penalized-Weighted Least Squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of MBIR where computationally expensive, high-fidelity forward models are applied only to a sub-region of the field-of-view. PMID:27694701
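
    For context, the penalized weighted least-squares objective referred to above is typically of the following generic form, with the multiresolution aspect entering through a forward projector A defined over the union of fine and coarse voxel grids and a penalty R that also acts across the fine/coarse boundary (this is the standard PWLS formulation, not necessarily the exact notation of the paper):

    \hat{\mu} = \arg\min_{\mu}\; \left( y - A\mu \right)^{T} W \left( y - A\mu \right) + \beta\, R(\mu)

    where y are the (selectively binned) projection measurements, W is the diagonal statistical weighting, and β controls the regularization strength.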

  7. Supernova feedback in numerical simulations of galaxy formation: separating physics from numerics

    NASA Astrophysics Data System (ADS)

    Smith, Matthew C.; Sijacki, Debora; Shen, Sijing

    2018-07-01

    While feedback from massive stars exploding as supernovae (SNe) is thought to be one of the key ingredients regulating galaxy formation, theoretically it is still unclear how the available energy couples to the interstellar medium and how galactic scale outflows are launched. We present a novel implementation of six sub-grid SN feedback schemes in the moving-mesh code AREPO, including injections of thermal and/or kinetic energy, two parametrizations of delayed cooling feedback and a `mechanical' feedback scheme that injects the correct amount of momentum depending on the relevant scale of the SN remnant resolved. All schemes make use of individually time-resolved SN events. Adopting isolated disc galaxy set-ups at different resolutions, with the highest resolution runs reasonably resolving the Sedov-Taylor phase of the SN, we aim to find a physically motivated scheme with as few tunable parameters as possible. As expected, simple injections of energy overcool at all but the highest resolution. Our delayed cooling schemes result in overstrong feedback, destroying the disc. The mechanical feedback scheme is efficient at suppressing star formation, agrees well with the Kennicutt-Schmidt relation, and leads to converged star formation rates and galaxy morphologies with increasing resolution without fine-tuning any parameters. However, we find it difficult to produce outflows with high enough mass loading factors at all but the highest resolution, indicating either that we have oversimplified the evolution of unresolved SN remnants, that other stellar feedback processes need to be included, that a better star formation prescription is required, or, most likely, that some combination of these issues is at play.
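
    A sketch of the logic behind a 'mechanical' feedback scheme of this general kind: inject the energy-conserving momentum when the Sedov-Taylor phase is resolved, and cap it at the terminal (snowplough) momentum otherwise. The terminal-momentum fit and constants below are a commonly quoted approximation from the literature, not necessarily the exact expressions used in the paper.

    import numpy as np

    MSUN_G, KMS_CMS = 1.989e33, 1.0e5

    def injected_momentum(m_coupled_msun, n_H, E51=1.0):
        """Momentum (Msun km/s) given to gas of mass m_coupled at ambient density n_H [cm^-3]."""
        # energy-conserving branch: all of E_SN shared as kinetic energy of the coupled gas
        p_energy = np.sqrt(2.0 * m_coupled_msun * MSUN_G * E51 * 1e51) / (MSUN_G * KMS_CMS)
        # approximate terminal momentum after the Sedov-Taylor phase (snowplough limit)
        p_terminal = 3.0e5 * E51 ** (16.0 / 17.0) * n_H ** (-2.0 / 17.0)
        return min(p_energy, p_terminal)

    print(injected_momentum(1.0, 1.0))     # well resolved: energy-conserving branch (~1e4 Msun km/s)
    print(injected_momentum(1.0e4, 1.0))   # poorly resolved: capped at the terminal momentum (~3e5)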

  8. Supernova feedback in numerical simulations of galaxy formation: separating physics from numerics

    NASA Astrophysics Data System (ADS)

    Smith, Matthew C.; Sijacki, Debora; Shen, Sijing

    2018-04-01

    While feedback from massive stars exploding as supernovae (SNe) is thought to be one of the key ingredients regulating galaxy formation, theoretically it is still unclear how the available energy couples to the interstellar medium and how galactic scale outflows are launched. We present a novel implementation of six sub-grid SN feedback schemes in the moving-mesh code AREPO, including injections of thermal and/or kinetic energy, two parametrizations of delayed cooling feedback and a `mechanical' feedback scheme that injects the correct amount of momentum depending on the relevant scale of the SN remnant resolved. All schemes make use of individually time-resolved SN events. Adopting isolated disk galaxy setups at different resolutions, with the highest resolution runs reasonably resolving the Sedov-Taylor phase of the SN, we aim to find a physically motivated scheme with as few tunable parameters as possible. As expected, simple injections of energy overcool at all but the highest resolution. Our delayed cooling schemes result in overstrong feedback, destroying the disk. The mechanical feedback scheme is efficient at suppressing star formation, agrees well with the Kennicutt-Schmidt relation and leads to converged star formation rates and galaxy morphologies with increasing resolution without fine tuning any parameters. However, we find it difficult to produce outflows with high enough mass loading factors at all but the highest resolution, indicating either that we have oversimplified the evolution of unresolved SN remnants, require other stellar feedback processes to be included, require a better star formation prescription or most likely some combination of these issues.

  9. Self-modulating pressure gauge

    DOEpatents

    Edwards, D. Jr.; Lanni, C.P.

    1979-08-07

    An ion gauge is disclosed having a reduced x-ray limit and means for measuring that limit. The gauge comprises an ion gauge of the Bayard-Alpert type having a short collector and having means for varying the grid-collector voltage. The x-ray limit (i.e., the collector current resulting from x-rays striking the collector) may then be determined by the formula I{sub x} = (αI{sub l} - I{sub h})/(α - 1), where I{sub x} is the x-ray limit, I{sub l} and I{sub h} are the collector currents at the lower and higher grid voltage, respectively, and α is the ratio of the collector current due to positive ions at the higher voltage to that at the lower voltage.

  10. Vertical eddy diffusivity as a control parameter in the tropical Pacific

    NASA Astrophysics Data System (ADS)

    Martinez Avellaneda, N.; Cornuelle, B.

    2011-12-01

    Ocean models suffer from errors in the treatment of turbulent sub-grid-scale motions responsible for mixing and energy dissipation. Unrealistic small-scale physics in models can have large-scale consequences, such as biases in the upper ocean temperature, a symptom of poorly-simulated upwelling, currents and air-sea interactions. This is of special importance in the tropical Pacific Ocean (TP), which is home to energetic air-sea interactions that affect global climate. It has been shown in a number of studies that the simulated ENSO variability is highly dependent on the state of the ocean (e.g., background mixing). Moreover, the magnitude of the vertical numerical diffusion is of primary importance in properly reproducing the Pacific equatorial thermocline. This work is part of a NASA-funded project to estimate the space- and time-varying ocean mixing coefficients in an eddy-permitting (1/3°) model of the TP to obtain an improved estimate of its time-varying circulation and its underlying dynamics. While an estimation procedure for the TP (26° S - 30° N) is underway using the MIT general circulation model, complementary adjoint-based sensitivity studies have been carried out for the starting ocean state from Forget (2010). This analysis aids the interpretation of the estimated mixing coefficients and possible error compensation. The focus of the sensitivity tests is the Equatorial Undercurrent and sub-thermocline jets (i.e., Tsuchiya Jets), which have been thought to have strong dependence on vertical diffusivity and should provide checks on the estimated mixing parameters. In order to build intuition for the vertical diffusivity adjoint results in the TP, adjoint and forward perturbed simulations were carried out for an idealized sharp thermocline in a rectangular domain.

  11. The purification, crystallization and preliminary diffraction of a glycerophosphodiesterase from Enterobacter aerogenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, Colin J.; Carr, Paul D.; Kim, Hye-Kyung

    2006-07-01

    The metallo-glycerophosphodiesterase from Enterobacter aerogenes (GpdQ) has been cloned, expressed in Escherichia coli and purified. Initial screening of crystallization conditions for this enzyme resulted in the identification of needles from one condition in a sodium malonate grid screen. Removal of the metals from the enzyme and subsequent optimization of these conditions led to crystals that diffracted to 2.9 Å and belonged to space group P2{sub 1}3, with unit-cell parameter a = 164.1 Å. Self-rotation function analysis and V{sub M} calculations indicated that the asymmetric unit contains two copies of the monomeric enzyme, corresponding to a solvent content of 79%. It is intended to determine the structure of this protein utilizing SAD phasing from transition metals or molecular replacement.

  12. Erratum: ``A Grid of Non-LTE Line-blanketed Model Atmospheres of O-Type Stars'' (ApJS, 146, 417 [2003])

    NASA Astrophysics Data System (ADS)

    Lanz, Thierry; Hubeny, Ivan

    2003-07-01

    We have constructed a comprehensive grid of 680 metal line-blanketed, non-LTE, plane-parallel, hydrostatic model atmospheres for the basic parameters appropriate to O-type stars. The OSTAR2002 grid considers 12 values of effective temperature, 27,500 K ≤ Teff ≤ 55,000 K with 2500 K steps, eight surface gravities, 3.0 ≤ log g ≤ 4.75 with 0.25 dex steps, and 10 chemical compositions, from metal-rich relative to the Sun to metal-free. The lower limit of log g for a given effective temperature is set by an approximate location of the Eddington limit. The selected chemical compositions have been chosen to cover a number of typical environments of massive stars: the Galactic center, the Magellanic Clouds, blue compact dwarf galaxies like I Zw 18, and galaxies at high redshifts. The paper contains a description of the OSTAR2002 grid and some illustrative examples and comparisons. The complete OSTAR2002 grid is available at our Web site.

  13. Sub-grid scale precipitation in ALCMs: re-assessing the land surface sensitivity using a single column model

    NASA Astrophysics Data System (ADS)

    Pitman, Andrew J.; Yang, Zong-Liang; Henderson-Sellers, Ann

    1993-10-01

    The sensitivity of a land surface scheme to the distribution of precipitation within a general circulation model's grid element is investigated. Earlier experiments which showed considerable sensitivity of the runoff and evaporation simulation to the distribution of precipitation are repeated in the light of other results which show no sensitivity of evaporation to the distribution of precipitation. Results show that while the earlier results over-estimated the sensitivity of the surface hydrology to the precipitation distribution, the general conclusion that the system is sensitive is supported. It is found that changing the distribution of precipitation from falling over 100% of the grid square to falling over 10% leads to a reduction in evaporation from 1578 mm y-1 to 1195 mm y-1 while runoff increases from 278 mm y-1 to 602 mm y-1. The sensitivity is explained in terms of evaporation being dominated by available energy when precipitation falls over nearly the entire grid square, but by moisture availability (mainly intercepted water) when it falls over little of the grid square. These results also indicate that earlier work using stand-alone forcing to drive land surface schemes 'off-line', and to investigate the sensitivity of land surface codes to various parameters, leads to results which are non-repeatable in single column simulations.

  14. Online production validation in a HEP environment

    NASA Astrophysics Data System (ADS)

    Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.

    2017-03-01

    In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient simulation production and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.

  15. Star Clusters within FIRE

    NASA Astrophysics Data System (ADS)

    Perez, Adrianna; Moreno, Jorge; Naiman, Jill; Ramirez-Ruiz, Enrico; Hopkins, Philip F.

    2017-01-01

    In this work, we analyze the environments surrounding star clusters in simulated merging galaxies. Our framework employs the Feedback In Realistic Environments (FIRE) model (Hopkins et al., 2014). The FIRE project is a high-resolution cosmological simulation that resolves star-forming regions and incorporates stellar feedback in a physically realistic way. The project focuses on analyzing the properties of the star clusters formed in merging galaxies. The locations of these star clusters are identified with astrodendro.py, a publicly available dendrogram algorithm. Once star cluster properties are extracted, they will be used to create a sub-grid model (below the resolution scale of FIRE) of gas confinement in these clusters. We can then examine how the star clusters interact with these available gas reservoirs (either by accreting this mass or blowing it out via feedback), which will determine many properties of the cluster (star formation history, compact object accretion, etc.). These simulations will further our understanding of star formation within stellar clusters during galaxy evolution. In the future, we aim to enhance sub-grid prescriptions for feedback specific to processes within star clusters, such as the interaction with stellar winds and gas accretion onto black holes and neutron stars.
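
    A minimal usage sketch of astrodendro for the cluster-identification step described above; the input map is synthetic and the threshold values are hypothetical placeholders, not the project's actual settings.

    import numpy as np
    from astrodendro import Dendrogram

    rng = np.random.default_rng(0)
    density = rng.lognormal(mean=2.0, sigma=1.0, size=(128, 128))   # stand-in for a projected density map
    d = Dendrogram.compute(density, min_value=20.0, min_delta=5.0, min_npix=10)
    for leaf in d.leaves:                     # leaves correspond to the most compact structures
        mask = leaf.get_mask()                # pixels belonging to this candidate cluster
        print(leaf.idx, int(mask.sum()))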

  16. OGLE-2015-BLG-0196: GROUND-BASED GRAVITATIONAL MICROLENS PARALLAX CONFIRMED BY SPACE-BASED OBSERVATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C.; Udalski, A.; Szymański, M. K.

    2017-01-01

    In this paper, we present an analysis of the binary gravitational microlensing event OGLE-2015-BLG-0196. The event lasted for almost a year, and the light curve exhibited significant deviations from the lensing model based on the rectilinear lens-source relative motion, enabling us to measure the microlens parallax. The ground-based microlens parallax is confirmed by the data obtained from space-based microlens observations using the Spitzer telescope. By additionally measuring the angular Einstein radius from the analysis of the resolved caustic crossing, the physical parameters of the lens are determined up to the twofold degeneracy between the u {sub 0} < 0 and u {sub 0} > 0 solutions caused by the well-known “ecliptic” degeneracy. It is found that the binary lens is composed of two M dwarf stars with similar masses, M {sub 1} = 0.38 ± 0.04 M {sub ⊙} (0.50 ± 0.05 M {sub ⊙}) and M {sub 2} = 0.38 ± 0.04 M {sub ⊙} (0.55 ± 0.06 M {sub ⊙}), and the distance to the lens is D {sub L} = 2.77 ± 0.23 kpc (3.30 ± 0.29 kpc). Here the physical parameters outside and inside the parentheses are for the u {sub 0} < 0 and u {sub 0} > 0 solutions, respectively.
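
    For reference, the standard relations used to convert a measured angular Einstein radius θ_E and microlens parallax π_E into the physical lens parameters quoted above are (π_S is the source parallax):

    M_{\rm L} = \frac{\theta_{\rm E}}{\kappa\,\pi_{\rm E}},
    \qquad
    D_{\rm L} = \frac{\mathrm{au}}{\pi_{\rm E}\,\theta_{\rm E} + \pi_{\rm S}},
    \qquad
    \kappa \equiv \frac{4G}{c^{2}\,\mathrm{au}} \simeq 8.14~\mathrm{mas}\,M_{\odot}^{-1}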

  17. Glaucoma diagnosis by mapping macula with Fourier domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Tan, Ou; Lu, Ake; Chopra, Vik; Varma, Rohit; Hiroshi, Ishikawa; Schuman, Joel; Huang, David

    2008-03-01

    A new image segmentation method was developed to detect macular retinal sub-layer boundaries on a newly developed Fourier-Domain Optical Coherence Tomography (FD-OCT) system with a macular grid scan pattern. The segmentation results were used to create a thickness map of the macular ganglion cell complex (GCC), which contains the ganglion cell dendrites, cell bodies and axons. The overall average and several pattern analysis parameters were defined on the GCC thickness map and compared for the diagnosis of glaucoma. Intraclass correlation (ICC) was used to compare the reproducibility of the parameters. The area under the receiver operating characteristic curve (AROC) was calculated to compare the diagnostic power. The results were also compared to the output of clinical time-domain OCT (TD-OCT). We found that GCC-based parameters had good repeatability and diagnostic power comparable to the circumpapillary nerve fiber layer (cpNFL) thickness. Parameters based on pattern analysis can increase the diagnostic power of GCC macular mapping.
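
    As a small illustration of the AROC computation for one GCC map parameter (the labels and thickness values below are invented):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    is_glaucoma = np.array([0, 0, 0, 1, 1, 1, 0, 1])                              # 1 = glaucomatous eye
    gcc_average_um = np.array([98.0, 102.0, 95.0, 80.0, 76.0, 84.0, 99.0, 88.0])  # GCC average thickness
    # thinner GCC indicates disease, so score with the negated thickness
    print(roc_auc_score(is_glaucoma, -gcc_average_um))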

  18. Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Ni, S.; Chen, W.

    2012-12-01

    Source parameter determination for earthquakes is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip, and rake of fault planes) is significant both for rupture dynamics and for ground motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists as detailed kinematic analyses are now standard. However, among these events, some behave very specially and intrigue seismologists. These earthquakes usually consist of two similar-size sub-events that occur within a very short time interval, such as the mb 4.5 event of Dec. 9, 2003 in Virginia. Studying these special events, including the source parameter determination of each sub-event, is helpful for understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed, which complicates the inversion. For common events, the Cut and Paste (CAP) method has been proven effective for resolving source parameters; it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth using a grid-search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously acquire the parameters of two distinct events. The simultaneous inversion of both sub-events makes the computation very time consuming, so we developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency. Thanks to the advantages of multi-dimensional storage and processing on the GPU, we obtain excellent performance of the revised code on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to the classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert for the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the event in Virginia, USA on 9 Dec 2003, we re-invert the source parameters, and detailed analysis of regional waveforms indicates that the Virginia earthquake included two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km with a focal mechanism of strike 65/dip 32/rake 135, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, with no need for human intervention.
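
    A schematic of the grid-search structure in a CAP-style inversion; the misfit function below is a placeholder (the real code cross-correlates observed and synthetic body- and surface-wave segments with independent time shifts and weights), so the search ranges and result are illustrative only.

    import itertools
    import numpy as np

    def misfit(strike, dip, rake, depth_km):
        # placeholder: would compute a weighted waveform misfit between data and synthetics
        seed = hash((strike, dip, rake, depth_km)) % 2**32
        return np.random.default_rng(seed).random()

    search_space = itertools.product(
        range(0, 360, 15),      # strike (deg)
        range(0, 91, 15),       # dip (deg)
        range(-180, 180, 15),   # rake (deg)
        range(2, 31, 4),        # depth (km)
    )
    best = min(search_space, key=lambda p: misfit(*p))
    print("best (strike, dip, rake, depth):", best)
    # For a compound event, the search runs over two sets of (strike, dip, rake, depth,
    # magnitude, relative origin time), which is what motivates the GPU acceleration.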

  19. Measuring accessibility of sustainable transportation using space syntax in Bojonggede area

    NASA Astrophysics Data System (ADS)

    Suryawinata, B. A.; Mariana, Y.; Wijaksono, S.

    2017-12-01

    Changes in the physical structure of regional space, as a result of the increase of planned and unplanned settlements in the Bojonggede area, have an impact on the road network pattern. Changes in road network patterns in turn affect the permeability of the area. Permeability measures the extent to which road network patterns provide options for traveling: if permeability increases, travel distance decreases and the number of available routes increases, creating an access system that is easy to use and physically integrated. This study aims to identify the relationship between the physical characteristics of residential areas, the road network pattern, and the level of spatial permeability in the Bojonggede area. The results can serve as a reference for the arrangement of circulation, accessibility, and land use in the vicinity of Bojonggede. The research uses a quantitative approach and the space syntax method, with global integration and local integration of the network as the parameters of permeability. The results show that the parts of Bojonggede with high global and local permeability are those whose physical characteristics include a grid-pattern road network.

  20. A test of star formation laws in disk galaxies. II. Dependence on dynamical properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suwannajak, Chutipong; Tan, Jonathan C.; Leroy, Adam K.

    2014-05-20

    We use the observed radial profiles of the mass surface densities of total, Σ {sub g}, and molecular, Σ{sub H2}, gas, rotation velocity, and star formation rate (SFR) surface density, Σ{sub sfr}, of the molecular-rich (Σ{sub H2} ≥ Σ{sub HI}/2) regions of 16 nearby disk galaxies to test several star formation (SF) laws: a 'Kennicutt-Schmidt (K-S)' law, Σ{sub sfr}=A{sub g}Σ{sub g,2}{sup 1.5}; a 'Constant Molecular' law, Σ{sub sfr} = A {sub H2}Σ{sub H2,2}; the turbulence-regulated laws of Krumholz and McKee (KM05) and Krumholz, McKee, and Tumlinson (KMT09); a 'Gas-Ω' law, Σ{sub sfr}=B{sub Ω}Σ{sub g}Ω; and a shear-driven 'giant molecular cloud (GMC) Collision' law, Σ{sub sfr} = B {sub CC}Σ {sub g}Ω(1-0.7β), where β ≡ d ln v {sub circ}/d ln r. If allowed one free normalization parameter for each galaxy, these laws predict the SFR with rms errors of factors of 1.4-1.8. If a single normalization parameter is used by each law for the entire galaxy sample, then rms errors range from factors of 1.5-2.1. Although the Constant Molecular law gives the smallest rms errors, the improvement over the KMT, K-S, and GMC Collision laws is not especially significant, particularly given the different observational inputs that the laws utilize and the scope of included physics, which ranges from empirical relations to detailed treatment of interstellar medium processes. We next search for systematic variation of SF law parameters with local and global galactic dynamical properties of disk shear rate (related to β), rotation speed, and presence of a bar. We demonstrate with high significance that higher shear rates enhance SF efficiency per local orbital time. Such a trend is expected if GMC collisions play an important role in SF, while an opposite trend would be expected if the development of disk gravitational instabilities is the controlling physics.
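
    A minimal sketch of the kind of test described: evaluate one of the listed laws with a single free normalization and report the rms residual in log space as a multiplicative factor. The profile values below are invented placeholders for the observed radial profiles.

    import numpy as np

    def ks_law(Sigma_g, A_g):                    # Kennicutt-Schmidt form quoted in the abstract
        return A_g * (Sigma_g / 100.0) ** 1.5    # Sigma_{g,2}: gas surface density in units of 100 Msun/pc^2

    def fit_norm_and_rms(Sigma_g, Sigma_sfr_obs):
        # best-fit normalization in log space, then rms of log10(predicted/observed)
        logA = np.mean(np.log10(Sigma_sfr_obs) - np.log10(ks_law(Sigma_g, 1.0)))
        resid = np.log10(ks_law(Sigma_g, 10 ** logA)) - np.log10(Sigma_sfr_obs)
        return 10 ** logA, 10 ** np.sqrt(np.mean(resid ** 2))   # rms expressed as a factor

    Sigma_g = np.array([20.0, 40.0, 80.0, 160.0])          # Msun/pc^2 (illustrative)
    Sigma_sfr_obs = np.array([3e-3, 8e-3, 2.5e-2, 7e-2])   # Msun/yr/kpc^2 (illustrative)
    print(fit_norm_and_rms(Sigma_g, Sigma_sfr_obs))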

  1. Verification of sub-grid filtered drag models for gas-particle fluidized beds with immersed cylinder arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2014-04-23

    The accuracy of coarse-grid multiphase CFD simulations of fluidized beds may be improved via the inclusion of filtered constitutive models. In our previous study (Sarkar et al., Chem. Eng. Sci., 104, 399-412), we developed such a set of filtered drag relationships for beds with immersed arrays of cooling tubes. Verification of these filtered drag models is addressed in this work. Predictions from coarse-grid simulations with the sub-grid filtered corrections are compared against accurate, highly-resolved simulations of full-scale turbulent and bubbling fluidized beds. The filtered drag models offer a computationally efficient yet accurate alternative for obtaining macroscopic predictions, but the spatial resolution of meso-scale clustering heterogeneities is sacrificed.

  2. Sub-grid drag models for horizontal cylinder arrays immersed in gas-particle multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2013-09-08

    Immersed cylindrical tube arrays often are used as heat exchangers in gas-particle fluidized beds. In multiphase computational fluid dynamics (CFD) simulations of large fluidized beds, explicit resolution of small cylinders is computationally infeasible. Instead, the cylinder array may be viewed as an effective porous medium in coarse-grid simulations. The cylinders' influence on the suspension as a whole, manifested as an effective drag force, and on the relative motion between gas and particles, manifested as a correction to the gas-particle drag, must be modeled via suitable sub-grid constitutive relationships. In this work, highly resolved unit-cell simulations of flow around an array of horizontal cylinders, arranged in a staggered configuration, are filtered to construct sub-grid, or 'filtered', drag models, which can be implemented in coarse-grid simulations. The force on the suspension exerted by the cylinders is comprised of, as expected, a buoyancy contribution, and a kinetic component analogous to fluid drag on a single cylinder. Furthermore, the introduction of tubes also is found to enhance segregation at the scale of the cylinder size, which, in turn, leads to a reduction in the filtered gas-particle drag.
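
    As a sketch of how such filtered closures typically enter a coarse-grid simulation, the effective gas-particle drag coefficient is the microscopic one multiplied by a filtered correction that depends on the filtered solids fraction and the dimensionless filter size; the functional form below is a toy placeholder, not the correlation developed in this work.

    def filtered_drag_coefficient(beta_micro, phi_s_filtered, filter_size_dimless):
        """Toy filtered correction: drag is reduced more strongly at intermediate solids
        fractions and larger filter sizes, mimicking unresolved clustering."""
        correction = 1.0 / (1.0 + 2.0 * filter_size_dimless * phi_s_filtered * (1.0 - phi_s_filtered))
        return beta_micro * correction

    print(filtered_drag_coefficient(beta_micro=1.0e4, phi_s_filtered=0.2, filter_size_dimless=10.0))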

  3. SEMI-BLIND EIGEN ANALYSES OF RECOMBINATION HISTORIES USING COSMIC MICROWAVE BACKGROUND DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farhang, M.; Bond, J. R.; Chluba, J.

    2012-06-20

    Cosmological parameter measurements from cosmic microwave background (CMB) experiments, such as Planck, ACTPol, SPTPol, and other high-resolution follow-ons, fundamentally rely on the accuracy of the assumed recombination model or one with well-prescribed uncertainties. Deviations from the standard recombination history might suggest new particle physics or modified atomic physics. Here we treat possible perturbative fluctuations in the free electron fraction, X{sub e}(z), by a semi-blind expansion in densely packed modes in redshift. From these we construct parameter eigenmodes, which we rank order so that the lowest modes provide the most power to probe X{sub e}(z) with CMB measurements. Since the eigenmodes are effectively weighted by the fiducial X{sub e} history, they are localized around the differential visibility peak, allowing for an excellent probe of hydrogen recombination but a weaker probe of the higher redshift helium recombination and the lower redshift highly neutral freezeout tail. We use an information-based criterion to truncate the mode hierarchy and show that with even a few modes the method goes a long way from the fiducial recombination model computed with RECFAST, X{sub e,i}(z), toward the precise underlying history given by the new and improved recombination calculations of COSMOREC or HYREC, X{sub e,f}(z), in the hydrogen recombination regime, though not well in the helium regime. Without such a correction, the derived cosmic parameters are biased. We discuss an iterative approach for updating the eigenmodes to further hone in on X{sub e,f}(z) if large deviations are indeed found. We also introduce control parameters that downweight the attention on the visibility peak structure, e.g., focusing the eigenmode probes more strongly on the X{sub e}(z) freezeout tail, as would be appropriate when looking for the X{sub e} signature of annihilating or decaying elementary particles.
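
    A minimal sketch of the mode construction: perturb X{sub e}(z) in narrow redshift bins, build the Fisher matrix of the resulting CMB-spectrum responses, and rank-order its eigenvectors as the eigenmodes. All inputs here are toy stand-ins for the actual response derivatives and noise covariance.

    import numpy as np

    rng = np.random.default_rng(3)
    n_z, n_ell = 40, 200
    dCl_dXe = rng.normal(size=(n_ell, n_z))        # response of C_ell to each X_e bin (toy)
    inv_noise = 1.0 / (1.0 + np.arange(n_ell))     # toy diagonal inverse covariance

    F = dCl_dXe.T @ (inv_noise[:, None] * dCl_dXe)  # Fisher matrix over the X_e bins
    eigval, eigvec = np.linalg.eigh(F)
    modes = eigvec[:, ::-1]                         # best-constrained modes first
    # Truncate the hierarchy where the eigenvalues (information content) fall below a chosen
    # threshold, and fit only the amplitudes of the retained modes to the data.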

  4. Efficient radiative transfer methods for continuum and line transfer in large three-dimensional models

    NASA Astrophysics Data System (ADS)

    Juvela, Mika J.

    The relationship between the physical conditions of an interstellar cloud and the observed radiation is defined by the radiative transfer problem. Radiative transfer calculations are needed if, e.g., one wants to disentangle abundance variations from excitation effects or wants to model variations of dust properties inside an interstellar cloud. New observational facilities (e.g., ALMA and Herschel) will bring improved accuracy in terms of both intensity and spatial resolution. This will enable detailed studies of the densest sub-structures of interstellar clouds and star forming regions. Such observations must be interpreted with accurate radiative transfer methods and realistic source models. In many cases this will mean modelling in three dimensions. High optical depths and the observed wide range of linear scales are, however, challenging for radiative transfer modelling. A large range of linear scales can be accessed only with hierarchical models. Figure 1 shows an example of the use of a hierarchical grid for radiative transfer calculations when the original model cloud (L = 10 pc, mean density 500 cm-3) was based on an MHD simulation carried out on a regular grid (Juvela & Padoan, 2005). For the computed line intensities, an accuracy of 10% was still reached when the number of individual cells (and the run time) was reduced by a factor of ten. This illustrates how, as long as the cloud is not extremely optically thick, most of the emission comes from a small sub-volume. It is also worth noting that while errors are ~10% for any given point, they are much smaller when compared with the intensity variations. In particular, calculations on the hierarchical grid recovered the spatial power spectrum of line emission with very good accuracy. Monte Carlo codes are used widely in both continuum and line transfer calculations. Like all lambda iteration schemes, these suffer from slow convergence when models are optically thick. In line transfer, Accelerated Monte Carlo (AMC) methods present a partial solution to this problem (Juvela & Padoan, 2000; Hogerheijde & van der Tak, 2000). AMC methods can be used similarly in continuum calculations to speed up the computation of dust temperatures (Juvela, 2005). The sampling problems associated with high optical depths can be solved with weighted sampling, and the handling of models with τV ~ 1000 is perfectly feasible. Transiently heated small dust grains pose another problem because the calculation of their temperature distribution is very time consuming. However, a 3D model will contain thousands of cells at very similar conditions. If dust temperature distributions are calculated only once for such a set, an approximate solution can be found in a much shorter time (Juvela & Padoan, 2003; see Figure 2a). MHD simulations with Adaptive Mesh Refinement (AMR) techniques present an exciting development for the modelling of interstellar clouds. Cloud models consist of a hierarchy of grids with different grid steps, and the ratio between the cloud size and the smallest resolution elements can be 10^6 or even larger. We are currently working on radiative transfer codes (line and continuum) that could be used efficiently on such grids (see Figure 2b). The radiative transfer problem can be solved relatively independently on each of the sub-grids. This means that the use of convergence acceleration methods can be limited to those sub-grids where they are needed and, on the other hand, parallelization of the code is straightforward.
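
    To make the hierarchical-grid idea concrete, the sketch below performs a single level of coarsening of a regular density cube: low-density 2x2x2 blocks, which contribute little emission, are merged into one cell while dense blocks keep full resolution. This is a minimal stand-in for the hierarchical grids discussed above, not the author's actual gridding code.

```python
import numpy as np

def coarsen_low_density(rho, threshold):
    """One-level hierarchical coarsening of a regular density cube:
    2x2x2 blocks whose maximum density falls below `threshold` are replaced
    by a single cell with their mean density; dense blocks are kept intact.
    Returns a list of (origin_index, cell_size, density) tuples."""
    n = rho.shape[0]
    cells = []
    for i in range(0, n, 2):
        for j in range(0, n, 2):
            for k in range(0, n, 2):
                block = rho[i:i+2, j:j+2, k:k+2]
                if block.max() < threshold:
                    cells.append(((i, j, k), 2, float(block.mean())))
                else:
                    for di in range(2):
                        for dj in range(2):
                            for dk in range(2):
                                cells.append(((i + di, j + dj, k + dk), 1,
                                              float(block[di, dj, dk])))
    return cells

rho = np.random.lognormal(mean=0.0, sigma=1.5, size=(32, 32, 32))
cells = coarsen_low_density(rho, threshold=np.percentile(rho, 90))
print(f"{rho.size} fine cells -> {len(cells)} hierarchical cells")
```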

  5. Conceptual Design of the Everglades Depth Estimation Network (EDEN) Grid

    USGS Publications Warehouse

    Jones, John W.; Price, Susan D.

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) offers a consistent and documented dataset that can be used to guide large-scale field operations, to integrate hydrologic and ecological responses, and to support biological and ecological assessments that measure ecosystem responses to the Comprehensive Everglades Restoration Plan (Telis, 2006). Ground elevation data for the greater Everglades and the digital ground elevation models derived from them form the foundation for all EDEN water depth and associated ecologic/hydrologic modeling (Jones, 2004; Jones and Price, 2007). To use EDEN water depth and duration information most effectively, it is important to be able to view and manipulate information on elevation data quality and other land cover and habitat characteristics across the Everglades region. These requirements led to the development of the geographic data layer described in this techniques and methods report. Extensive experience in GIS data development, distribution, and analysis informed the design of the geographic data layer used to index elevation and other surface characteristics for the Greater Everglades region. To allow for simplicity of design and use, the EDEN area was broken into a large number of equal-sized rectangles ('Cells') that in total are referred to here as the 'grid'. Some characteristics of this grid, such as the size of its cells, its origin, the area of Florida it is designed to represent, and individual grid cell identifiers, could not be changed once the grid database was developed. Therefore, these characteristics were selected to design as robust a grid as possible and to ensure the grid's long-term utility. It is desirable to include all pertinent information known about elevation and elevation data collection as grid attributes. Also, it is very important to allow for efficient grid post-processing, sub-setting, analysis, and distribution. This document details the conceptual design of the EDEN grid spatial parameters and cell attribute-table content.
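
    A minimal sketch of the kind of indexing such a fixed-origin, equal-sized-cell grid implies is shown below; the origin, cell size, and column count are placeholders, not the actual EDEN values.

```python
def cell_id(x, y, x0, y0, cell_size, n_cols):
    """Map a projected coordinate (x, y) to an immutable integer cell ID on a
    regular grid with lower-left origin (x0, y0). All numeric parameters here
    are hypothetical; the real EDEN grid defines its own origin, cell size,
    and identifier scheme."""
    col = int((x - x0) // cell_size)
    row = int((y - y0) // cell_size)
    return row * n_cols + col, row, col

# Example with a hypothetical 400 m grid
cid, row, col = cell_id(463200.0, 2951600.0, x0=460000.0, y0=2950000.0,
                        cell_size=400.0, n_cols=290)
print(cid, row, col)
```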

  6. Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stenclik, Derek; Denholm, Paul; Chalamala, Babu

    For nearly a century, global power systems have focused on three key functions: to generate, transmit, and distribute electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on timescales ranging from sub-second disturbances to multi-year trends. With the increasing role of variable generation from wind and solar, retirements of fossil fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.

  7. Maintaining Balance: The Increasing Role of Energy Storage for Renewable Integration

    DOE PAGES

    Stenclik, Derek; Denholm, Paul; Chalamala, Babu

    2017-10-17

    For nearly a century, global power systems have focused on three key functions: to generate, transmit, and distribute electricity as a real-time commodity. Physics requires that electricity generation always be in real-time balance with load, despite variability in load on timescales ranging from sub-second disturbances to multi-year trends. With the increasing role of variable generation from wind and solar, retirements of fossil fuel-based generation, and a changing consumer demand profile, grid operators are using new methods to maintain this balance.

  8. Benefits Analysis of Smart Grid Projects. White paper, 2014-2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marnay, Chris; Liu, Liping; Yu, JianCheng

    Smart grids are rolling out internationally, with the United States (U.S.) nearing completion of a significant USD4-plus-billion federal program funded under the American Recovery and Reinvestment Act (ARRA-2009). The emergence of smart grids is widespread across developed countries. Multiple approaches to analyzing the benefits of smart grids have emerged. The goals of this white paper are to review these approaches and analyze examples of each to highlight their differences, advantages, and disadvantages. This work was conducted under the auspices of a joint U.S.-China research effort, the Climate Change Working Group (CCWG) Implementation Plan, Smart Grid. We present comparative benefits assessments (BAs) of smart grid demonstrations in the U.S. and China along with a BA of a pilot project in Europe. In the U.S., we assess projects at two sites: (1) the University of California, Irvine campus (UCI), which consists of two distinct demonstrations: Southern California Edison’s (SCE) Irvine Smart Grid Demonstration Project (ISGD) and the UCI campus itself; and (2) the Navy Yard (TNY) area in Philadelphia, which has been repurposed as a mixed commercial-industrial, and possibly residential, development. In China, we cover several smart-grid aspects of the Sino-Singapore Tianjin Eco-city (TEC) and the Shenzhen Bay Technology and Ecology City (B-TEC). In Europe, we look at a BA of a pilot smart grid project in the Malagrotta area west of Rome, Italy, contributed by the Joint Research Centre (JRC) of the European Commission. The Irvine sub-project BAs use the U.S. Department of Energy (U.S. DOE) Smart Grid Computational Tool (SGCT), which is built on methods developed by the Electric Power Research Institute (EPRI). The TEC sub-project BAs apply Smart Grid Multi-Criteria Analysis (SG-MCA) developed by the State Grid Corporation of China (SGCC) based on the analytic hierarchy process (AHP) with fuzzy logic. The B-TEC and TNY sub-project BAs are evaluated using new approaches developed by those project teams. JRC has adopted an approach similar to EPRI’s but tailored to the Malagrotta distribution grid.

  9. Using large hydrological datasets to create a robust, physically based, spatially distributed model for Great Britain

    NASA Astrophysics Data System (ADS)

    Lewis, Elizabeth; Kilsby, Chris; Fowler, Hayley

    2014-05-01

    The impact of climate change on hydrological systems requires further quantification in order to inform water management. This study intends to conduct such analysis using hydrological models. Such models are of varying forms, of which conceptual, lumped parameter models and physically-based models are two important types. The majority of hydrological studies use conceptual models calibrated against measured river flow time series in order to represent catchment behaviour. This method often shows impressive results for specific problems in gauged catchments. However, the results may not be robust under non-stationary conditions such as climate change, as physical processes and relationships amenable to change are not accounted for explicitly. Moreover, conceptual models are less readily applicable to ungauged catchments, in which hydrological predictions are also required. As such, the physically based, spatially distributed model SHETRAN is used in this study to develop a robust and reliable framework for modelling historic and future behaviour of gauged and ungauged catchments across the whole of Great Britain. In order to achieve this, a large array of data completely covering Great Britain for the period 1960-2006 has been collated and efficiently stored ready for model input. The data processed include a DEM, rainfall, PE and maps of geology, soil and land cover. A desire to make the modelling system easy for others to work with led to the development of a user-friendly graphical interface. This allows non-experts to set up and run a catchment model in a few seconds, a process that can normally take weeks or months. The quality and reliability of the extensive dataset for modelling hydrological processes has also been evaluated. One aspect of this has been an assessment of error and uncertainty in rainfall input data, as well as the effects of temporal resolution in precipitation inputs on model calibration. SHETRAN has been updated to accept gridded rainfall inputs, and UKCP09 gridded daily rainfall data has been disaggregated using hourly records to analyse the implications of using realistic sub-daily variability. Furthermore, the development of a comprehensive dataset and computationally efficient means of setting up and running catchment models has allowed for examination of how a robust parameter scheme may be derived. This analysis has been based on collective parameterisation of multiple catchments in contrasting hydrological settings and subject to varied processes. 350 gauged catchments all over the UK have been simulated, and a robust set of parameters is being sought by examining the full range of hydrological processes and calibrating to a highly diverse flow data series. The modelling system will be used to generate flow time series based on historical input data and also downscaled Regional Climate Model (RCM) forecasts using the UKCP09 Weather Generator. This will allow for analysis of flow frequency and associated future changes, which cannot be determined from the instrumental record or from lumped parameter model outputs calibrated only to historical catchment behaviour. This work will be based on the existing and functional modelling system described following some further improvements to calibration, particularly regarding simulation of groundwater-dominated catchments.
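
    One step described above is the disaggregation of daily gridded rainfall to sub-daily values using hourly records. A minimal sketch of that idea, splitting a daily total according to the fractional pattern of a nearby hourly gauge, is shown below; it illustrates the concept only and is not the SHETRAN preprocessing code.

```python
import numpy as np

def disaggregate_daily(daily_total, hourly_gauge):
    """Split one day's gridded rainfall total into 24 hourly values using the
    sub-daily pattern of a nearby hourly gauge record (hypothetical
    illustration of the disaggregation idea)."""
    hourly_gauge = np.asarray(hourly_gauge, dtype=float)
    gauge_total = hourly_gauge.sum()
    if gauge_total <= 0.0:                 # gauge recorded a dry day: spread uniformly
        return np.full(24, daily_total / 24.0)
    return daily_total * hourly_gauge / gauge_total

# Example: 12 mm daily total, gauge rain concentrated in four afternoon hours
print(disaggregate_daily(12.0, [0.0] * 6 + [1.0, 3.0, 4.0, 2.0] + [0.0] * 14))
```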

  10. A Measurement of the Michel Parameters in Leptonic Decays of the Tau

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ammar, R.; Baringer, P.; Bean, A.

    1997-06-01

    We have measured the spectral shape Michel parameters {rho} and {eta} using leptonic decays of the {tau}, recorded by the CLEO II detector. Assuming e-{mu} universality in the vectorlike couplings, we find {rho}{sub e{mu}}=0.735{plus_minus}0.013{plus_minus}0.008 and {eta}{sub e{mu}}=-0.015{plus_minus}0.061{plus_minus}0.062, where the first error is statistical and the second systematic. We also present measurements for the parameters for e and {mu} final states separately. {copyright} {ital 1997} {ital The American Physical Society}

  11. Recent development of the Multi-Grid detector for large area neutron scattering instruments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guerard, Bruno

    2015-07-01

    Most of the neutron scattering facilities are committed to a continuous program of modernization of their instruments, requiring large-area, high-performance thermal neutron detectors. Besides scintillator detectors, {sup 3}He detectors, like linear PSDs (Position Sensitive Detectors) and MWPCs (Multi-Wire Proportional Chambers), are the most widely used techniques today. Time-of-flight instruments use {sup 3}He PSDs mounted side by side to cover tens of m{sup 2}. As a result of the so-called '{sup 3}He shortage crisis', the volume of {sup 3}He needed to build one of these instruments is no longer accessible. The development of alternative techniques requiring no {sup 3}He has been given high priority to secure the future of neutron scattering instrumentation. This is particularly important in the context where the future ESS (European Spallation Source) will start its operation in 2019-2020. Improved scintillators represent one of the alternative techniques. Another one is the Multi-Grid introduced at the ILL in 2009. A Multi-Grid detector is composed of several independent modules of typically 0.8 m x 3 m sensitive area, mounted side by side in air or in a vacuum TOF chamber. One module is composed of segmented boron-lined proportional counters mounted in a gas vessel; the counters, of square section, are assembled from aluminium grids that are electrically insulated and stacked together. This design provides two advantages: first, magnetron sputtering techniques can be used to coat B{sub 4}C films on planar substrates, and second, the neutron position along the anode wires can be measured by reading out the grid signals individually with fast shaping amplifiers followed by comparators. Unlike charge-division localisation in linear PSDs, the individual readout of the grids allows the Multi-Grid to operate at a low amplification gain; hence this detector is tolerant to mechanical defects and its production is accessible to laboratories equipped with standard equipment. Prototypes of different configurations and sizes have been developed and tested. A demonstrator, with a sensitive area of 0.8 m x 3 m, has been studied during the CRISP European project; it contains 1024 grids and a surface of isotopically enriched B{sub 4}C film close to 80 m{sup 2}. Its size represented a challenge in terms of fabrication and mounting of the detection elements. Another challenge was to make the gas chamber mechanically compatible with operation in a vacuum TOF chamber. Optimal working conditions for this detector were achieved by flushing Ar-CO{sub 2} at a pressure of 50 mbar and by applying 400 V on the anodes. This unusually low gas pressure greatly simplifies the mechanics of the gas vessel in vacuum. The detection efficiency has been measured with high precision for different film thicknesses; 52% was measured at 2.5 Angstrom, in good agreement with the MC simulation. A high position resolution has been achieved by centre-of-gravity measurement of the TOT (Time-Over-Threshold) signals between neighbouring grids. These results, as well as other detection parameters, including gamma sensitivity and spatial uniformity, will be presented. (author)
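
    The centre-of-gravity position estimate mentioned above amounts to a TOT-weighted mean of the positions of the grids that fired; a minimal sketch of that calculation is given below, with the grid pitch and TOT values chosen purely for illustration.

```python
import numpy as np

def cog_position(grid_tot, grid_pitch):
    """Centre-of-gravity position along the grid stack from Time-Over-Threshold
    (TOT) values of the neighbouring grids that fired.
    grid_tot : dict mapping grid index -> TOT value (arbitrary units).
    A weighted mean of grid centres is a simple stand-in for the detector's
    actual reconstruction chain."""
    idx = np.array(sorted(grid_tot))
    weights = np.array([grid_tot[i] for i in idx], dtype=float)
    return grid_pitch * np.sum(idx * weights) / np.sum(weights)

# Example: three neighbouring grids fire with TOTs 120, 300, 80 (a.u.); 22 mm pitch is hypothetical
print(cog_position({41: 120.0, 42: 300.0, 43: 80.0}, grid_pitch=22.0))
```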

  12. Asteroseismology of KIC 7107778: a binary comprising almost identical subgiants

    NASA Astrophysics Data System (ADS)

    Li, Yaguang; Bedding, Timothy R.; Li, Tanda; Bi, Shaolan; Murphy, Simon J.; Corsaro, Enrico; Chen, Li; Tian, Zhijia

    2018-05-01

    We analyse an asteroseismic binary system: KIC 7107778, a non-eclipsing, unresolved target, with solar-like oscillations in both components. We used Kepler short cadence time series spanning nearly 2 yr to obtain the power spectrum. Oscillation mode parameters were determined using Bayesian inference and a nested sampling Monte Carlo algorithm with the DIAMONDS package. The power profiles of the two components fully overlap, indicating their close similarity. We modelled the two stars with MESA and calculated oscillation frequencies with GYRE. Stellar fundamental parameters (mass, radius, and age) were estimated by grid modelling with atmospheric parameters and the oscillation frequencies of l = 0, 2 modes as constraints. Most l = 1 mixed modes were identified with models searched using a bisection method. Stellar parameters for the two sub-giant stars are MA = 1.42 ± 0.06 M⊙, MB = 1.39 ± 0.03 M⊙, RA = 2.93 ± 0.05 R⊙, RB = 2.76 ± 0.04 R⊙, tA = 3.32 ± 0.54 Gyr and tB = 3.51 ± 0.33 Gyr. The mass difference of the system is ˜1 per cent. The results confirm their simultaneous birth and evolution, as is expected from binary formation. KIC 7107778 comprises almost identical twins, and is the first asteroseismic sub-giant binary to be detected.

  13. NIMBUS 7 Earth Radiation Budget (ERB) Matrix User's Guide. Volume 2: Tape Specifications

    NASA Technical Reports Server (NTRS)

    Ray, S. N.; Vasanth, K. L.

    1984-01-01

    The ERB MATRIX tape is generated by an IBM 3081 computer program and is a 9-track, 1600 BPI tape. The gross format of the tape, given on page 1, shows an initial standard header file followed by data files. The standard header file contains two standard header records. A trailing documentation file (TDF) is the last file on the tape. Pages 9 through 17 describe, in detail, the standard header file and the TDF. The data files contain data for 37 different ERB parameters. Each file has data based on either a daily, 6-day cyclic, or monthly time interval. There are three types of physical records in the data files: the world grid physical record, the documentation mercator/polar map projection physical record, and the monthly calibration physical record. The manner in which the data for the 37 ERB parameters are stored in the physical records comprising the data files is given in the gross format section.

  14. A new Downscaling Approach for SMAP, SMOS and ASCAT by predicting sub-grid Soil Moisture Variability based on Soil Texture

    NASA Astrophysics Data System (ADS)

    Montzka, C.; Rötzer, K.; Bogena, H. R.; Vereecken, H.

    2017-12-01

    Improving the coarse spatial resolution of global soil moisture products from SMOS, SMAP and ASCAT is a topic of current interest. Soil texture heterogeneity is known to be one of the main sources of soil moisture spatial variability. A method has been developed that predicts the soil moisture standard deviation as a function of the mean soil moisture based on soil texture information. It is a closed-form expression derived from stochastic analysis of 1D unsaturated gravitational flow in an infinitely long vertical profile, based on the Mualem-van Genuchten model and first-order Taylor expansions. With the recent development of high-resolution maps of basic soil properties such as soil texture and bulk density, relevant information to estimate soil moisture variability within a satellite product grid cell is available. Here, we predict for each SMOS, SMAP and ASCAT grid cell the sub-grid soil moisture variability based on the SoilGrids1km data set. We provide a look-up table that indicates the soil moisture standard deviation for any given soil moisture mean. The resulting data set provides important information for downscaling coarse soil moisture observations of the SMOS, SMAP and ASCAT missions. Downscaling SMAP data by a field capacity proxy indicates adequate accuracy of the sub-grid soil moisture patterns.
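
    In practice, using such a look-up table amounts to interpolating the tabulated standard deviation at the observed grid-cell mean. A minimal sketch is shown below; the table values are invented for illustration and do not come from the published data set.

```python
import numpy as np

def sigma_theta_from_lut(theta_mean, lut_mean, lut_sigma):
    """Interpolate the sub-grid soil moisture standard deviation for a given
    grid-cell mean soil moisture from a per-cell look-up table (mean -> sigma),
    as described in the text."""
    return np.interp(theta_mean, lut_mean, lut_sigma)

# Hypothetical LUT for one coarse cell: variability peaks at intermediate wetness
lut_mean = np.array([0.05, 0.15, 0.25, 0.35, 0.45])    # m3/m3
lut_sigma = np.array([0.010, 0.035, 0.050, 0.040, 0.015])
print(sigma_theta_from_lut(0.22, lut_mean, lut_sigma))
```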

  15. MODELING THE NEAR-UV BAND OF GK STARS. II. NON-LTE MODELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ian Short, C.; Campbell, Eamonn A.; Pickup, Heather

    We present a grid of atmospheric models and synthetic spectral energy distributions (SEDs) for late-type dwarfs and giants of solar and 1/3 solar metallicity with many opacity sources computed in self-consistent non-local thermodynamic equilibrium (NLTE), and compare them to the LTE grid of Short and Hauschildt (Paper I). We describe, for the first time, how the NLTE treatment affects the thermal equilibrium of the atmospheric structure (T({tau}) relation) and the SED as a finely sampled function of T{sub eff}, log g, and [A/H] among solar metallicity and mildly metal-poor red giants. We compare the computed SEDs to the library of observed spectrophotometry described in Paper I across the entire visible band, and in the blue and red regions of the spectrum separately. We find that for the giants of both metallicities, the NLTE models yield best-fit T{sub eff} values that are 30-90 K lower than those provided by LTE models, while providing greater consistency between log g values, and, for Arcturus, T{sub eff} values, fitted separately to the blue and red spectral regions. There is marginal evidence that NLTE models give more consistent best-fit T{sub eff} values between the red and blue bands for earlier spectral classes among the solar metallicity GK giants than they do for the later classes, but no model fits the blue-band spectrum well for any class. For the two dwarf spectral classes that we are able to study, the effect of NLTE on derived parameters is less significant. We compare our derived T{sub eff} values to several other spectroscopic and photometric T{sub eff} calibrations for red giants, including one that is less model dependent based on the infrared flux method (IRFM). We find that the NLTE models provide slightly better agreement to the IRFM calibration among the warmer stars in our sample, while giving approximately the same level of agreement for the cooler stars.

  16. Fast Grid Frequency Support from Distributed Inverter-Based Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoke, Anderson F

    This presentation summarizes power hardware-in-the-loop testing performed to evaluate the ability of distributed inverter-coupled generation to support grid frequency on the fastest time scales. The research found that distributed PV inverters and other DERs can effectively support the grid on sub-second time scales.

  17. PARADIGM USING JOINT DETERMINISTIC GRID MODELING AND SUB-GRID VARIABILITY STOCHASTIC DESCRIPTION AS A TEMPLATE FOR MODEL EVALUATION

    EPA Science Inventory

    The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...

  18. Monolithic multigrid method for the coupled Stokes flow and deformable porous medium system

    NASA Astrophysics Data System (ADS)

    Luo, P.; Rodrigo, C.; Gaspar, F. J.; Oosterlee, C. W.

    2018-01-01

    The interaction between fluid flow and a deformable porous medium is a complicated multi-physics problem, which can be described by a coupled model based on the Stokes and poroelastic equations. A monolithic multigrid method together with either a coupled Vanka smoother or a decoupled Uzawa smoother is employed as an efficient numerical technique for the linear discrete system obtained by finite volumes on staggered grids. A special feature of our modeling approach is that, at the interface of the fluid and the poroelastic medium, two unknowns from the different subsystems are defined at the same grid point. We propose a special discretization at and near the points on the interface, which combines the approximation of the governing equations and the considered interface conditions. In the decoupled Uzawa smoother, Local Fourier Analysis (LFA) helps us to select optimal values of the relaxation parameter that appears in it. To implement the monolithic multigrid method, grid partitioning is used to deal with the interface updates when communication is required between two subdomains. Numerical experiments show that the proposed numerical method has an excellent convergence rate. The efficiency and robustness of the method are confirmed in numerical experiments with typically small realistic values of the physical coefficients.
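
    For orientation, the sketch below shows a basic relaxed Uzawa iteration for a generic saddle-point system; the relaxation parameter omega plays the role that LFA is used to optimize in the paper, and dense solves stand in for the smoothing and coarse-grid machinery of the actual monolithic multigrid method.

```python
import numpy as np

def uzawa(A, B, f, g, omega, iters=400):
    """Relaxed Uzawa iteration for the saddle-point system
        A u + B^T p = f,   B u = g.
    This is a schematic illustration, not the paper's smoother implementation."""
    u = np.zeros(A.shape[0])
    p = np.zeros(B.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)   # "velocity" update
        p = p + omega * (B @ u - g)           # relaxed "pressure" correction
    return u, p

# Tiny synthetic test problem (A symmetric positive definite)
rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
A = M @ M.T + 6.0 * np.eye(6)
B = rng.normal(size=(2, 6))
f, g = rng.normal(size=6), rng.normal(size=2)
u, p = uzawa(A, B, f, g, omega=0.3)
print(np.linalg.norm(A @ u + B.T @ p - f), np.linalg.norm(B @ u - g))
```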

  19. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing can also be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the required computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produces numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated with a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
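
    The key trick, spreading all particles at a lattice site in one operation rather than one particle at a time, can be illustrated with a one-dimensional diffusion step; the sketch below uses a single binomial draw per site and omits drift and the composition-space part of the algorithm.

```python
import numpy as np

def grw_step(n, r, rng):
    """One global random walk step in 1D: the n[i] particles at site i are
    redistributed in a single operation -- a fraction r jumps, split left/right
    by one binomial draw per site (unbiased diffusion, reflecting boundaries).
    Schematic only; the published algorithm also handles drift and mixing."""
    new = np.zeros_like(n)
    movers = rng.binomial(n, r)            # particles that jump this step
    to_right = rng.binomial(movers, 0.5)   # on average half of the movers go right
    to_left = movers - to_right
    new += n - movers                      # stayers
    new[1:] += to_right[:-1]
    new[:-1] += to_left[1:]
    new[0] += to_left[0]                   # reflect at the left boundary
    new[-1] += to_right[-1]                # reflect at the right boundary
    return new

rng = np.random.default_rng(0)
n = np.zeros(101, dtype=np.int64)
n[50] = 10**7                              # ten million particles, handled per site, not per particle
for _ in range(200):
    n = grw_step(n, r=0.5, rng=rng)
print(n.argmax(), n.sum())                 # peak stays centred, particle number conserved
```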

  20. Experimental and LES investigation of premixed methane/air flame propagating in a tube with a thin obstacle

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Guo, Shilong; Li, Yanchao; Zhang, Yutao

    2017-03-01

    In this paper, an experimental and numerical investigation of premixed methane/air flame dynamics in a closed combustion vessel with a thin obstacle is described. In the experiment, high-speed video photography and a pressure transducer are used to study the flame shape changes and pressure dynamics. In the numerical simulation, four sub-grid scale viscosity models and three sub-grid scale combustion models are evaluated by comparing their individual predictions with the experimental data. High-speed photographs show that the flame propagation process can be divided into five stages: spherical flame, finger-shaped flame, jet flame, mushroom-shaped flame and bidirectional propagation flame. Among the sub-grid scale viscosity models and sub-grid scale combustion models tested, the dynamic Smagorinsky-Lilly model and the power-law flame wrinkling model, respectively, are best able to predict the flame behaviour. Thus, coupling the dynamic Smagorinsky-Lilly model and the power-law flame wrinkling model, the numerical results demonstrate that flame shape change is a purely hydrodynamic phenomenon, and the mushroom-shaped flame and bidirectional propagation flame are the result of flame-vortex interaction. In addition, the transition from "corrugated flamelets" to "thin reaction zones" is observed in the simulation.

  1. Formulating viscous hydrodynamics for large velocity gradients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Scott

    2008-02-15

    Viscous corrections to relativistic hydrodynamics, which are usually formulated for small velocity gradients, have recently been extended from Navier-Stokes formulations to a class of treatments based on Israel-Stewart equations. Israel-Stewart treatments, which treat the spatial components of the stress-energy tensor {tau}{sub ij} as dynamical objects, introduce new parameters, such as the relaxation times describing nonequilibrium behavior of the elements {tau}{sub ij}. By considering linear response theory and entropy constraints, we show how the additional parameters are related to fluctuations of {tau}{sub ij}. Furthermore, the Israel-Stewart parameters are analyzed for their ability to provide stable and physical solutions for sound waves. Finally, it is shown how these parameters, which are naturally described by correlation functions in real time, might be constrained by lattice calculations, which are based on path-integral formulations in imaginary time.
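
    For orientation, a commonly quoted schematic form of the Israel-Stewart relaxation equation is written below in the conventional notation (pi^{ij} denoting the viscous part of the spatial stress that the record writes as {tau}{sub ij}); the full equations analyzed in the paper contain additional gradient and coupling terms.

```latex
% Schematic Israel-Stewart relaxation equation: the viscous stress relaxes
% toward its Navier-Stokes value on a time scale \tau_\pi.
\begin{align}
  \pi^{ij} &\equiv T^{ij} - P\,\delta^{ij}, \\
  \tau_\pi \,\frac{d\pi^{ij}}{dt} &= -\bigl(\pi^{ij} - \pi^{ij}_{\mathrm{NS}}\bigr),
  \qquad
  \pi^{ij}_{\mathrm{NS}} = -\eta\left(\partial^i u^j + \partial^j u^i
        - \tfrac{2}{3}\,\delta^{ij}\,\partial_k u^k\right).
\end{align}
```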

  2. The Impact of Varying the Physics Grid Resolution Relative to the Dynamical Core Resolution in CAM-SE-CSLAM

    NASA Astrophysics Data System (ADS)

    Herrington, A. R.; Lauritzen, P. H.; Reed, K. A.

    2017-12-01

    The spectral element dynamical core of the Community Atmosphere Model (CAM) has recently been coupled to an approximately isotropic, finite-volume grid through the implementation of the conservative semi-Lagrangian multi-tracer transport scheme (CAM-SE-CSLAM; Lauritzen et al. 2017). In this framework, the semi-Lagrangian transport of tracers is computed on the finite-volume grid, while the adiabatic dynamics are solved using the spectral element grid. The physical parameterizations are evaluated on the finite-volume grid, as opposed to the unevenly spaced Gauss-Lobatto-Legendre nodes of the spectral element grid. Computing the physics on the finite-volume grid reduces numerical artifacts such as grid imprinting, possibly because the forcing terms are no longer computed at element boundaries where the resolved dynamics are least smooth. The separation of the physics grid and the dynamics grid allows for a unique opportunity to understand the resolution sensitivity in CAM-SE-CSLAM. The observed large sensitivity of CAM to horizontal resolution is a poorly understood impediment to improved simulations of regional climate using global, variable resolution grids. Here, a series of idealized moist simulations are presented in which the finite-volume grid resolution is varied relative to the spectral element grid resolution in CAM-SE-CSLAM. The simulations are carried out at multiple spectral element grid resolutions, in part to provide a companion set of simulations, in which the spectral element grid resolution is varied relative to the finite-volume grid resolution, but more generally to understand if the sensitivity to the finite-volume grid resolution is consistent across a wider spectrum of resolved scales. Results are interpreted in the context of prior ideas regarding resolution sensitivity of global atmospheric models.

  3. Application of Physics Based Distributed Hydrologic Models to Assess Anthropologic Land Disturbance in Watersheds

    NASA Astrophysics Data System (ADS)

    Downer, C. W.; Ogden, F. L.; Byrd, A. R.

    2008-12-01

    The Department of Defense (DoD) manages approximately 200,000 km2 of land within the United States on military installations and flood control and river improvement projects. The Watershed Systems Group (WSG) within the Coastal and Hydraulics Laboratory of the Engineer Research and Development Center (ERDC) supports the US Army and the US Army Corps of Engineers in both military and civil operations through the development, modification and application of surface and sub-surface hydrologic models. The US Army has a long history of land management and the development of analytical tools to assist with the management of US Army lands. The US Army has invested heavily in the distributed hydrologic model GSSHA and its predecessor CASC2D. These tools have been applied at numerous military and civil sites to analyze the effects of landscape alteration on hydrologic response and related consequences, changes in erosion and sediment transport, along with associated contaminants. Examples include: impacts of military training and land management activities, impact of changing land use (urbanization or environmental restoration), as well as impacts of management practices employed to abate problems, i.e. Best Management Practices (BMPs). Traditional models, such as HSPF and SWAT, are largely conceptual in nature. GSSHA attempts to simulate the physical processes actually occurring in the watershed, allowing the user to explicitly simulate changing parameter values in response to changes in land use, land cover, elevation, etc. Issues of scale raise questions: How do we best include fine-scale land use or management features in models of large watersheds? Do these features have to be represented explicitly through physical processes in the watershed domain? Can a point model, physical or empirical, suffice? Can these features be lumped into coarsely resolved numerical grids or sub-watersheds? In this presentation we will discuss the US Army's distributed hydrologic models in terms of how they simulate the relevant processes and present multiple applications of the models used for analyzing land management and land use change. Using these applications as a basis we will discuss issues related to the analysis of anthropogenic alterations in the landscape.

  4. ASTROPHYSICAL PARAMETERS OF LS 2883 AND IMPLICATIONS FOR THE PSR B1259-63 GAMMA-RAY BINARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negueruela, Ignacio; Lorenzo, Javier; Ribo, Marc

    2011-05-01

    Only a few binary systems with compact objects display TeV emission. The physical properties of the companion stars represent basic input for understanding the physical mechanisms behind the particle acceleration, emission, and absorption processes in these so-called gamma-ray binaries. Here we present high-resolution and high signal-to-noise optical spectra of LS 2883, the Be star forming a gamma-ray binary with the young non-accreting pulsar PSR B1259-63, showing it to rotate faster and be significantly earlier and more luminous than previously thought. Analysis of the interstellar lines suggests that the system is located at the same distance as (and thus is likely a member of) Cen OB1. Taking the distance to the association, d = 2.3 kpc, and a color excess of E(B - V) = 0.85 for LS 2883 results in M{sub V} {approx} -4.4. Because of fast rotation, LS 2883 is oblate (R{sub eq} {approx_equal} 9.7 R{sub sun} and R{sub pole} {approx_equal} 8.1 R{sub sun}) and presents a temperature gradient (T{sub eq}{approx} 27,500 K, log g{sub eq} = 3.7; T{sub pole}{approx} 34,000 K, log g{sub pole} = 4.1). If the star did not rotate, it would have parameters corresponding to a late O-type star. We estimate its luminosity at log(L{sub *}/L{sub sun}) {approx_equal} 4.79 and its mass at M{sub *} {approx} 30 M{sub sun}. The mass function then implies an inclination of the binary system i{sub orb} {approx} 23{sup 0}, slightly smaller than previous estimates. We discuss the implications of these new astrophysical parameters of LS 2883 for the production of high-energy and very high-energy gamma rays in the PSR B1259-63/LS 2883 gamma-ray binary system. In particular, the stellar properties are very important for prediction of the line-like bulk Comptonization component from the unshocked ultrarelativistic pulsar wind.

  5. Tidally averaged circulation in Puget Sound sub-basins: Comparison of historical data, analytical model, and numerical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khangaonkar, Tarang; Yang, Zhaoqing; Kim, Tae Yun

    2011-07-20

    Through extensive field data collection and analysis efforts conducted since the 1950s, researchers have established an understanding of the characteristic features of circulation in Puget Sound. The pattern ranges from the classic fjordal behavior in some basins, with shallow brackish outflow and compensating inflow immediately below, to the typical two-layer flow observed in many partially mixed estuaries with saline inflow at depth. An attempt at reproducing this behavior by fitting an analytical formulation to past data is presented, followed by the application of a three-dimensional circulation and transport numerical model. The analytical treatment helped identify key physical processes and parameters, but quickly reconfirmed that response is complex and would require site-specific parameterization to include effects of sills and interconnected basins. The numerical model of Puget Sound, developed using unstructured-grid finite volume method, allowed resolution of the sub-basin geometric features, including presence of major islands, and site-specific strong advective vertical mixing created by bathymetry and multiple sills. The model was calibrated using available recent short-term oceanographic time series data sets from different parts of the Puget Sound basin. The results are compared against (1) recent velocity and salinity data collected in Puget Sound from 2006 and (2) a composite data set from previously analyzed historical records, mostly from the 1970s. The results highlight the ability of the model to reproduce velocity and salinity profile characteristics, their variations among Puget Sound subbasins, and tidally averaged circulation. Sensitivity of residual circulation to variations in freshwater inflow and resulting salinity gradient in fjordal sub-basins of Puget Sound is examined.

  6. An application of statistical mechanics for representing equilibrium perimeter distributions of tropical convective clouds

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.; Alva, S.; Glenn, I. B.; Krueger, S. K.

    2015-12-01

    There are two possible approaches for parameterizing sub-grid cloud dynamics in a coarser grid model. The most common is to use a fine scale model to explicitly resolve the mechanistic details of clouds to the best extent possible, and then to parameterize the resulting cloud state for the coarser grid. A second is to invoke physical intuition and some very general theoretical principles from equilibrium statistical mechanics. This approach avoids any requirement to resolve time-dependent processes in order to arrive at a suitable solution. The second approach is widely used elsewhere in the atmospheric sciences: for example the Planck function for blackbody radiation is derived this way, where no mention is made of the complexities of modeling a large ensemble of time-dependent radiation-dipole interactions in order to obtain the "grid-scale" spectrum of thermal emission by the blackbody as a whole. We find that this statistical approach may be equally suitable for modeling convective clouds. Specifically, we make the physical argument that the dissipation of buoyant energy in convective clouds is done through mixing across a cloud perimeter. From thermodynamic reasoning, one might then anticipate that vertically stacked isentropic surfaces are characterized by a power law dlnN/dlnP = -1, where N(P) is the number of clouds of perimeter P. In a Giga-LES simulation of convective clouds within a 100 km square domain we find that such a power law does appear to characterize simulated cloud perimeters along isentropes, provided the cloud sample is sufficiently large. The suggestion is that it may be possible to parameterize certain important aspects of cloud state without appealing to computationally expensive dynamic simulations.
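
    The quoted slope can be restated directly as a scale-free perimeter distribution, as in the short derivation below.

```latex
% The constant logarithmic slope is equivalent to an inverse power law in perimeter:
\begin{equation}
  \frac{d\ln N}{d\ln P} = -1
  \quad\Longleftrightarrow\quad
  N(P) \propto \frac{1}{P},
\end{equation}
% i.e., N(P)\,P \approx \text{const}: along an isentropic surface there are roughly
% equal numbers of clouds per logarithmic interval of perimeter.
```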

  7. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  8. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

    In this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations was conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and RTD mean and spread. For the simulation cases, it was found that, for accurate RTD prediction, the Johnson and Jackson partial-slip solids boundary condition was required for all models, and that a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
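
    For reference, the two compared quantities, the RTD mean and spread, are just the first and second central moments of the normalized exit-age distribution E(t); a minimal sketch of that calculation on a sampled curve is given below (the example curve is invented).

```python
import numpy as np

def rtd_moments(t, e):
    """Mean residence time and spread (variance) of a residence time
    distribution E(t) sampled at times t. These are the quantities compared
    against experiment in the text; illustrative helper, not the authors' code."""
    t, e = np.asarray(t, dtype=float), np.asarray(e, dtype=float)
    e = e / np.trapz(e, t)                      # normalize so that int E(t) dt = 1
    mean = np.trapz(t * e, t)
    var = np.trapz((t - mean) ** 2 * e, t)
    return mean, var

# Example with a synthetic tracer response curve
t = np.linspace(0.0, 30.0, 301)
e = np.exp(-((t - 3.0) ** 2) / 4.0) + 0.3 * np.exp(-t / 8.0)
print(rtd_moments(t, e))
```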

  9. Small-Grid Dithers for the JWST Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lajoie, Charles-Philippe; Soummer, Remi; Pueyo, Laurent; Hines, Dean C.; Nelan, Edmund P.; Perrin, Marshall; Clampin, Mark; Isaacs, John C.

    2016-01-01

    We discuss new results of coronagraphic simulations demonstrating a novel mode for JWST that utilizes sub-pixel dithered reference images, called Small-Grid Dithers, to optimize coronagraphic PSF subtraction. These sub-pixel dithers are executed with the Fine Steering Mirror under fine guidance, are accurate to approximately 2-3 milliarcseconds (1 sigma per axis), and provide ample speckle diversity to reconstruct an optimized synthetic reference PSF using LOCI or KLIP. We also discuss the performance gains of Small-Grid Dithers compared to the standard undithered scenario, and show potential contrast gain factors for the NIRCam and MIRI coronagraphs ranging from 2 to more than 10, respectively.

  10. Rapid Airplane Parametric Input Design(RAPID)

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Bloor, Malcolm I. G.; Wilson, Michael J.; Thomas, Almuttil M.

    2004-01-01

    An efficient methodology is presented for defining a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. A small set of design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. The wing, tail, and canard components are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Grid sensitivity is obtained by applying the automatic differentiation precompiler ADIFOR to software for the grid generation. The computed surface grids, volume grids, and sensitivity derivatives are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations.

  11. Impact of cloud horizontal inhomogeneity and directional sampling on the retrieval of cloud droplet size by the POLDER instrument

    NASA Astrophysics Data System (ADS)

    Shang, H.; Chen, L.; Bréon, F. M.; Letu, H.; Li, S.; Wang, Z.; Su, L.

    2015-11-01

    The principles of cloud droplet size retrieval via Polarization and Directionality of the Earth's Reflectance (POLDER) require that clouds be horizontally homogeneous. The retrieval is performed by combining all measurements from an area of 150 km × 150 km to compensate for POLDER's insufficient directional sampling. Using POLDER-like data simulated with the RT3 model, we investigate the impact of cloud horizontal inhomogeneity and directional sampling on the retrieval and analyze which spatial resolution is potentially accessible from the measurements. Case studies show that the sub-grid-scale variability in droplet effective radius (CDR) can significantly reduce the number of valid retrievals and introduce small biases to the CDR (~ 1.5 μm) and effective variance (EV) estimates. Nevertheless, the sub-grid-scale variations in EV and cloud optical thickness (COT) only influence the EV retrievals and not the CDR estimate. In the directional sampling cases studied, the retrieval using limited observations is accurate and is largely free of random noise. Several improvements have been made to the original POLDER droplet size retrieval. For example, measurements in the primary rainbow region (137-145°) are used to ensure retrievals of large droplets (> 15 μm) and to reduce the uncertainties caused by cloud heterogeneity. We apply the improved method using the POLDER global L1B data from June 2008, and the new CDR results are compared with the operational CDRs. The comparison shows that the operational CDRs tend to be underestimated for large droplets because the cloudbow oscillations in the scattering angle region of 145-165° are weak for cloud fields with CDR > 15 μm. Finally, a sub-grid-scale retrieval case demonstrates that a higher resolution, e.g., 42 km × 42 km, can be used when inverting cloud droplet size distribution parameters from POLDER measurements.

  12. SED Modeling of 20 Massive Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Tanti, Kamal Kumar

    In this paper, we present the spectral energy distribution (SED) modeling of twenty massive young stellar objects (MYSOs) and subsequently estimate different physical and structural/geometrical parameters for each of the twenty central YSO outflow candidates, along with their associated circumstellar disks and infalling envelopes. The SEDs for each of the MYSOs have been reconstructed using 2MASS, MSX, IRAS, IRAC & MIPS, SCUBA, WISE, SPIRE and IRAM data, with the help of an SED fitting tool that uses a grid of 2D radiative transfer models. Using the detailed analysis of the SEDs and the subsequent estimation of physical and geometrical parameters for the central YSO sources, along with their circumstellar disks and envelopes, the cumulative distributions of the stellar, disk and envelope parameters can be analyzed. This leads to a better understanding of massive star formation processes in the respective star forming regions in different molecular clouds.

  13. Grid-Enabled High Energy Physics Research using a Beowulf Cluster

    NASA Astrophysics Data System (ADS)

    Mahmood, Akhtar

    2005-04-01

    At Edinboro University of Pennsylvania, we have built an 8-node, 25 Gflops Beowulf cluster with 2.5 TB of disk storage space to carry out grid-enabled, data-intensive high energy physics research for the ATLAS experiment via Grid3. We will describe how we built and configured our cluster, which we have named the Sphinx Beowulf Cluster. We will describe the results of our cluster benchmark studies and the run-time plots of several parallel application codes. Once fully functional, the cluster will be part of Grid3 [www.ivdgl.org/grid3]. The current ATLAS simulation grid application models the entire physical process, from the proton anti-proton collisions and the detector's response to the collision debris through the complete reconstruction of the event from analyses of these responses. The end result is a detailed set of data that simulates the real physical collision event inside a particle detector. The Grid is the new IT infrastructure for 21st century science -- a new computing paradigm that is poised to transform the practice of large-scale data-intensive research in science and engineering. The Grid will allow scientists worldwide to view and analyze huge amounts of data flowing from the large-scale experiments in high energy physics. The Grid is expected to bring together geographically and organizationally dispersed computational resources, such as CPUs, storage systems, communication systems, and data sources.

  14. Continental-scale river flow in climate models

    NASA Technical Reports Server (NTRS)

    Miller, James R.; Russell, Gary L.; Caliri, Guilherme

    1994-01-01

    The hydrologic cycle is a major part of the global climate system. There is an atmospheric flux of water from the ocean surface to the continents. The cycle is closed by return flow in rivers. In this paper a river routing model is developed to use with grid box climate models for the whole earth. The routing model needs an algorithm for the river mass flow and a river direction file, which has been compiled for 4 deg x 5 deg and 2 deg x 2.5 deg resolutions. River basins are defined by the direction files. The river flow leaving each grid box depends on river and lake mass, downstream distance, and an effective flow speed that depends on topography. As input the routing model uses monthly land source runoff from a 5-yr simulation of the NASA/GISS atmospheric climate model (Hansen et al.). The land source runoff from the 4 deg x 5 deg resolution model is quartered onto a 2 deg x 2.5 deg grid, and the effect of grid resolution is examined. Monthly flow at the mouth of the world's major rivers is compared with observations, and a global error function for river flow is used to evaluate the routing model and its sensitivity to physical parameters. Three basinwide parameters are introduced: the river length weighted by source runoff, the turnover rate, and the basinwide speed. Although the values of these parameters depend on the resolution at which the rivers are defined, the values should converge as the grid resolution becomes finer. When the routing scheme described here is coupled with a climate model's source runoff, it provides the basis for closing the hydrologic cycle in coupled atmosphere-ocean models by realistically allowing water to return to the ocean at the correct location and with the proper magnitude and timing.
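
    The routing rule described above (outflow governed by stored river and lake mass, downstream distance, and an effective speed) can be sketched for a single grid box as below; the formulation and the example numbers are schematic and may differ from the GISS model's exact equations and units.

```python
def river_outflow(river_mass, downstream_distance, effective_speed, dt):
    """Water mass leaving one grid box toward its downstream neighbour in one
    time step: a fraction of the stored river (plus lake) mass set by the
    effective flow speed and the distance to the downstream box.
    Schematic illustration of the routing rule, not the model's exact formula."""
    fraction = min(1.0, effective_speed * dt / downstream_distance)
    return fraction * river_mass

# Example: 1e9 kg of stored river water, 250 km to the downstream box,
# 0.35 m/s effective speed, one-day time step (all values hypothetical)
print(river_outflow(1.0e9, 250.0e3, 0.35, 86400.0))
```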

  15. Parameter Uncertainty on AGCM-simulated Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    He, F.

    2015-12-01

    This work studies parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, illustrated here with the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent the convection, turbulence, precipitation and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance for TC simulations and identifies the key parameters for six different TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP) and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin-Hypercube Sampling (LHS) method. The comparison between the OAT ensemble run and the LHS ensemble run shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effect. Last, we find that the intensity uncertainty caused by physical parameters is of a degree comparable to the uncertainty caused by model structure (e.g. grid) and initial conditions (e.g. sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
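
    For readers unfamiliar with the sampling step, a minimal Latin-Hypercube Sampling sketch is given below: each parameter range is split into equal strata, one point is drawn per stratum, and the strata are independently permuted across parameters. The bounds are placeholders, not the actual CAM parameter ranges.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Latin-Hypercube sample of a parameter space.
    bounds : sequence of (low, high) pairs, one per parameter (placeholders here).
    Returns an array of shape (n_samples, n_params)."""
    rng = rng or np.random.default_rng()
    bounds = np.asarray(bounds, dtype=float)          # shape (n_params, 2)
    n_params = bounds.shape[0]
    # One independent permutation of the strata per parameter, plus jitter within each stratum
    strata = rng.permuted(np.tile(np.arange(n_samples), (n_params, 1)), axis=1).T
    u = (strata + rng.random((n_samples, n_params))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# e.g., 8 perturbed parameters (normalized ranges), 50 ensemble members
samples = latin_hypercube(50, [(0.0, 1.0)] * 8)
print(samples.shape)
```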

  16. A parallel overset-curvilinear-immersed boundary framework for simulating complex 3D incompressible flows

    PubMed Central

    Borazjani, Iman; Ge, Liang; Le, Trung; Sotiropoulos, Fotis

    2013-01-01

    We develop an overset-curvilinear immersed boundary (overset-CURVIB) method in a general non-inertial frame of reference to simulate a wide range of challenging biological flow problems. The method incorporates overset-curvilinear grids to efficiently handle multi-connected geometries and increase the resolution locally near immersed boundaries. Complex bodies undergoing arbitrarily large deformations may be embedded within the overset-curvilinear background grid and treated as sharp interfaces using the curvilinear immersed boundary (CURVIB) method (Ge and Sotiropoulos, Journal of Computational Physics, 2007). The incompressible flow equations are formulated in a general non-inertial frame of reference to enhance the overall versatility and efficiency of the numerical approach. Efficient search algorithms to identify areas requiring blanking, donor cells, and interpolation coefficients for constructing the boundary conditions at grid interfaces of the overset grid are developed and implemented using efficient parallel computing communication strategies to transfer information among sub-domains. The governing equations are discretized using a second-order accurate finite-volume approach and integrated in time via an efficient fractional-step method. Various strategies for ensuring globally conservative interpolation at grid interfaces suitable for incompressible flow fractional step methods are implemented and evaluated. The method is verified and validated against experimental data, and its capabilities are demonstrated by simulating the flow past multiple aquatic swimmers and the systolic flow in an anatomic left ventricle with a mechanical heart valve implanted in the aortic position. PMID:23833331

  17. The VIMOS Ultra Deep Survey: Nature, ISM properties, and ionizing spectra of CIII]λ1909 emitters at z = 2-4

    NASA Astrophysics Data System (ADS)

    Nakajima, K.; Schaerer, D.; Le Fèvre, O.; Amorín, R.; Talia, M.; Lemaux, B. C.; Tasca, L. A. M.; Vanzella, E.; Zamorani, G.; Bardelli, S.; Grazian, A.; Guaita, L.; Hathi, N. P.; Pentericci, L.; Zucca, E.

    2018-05-01

    Context. Ultraviolet (UV) emission-line spectra are used to spectroscopically confirm high-z galaxies and increasingly also to determine their physical properties. Aims: We construct photoionization models to interpret the observed UV spectra of distant galaxies in terms of the dominant radiation field and the physical condition of the interstellar medium (ISM). These models are applied to new spectroscopic observations from the VIMOS Ultra Deep Survey (VUDS). Methods: We construct a large grid of photoionization models, which use several incident radiation fields (stellar populations, active galactic nuclei (AGNs), mix of stars and AGNs, blackbodies, and others), and cover a wide range of metallicities and ionization parameters. From these models we derive new spectral UV line diagnostics using equivalent widths (EWs) of the [CIII]λ1909 doublet and the CIVλ1549 doublet, and the line ratios of [CIII], CIV, and the He IIλ1640 recombination line. We apply these diagnostics to a sample of 450 [CIII]-emitting galaxies at redshifts z = 2-4 previously identified in VUDS. Results: We demonstrate that our photoionization models successfully reproduce observations of nearby and high-redshift sources with known radiation field and/or metallicity. For star-forming galaxies our models predict that the [CIII] EW peaks at sub-solar metallicities, whereas the CIV EW peaks at even lower metallicity. Using the UV diagnostics, we characterize the average star-forming galaxy (EW([CIII]) ≈ 2 Å) based on the composite spectrum of the 450 UV-selected galaxies. The inferred metallicity and ionization parameter are typically Z = 0.3-0.5 Z⊙ and logU = -2.7 to -3, in agreement with earlier works at similar redshifts. The models also indicate an average age of 50-200 Myr since the beginning of the current star formation, and an ionizing photon production rate, ξion, of log(ξion/erg⁻¹ Hz) = 25.3-25.4. Among the sources with EW([CIII]) >= 10 Å, approximately 30% are likely dominated by AGNs. The metallicity derived for galaxies with EW([CIII]) = 10-20 Å is low, Z = 0.02-0.2 Z⊙, and the ionization parameter higher (logU ≈ -1.7) than for the average star-forming galaxy. To explain the average UV observations of the strongest but rarest [CIII] emitters (EW([CIII]) > 20 Å), we find that stellar photoionization is clearly insufficient. A radiation field consisting of a mix of a young stellar population (log(ξion/erg⁻¹ Hz) ≈ 25.7) plus an AGN component is required. Furthermore, an enhanced C/O abundance ratio (up to the solar value) is needed for metallicities Z = 0.1-0.2 Z⊙ and logU = -1.7 to -1.5. Conclusions: A large grid of photoionization models has allowed us to propose new diagnostic diagrams to classify the nature of the ionizing radiation field (star formation or AGN) of distant galaxies using UV emission lines, and to constrain their ISM properties. We have applied this grid to a sample of [CIII]-emitting galaxies at z = 2-4 detected in VUDS, finding a range of physical properties and clear evidence for a significant AGN contribution in rare sources with very strong [CIII] emission. The UV diagnostics we propose should also serve as an important basis for the interpretation of upcoming observations of high-redshift galaxies. Based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, under Large Program 185.A-0791. JSPS Overseas Research Fellow.
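
    For readers unfamiliar with the equivalent-width quantities these diagnostics are built on, the sketch below shows a generic rest-frame EW estimator applied to a toy continuum-normalized spectrum with a Gaussian emission line near 1909 Å. It is a standard textbook estimator, not the measurement pipeline used by VUDS.

      import numpy as np

      def equivalent_width(wave, flux, continuum, window):
          """EW = integral of (F/Fc - 1) dlambda over the line window (positive for emission)."""
          lo, hi = window
          m = (wave >= lo) & (wave <= hi)
          y = flux[m] / continuum[m] - 1.0
          x = wave[m]
          return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule

      # Toy spectrum: flat continuum plus a Gaussian emission line at 1909 A.
      wave = np.linspace(1880.0, 1940.0, 600)
      continuum = np.ones_like(wave)
      flux = continuum + 2.0 * np.exp(-0.5 * ((wave - 1909.0) / 2.0) ** 2)
      ew = equivalent_width(wave, flux, continuum, window=(1900.0, 1918.0))
      print(f"EW ~ {ew:.1f} A")   # roughly amplitude * sigma * sqrt(2 pi) ~ 10 A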

  18. User's Guide - WRF Lightning Assimilation

    EPA Pesticide Factsheets

    This document describes how to run WRF with the lightning assimilation technique described in Heath et al. (2016). The assimilation method uses gridded lightning data to trigger and suppress sub-grid deep convection in Kain-Fritsch.

  19. Precipitation characteristics of CAM5 physics at mesoscale resolution during MC3E and the impact of convective timescale choice

    DOE PAGES

    Gustafson, William I.; Ma, Po-Lun; Singh, Balwinder

    2014-12-17

    The physics suite of the Community Atmosphere Model version 5 (CAM5) has recently been implemented in the Weather Research and Forecasting (WRF) model to explore the behavior of the parameterization suite at high resolution and in the more controlled setting of a limited area model. The initial paper documenting this capability characterized the behavior for northern high latitude conditions. This present paper characterizes the precipitation behavior for continental, mid-latitude, springtime conditions during the Midlatitude Continental Convective Clouds Experiment (MC3E) over the central United States. This period exhibited a range of convective conditions, from those driven strongly by large-scale synoptic regimes to more locally driven convection. The study focuses on the precipitation behavior at 32 km grid spacing to better anticipate how the physics will behave in the global model when used at similar grid spacing in the coming years. Importantly, one change to the Zhang-McFarlane deep convective parameterization when implemented in WRF was to make the convective timescale parameter an explicit function of grid spacing. This study examines the sensitivity of the precipitation to the default value of the convective timescale in WRF, which is 600 seconds for 32 km grid spacing, relative to the value of 3600 seconds used for 2 degree grid spacing in CAM5. For comparison, an infinite convective timescale is also used. The results show that the 600 second timescale gives the most accurate precipitation over the central United States in terms of rain amount. However, this setting has the worst precipitation diurnal cycle, with the convection too tightly linked to the daytime surface heating. Longer timescales greatly improve the diurnal cycle but result in less precipitation and produce a low bias. An analysis of rain rates shows that the accurate precipitation amount obtained with the shorter timescale is assembled from an overabundance of drizzle combined with too few heavy rain events. With longer timescales one can improve the distribution, particularly for the extreme rain rates. Ultimately, without changing other aspects of the physics, one must choose between accurate diurnal timing and rain amount when choosing an appropriate convective timescale.
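
    The sketch below illustrates what "making the convective timescale an explicit function of grid spacing" could look like, anchored only to the two values quoted above (600 s at 32 km and 3600 s at roughly 2 degrees, taken here as ~222 km). The log-linear interpolation is an assumption for illustration and is not the formula actually implemented in the WRF/CAM5 code.

      import numpy as np

      def convective_timescale(dx_km, anchors=((32.0, 600.0), (222.0, 3600.0))):
          """Interpolate tau (s) log-linearly in grid spacing between two anchor points (assumed form)."""
          (dx1, t1), (dx2, t2) = anchors
          slope = (np.log(t2) - np.log(t1)) / (np.log(dx2) - np.log(dx1))
          return float(t1 * (dx_km / dx1) ** slope)

      for dx in (4.0, 32.0, 100.0, 222.0):
          print(f"dx = {dx:6.1f} km  ->  tau ~ {convective_timescale(dx):7.0f} s")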

  20. Structural features of single crystals of LuB{sub 12} upon a transition to the cage-glass phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolotina, N. B., E-mail: nb-bolotina@mail.ru; Verin, I. A.; Shitsevalova, N. Yu.

    2016-03-15

    The unit-cell parameters of dodecaboride LuB{sub 12}, which undergoes a transition to the cage-glass phase, have been determined for the first time in the temperature range of 50–75 K by X-ray diffraction, and the single-crystal structure of this compound is established at 50 K. Nonlinear changes in the unit-cell parameters correspond to anomalies in the physical properties near the glass-transition temperature T* ~ 50–70 K. This compound has cubic symmetry at room temperature, which is reduced to tetragonal symmetry at lower temperatures. Based on the X-ray diffraction data and relying on the physical properties of the crystals, the structure model in which a small fraction (~15%) of Lu atoms is displaced from the 2a sites at the centers of the B{sub 24} cuboctahedra to the 16n sites of sp. gr. I4/mmm seems preferable.

  1. Dispersion of thermooptic coefficients of soda-lime-silica glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghosh, G.

    1995-01-01

    The thermooptic coefficients, i.e., the variation of refractive index with temperature (dn/dT), are analyzed in a physically meaningful model for two series of soda-lime-silica glasses: 25Na{sub 2}O·xCaO·(75 − x)SiO{sub 2} and (25 − x)Na{sub 2}O·xCaO·75SiO{sub 2}. This model is based on three physical parameters--the thermal expansion coefficient and the excitonic and isentropic optical bands that are in the vacuum ultraviolet region--instead of on consideration of the temperature coefficient of electronic polarizability, as suggested in 1960. This model is capable of predicting and analyzing the thermooptic coefficients throughout the transmission region of the optical glasses at any operating temperature.

  2. Addressing sub-scan variability of tundra snow properties in ground-based Ku- and X-band scatterometer observations

    NASA Astrophysics Data System (ADS)

    King, J. M.; Kasurak, A.; Kelly, R. E.; Duguay, C. R.; Derksen, C.; Rutter, N.; Sandells, M.; Watts, T.

    2012-12-01

    During the winter of 2010-2011, ground-based Ku- (17.2 GHz) and X-band (9.6 GHz) scatterometers were deployed near Churchill, Manitoba, Canada to evaluate the potential for dual-frequency observation of tundra snow properties. Field-based scatterometer observations, when combined with in-situ snowpack properties and physically based models, provide the means necessary to develop and evaluate local scale property retrievals. To form a meaningful analysis of the observed physical interaction space, potential sources of bias and error in the observed backscatter must be identified and quantified. This paper explores variation in observed Ku- and X-band backscatter in relation to the physical complexities of shallow tundra snow, whose properties evolve at scales smaller than the observing instrument. The University of Waterloo scatterometer (UW-Scat) integrates observations over wide azimuth sweeps, several meters in length, to minimize errors resulting from radar fade and poor signal-to-noise ratios. Under ideal conditions, an assumption is made that the observed snow target is homogeneous. Despite an often outwardly homogeneous appearance, topographic elements of the Canadian open tundra produce significant local scale variability in snow properties, including snow water equivalent (SWE). Snow at open tundra sites observed during this campaign was found to vary by as much as 20 cm in depth and 40 mm in SWE within the scatterometer field of view. Previous studies suggest that changes in snow properties of this order will produce significant variation in backscatter, potentially introducing bias into products used for analysis. To assess the influence of sub-scan variability, extensive snow surveys were completed within the scatterometer field of view immediately after each scan at 32 sites. A standardized sampling protocol captured a grid of geo-located measurements, characterizing the horizontal variability of bulk properties including depth, density, and SWE. Based upon these measurements, continuous surfaces were generated to represent the observed snow target. Two snow pits were also completed within the field of view, quantifying vertical variability in density, permittivity, temperature, grain size, and stratigraphy. A new post-processing method is applied to divide the previously aggregated scatterometer observations into smaller sub-sets, which are then co-located with the physical snow observations. Sub-scan backscatter coefficients and their relationship to tundra snowpack parameters are then explored. The results presented here provide quantitative methods relevant to the radar observation science of snow and, therefore, to potential future space-borne missions such as the Cold Regions Hydrology High-resolution Observatory (CoReH2O), a candidate European Space Agency Earth Explorer mission. Moreover, this paper provides guidelines for future studies exploring ground-based scatterometer observations of tundra snow.

  3. New possibilities of neodymium-doped vanadate crystals as active media for diode-pumped lasers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vlasov, V I; Garnov, S V; Zavartsev, Yu D

    The spectral and lasing parameters of Nd:GdVO{sub 4}, Nd:YVO{sub 4}, and Nd:Gd{sub 0.7}Y{sub 0.3}VO{sub 4} vanadate crystals cut along the c axis are studied. Lasing is obtained for the first time in a nonselective resonator at the {sup 4}F{sub 3/2}-{sup 4}I{sub 11/2} transition at 1065.5 nm in a Nd:GdVO{sub 4} crystal. Tuning is realised in the range from 1062.3 to 1066.1 nm and two-frequency lasing is obtained. (Special issue devoted to the 25th anniversary of the A.M. Prokhorov General Physics Institute.)

  4. A Taxonomy on Accountability and Privacy Issues in Smart Grids

    NASA Astrophysics Data System (ADS)

    Naik, Ameya; Shahnasser, Hamid

    2017-07-01

    Cyber-Physical Systems (CPS) are combinations of computation, networking, and physical processes. Embedded computers and networks monitor and control the physical processes, which affect computations and vice versa. Two applications of cyber-physical systems include health care and the smart grid. In this paper, we have considered privacy aspects of cyber-physical systems applicable to the smart grid. The smart grid, in collaboration with different stakeholders, can help in the improvement of power generation, communication, circulation and consumption. Proper management and monitoring of energy usage by customers and utilities can be achieved through proper transmission and electricity flow; however, cyber vulnerability could increase due to greater assimilation and linkage. This paper discusses various frameworks and architectures proposed for achieving accountability in smart grids by addressing privacy issues in the Advanced Metering Infrastructure (AMI). This paper also highlights additional work needed for accountability in more precise specifications such as uncertainty or ambiguity, indistinctness, unmanageability, and undetectability.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malik, Hitendra K., E-mail: hkmalik@physics.iitd.ac.in; Singh, Omveer; Dahiya, Raj P.

    We have established a hot cathode arc discharge plasma system, where different stainless steel samples can be treated by monitoring the plasma parameters and nitriding parameters independently. In the present work, a mixture of 70% N{sub 2} and 30% H{sub 2} gases was fed into the plasma chamber, and the treatment time and substrate temperature were optimized for treating 304L stainless steel samples. Various physical techniques such as X-ray diffraction, energy dispersive X-ray spectroscopy and micro-Vickers hardness testing were employed to determine the structure, surface composition and surface hardness of the treated samples.

  6. Model atmospheres for M (sub)dwarf stars. 1: The base model grid

    NASA Technical Reports Server (NTRS)

    Allard, France; Hauschildt, Peter H.

    1995-01-01

    We have calculated a grid of more than 700 model atmospheres valid for a wide range of parameters encompassing the coolest known M dwarfs, M subdwarfs, and brown dwarf candidates: 1500 less than or equal to T(sub eff) less than or equal to 4000 K, 3.5 less than or equal to log g less than or equal to 5.5, and -4.0 less than or equal to (M/H) less than or equal to +0.5. Our equation of state includes 105 molecules and up to 27 ionization stages of 39 elements. In the calculations of the base grid of model atmospheres presented here, we include over 300 molecular bands of four molecules (TiO, VO, CaH, FeH) in the JOLA approximation, the water opacity of Ludwig (1971), collision-induced opacities, b-f and f-f atomic processes, as well as about 2 million spectral lines selected from a list with more than 42 million atomic and 24 million molecular (H2, CH, NH, OH, MgH, SiH, C2, CN, CO, SiO) lines. High-resolution synthetic spectra are obtained using an opacity sampling method. The model atmospheres and spectra are calculated with the generalized stellar atmosphere code PHOENIX, assuming LTE, plane-parallel geometry, energy (radiative plus convective) conservation, and hydrostatic equilibrium. The model spectra give close agreement with observations of M dwarfs across a wide spectral range from the blue to the near-IR, with one notable exception: the fit to the water bands. We discuss several practical applications of our model grid, e.g., broadband colors derived from the synthetic spectra. In light of current efforts to identify genuine brown dwarfs, we also show how low-resolution spectra of cool dwarfs vary with surface gravity, and how the high-resolution line profile of the Li I resonance doublet depends on the Li abundance.

  7. Sensitivity of High-Resolution Simulations of Hurricane Bob (1991) to Planetary Boundary Layer Parameterizations

    NASA Technical Reports Server (NTRS)

    Braun, Scott A.; Tao, Wei-Kuo

    1999-01-01

    The MM5 mesoscale model is used to simulate Hurricane Bob (1991) using grids nested to high resolution (4 km). Tests are conducted to determine the sensitivity of the simulation to the available planetary boundary layer parameterizations, including the bulk-aerodynamic, Blackadar, Medium-Range Forecast (MRF) model, and Burk-Thompson boundary-layer schemes. Significant sensitivity is seen, with minimum central pressures varying by up to 17 mb. The Burk-Thompson and bulk-aerodynamic boundary-layer schemes produced the strongest storms while the MRF scheme produced the weakest storm. Precipitation structure of the simulated hurricanes also varied substantially with the boundary layer parameterizations. Diagnostics of boundary-layer variables indicated that the intensity of the simulated hurricanes generally increased with the ratio of the surface exchange coefficients for heat and momentum, C(sub h)/C(sub M), although the manner in which the vertical mixing takes place was also important. Findings specific to the boundary-layer schemes include: 1) the MRF scheme produces mixing that is too deep and causes drying of the lower boundary layer in the inner-core region of the hurricane; 2) the bulk-aerodynamic scheme produces mixing that is probably too shallow, but results in a strong hurricane because of a large value of C(sub h)/C(sub M) (approximately 1.3); 3) the MRF and Blackadar schemes are weak partly because of smaller surface moisture fluxes that result in a reduced value of C(sub h)/C(sub M) (approximately 0.7); 4) the Burk-Thompson scheme produces a strong storm with C(sub h)/C(sub M) approximately 1; and 5) the formulation of the wind-speed dependence of the surface roughness parameter, z(sub 0), is important for getting appropriate values of the surface exchange coefficients in hurricanes based upon current estimates of these parameters.
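
    To make the role of C(sub h)/C(sub M) concrete, the sketch below evaluates generic bulk-aerodynamic surface fluxes for two hypothetical coefficient pairs bracketing the ratios quoted above. The coefficient values, wind speed and air-sea contrasts are placeholders, not diagnostics from any of the boundary-layer schemes tested in the paper.

      RHO = 1.15       # air density (kg m-3)
      CP = 1004.0      # specific heat of air (J kg-1 K-1)
      LV = 2.5e6       # latent heat of vaporization (J kg-1)

      def surface_fluxes(wind, t_sea, t_air, q_sea, q_air, ch, cm):
          """Return (momentum flux in N m-2, enthalpy flux in W m-2) from bulk formulas."""
          tau = RHO * cm * wind ** 2
          enthalpy = RHO * ch * wind * (CP * (t_sea - t_air) + LV * (q_sea - q_air))
          return tau, enthalpy

      # Same conditions, two hypothetical Ch/Cm ratios (~0.7 vs ~1.3).
      for ch, cm in ((1.1e-3, 1.6e-3), (1.6e-3, 1.2e-3)):
          tau, hk = surface_fluxes(wind=30.0, t_sea=302.0, t_air=300.0,
                                   q_sea=0.022, q_air=0.018, ch=ch, cm=cm)
          print(f"Ch/Cm = {ch/cm:4.2f}: tau = {tau:6.2f} N m-2, enthalpy flux = {hk:7.0f} W m-2")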

  8. Scaling between reanalyses and high-resolution land-surface modelling in mountainous areas - enabling better application and testing of reanalyses in heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Gruber, S.; Fiddes, J.

    2013-12-01

    In mountainous topography, the difference in scale between atmospheric reanalyses (typically tens of kilometres) and relevant processes and phenomena near the Earth surface, such as permafrost or snow cover (meters to tens of meters), is most obvious. This contrast of scales is one of the major obstacles to using reanalysis data for the simulation of surface phenomena and to confronting reanalyses with independent observation. Using the example of modelling permafrost in mountain areas (but simple to generalise to other phenomena and heterogeneous environments), we present and test methods against measurements for (A) scaling atmospheric data from the reanalysis to the ground level and (B) smart sampling of the heterogeneous landscape in order to set up a lumped model simulation that represents the high-resolution land surface. TopoSCALE (Part A, see http://dx.doi.org/10.5194/gmdd-6-3381-2013) is a scheme which scales coarse-grid climate fields to fine-grid topography using pressure level data. In addition, it applies the necessary topographic corrections, e.g., to those variables required for computation of radiation fields. This provides the necessary driving fields to the LSM. Tested against independent ground data, this scheme has been shown to improve the scaling and distribution of meteorological parameters in complex terrain, as compared to conventional methods, e.g. lapse rate based approaches. TopoSUB (Part B, see http://dx.doi.org/10.5194/gmd-5-1245-2012) is a surface pre-processor designed to sample a fine-grid domain (defined by a digital elevation model) along important topographical (or other) dimensions through a clustering scheme. This allows constructing a lumped model representing the main sources of fine-grid variability and applying a 1D LSM efficiently over large areas. Results can be processed to derive (i) summary statistics at coarse-scale re-analysis grid resolution, (ii) high-resolution data fields spatialized to, e.g., the fine-scale digital elevation model grid, or (iii) validation products only for locations at which measurements exist. The ability of TopoSUB to approximate results simulated by a 2D distributed numerical LSM with a factor of ~10,000 fewer computations is demonstrated by comparison of 2D and lumped simulations. Successful application of the combined scheme in the European Alps is reported and, based on its results, open issues for future research are outlined.
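
    The sketch below gives a schematic of the TopoSUB-style sampling idea: cluster fine-grid topographic predictors, run a lumped 1D model once per cluster, and map the results back onto the fine grid. The k-means routine, the two predictors and the toy "model" are stand-ins chosen for illustration; they are not the TopoSUB or LSM code.

      import numpy as np

      def kmeans(x, k, n_iter=50, seed=0):
          """Plain k-means: returns (labels, centroids) for points x of shape (n, d)."""
          rng = np.random.default_rng(seed)
          centroids = x[rng.choice(len(x), size=k, replace=False)]
          for _ in range(n_iter):
              labels = np.argmin(((x[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
              for j in range(k):
                  if np.any(labels == j):
                      centroids[j] = x[labels == j].mean(axis=0)
          return labels, centroids

      # Toy fine-grid "DEM": predictors = (elevation, slope), crudely rescaled.
      rng = np.random.default_rng(1)
      elevation = rng.uniform(1500.0, 3500.0, size=10000)
      slope = rng.uniform(0.0, 45.0, size=10000)
      predictors = np.column_stack([elevation / 1000.0, slope / 45.0])

      labels, centroids = kmeans(predictors, k=50)

      # Placeholder 1D "land-surface model": a made-up function of the cluster centroid.
      def toy_lsm(centroid):
          elev_km, slope_norm = centroid
          return max(0.0, 2.0 * (elev_km - 1.8) - slope_norm)   # e.g. a snow index

      lumped = np.array([toy_lsm(c) for c in centroids])   # one run per cluster
      fine_grid_result = lumped[labels]                    # spatialize back to the fine grid
      print("fine-grid mean of the remapped result:", round(float(fine_grid_result.mean()), 3))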

  9. Physics-based distributed snow models in the operational arena: Current and future challenges

    NASA Astrophysics Data System (ADS)

    Winstral, A. H.; Jonas, T.; Schirmer, M.; Helbig, N.

    2017-12-01

    The demand for modeling tools robust to climate change and weather extremes, along with coincident increases in computational capabilities, has led to an increase in the use of physics-based snow models in operational applications. Current operational applications include the WSL-SLF's across Switzerland, ASO's in California, and USDA-ARS's in Idaho. While the physics-based approaches offer many advantages, there remain limitations and modeling challenges. The most evident limitation remains computation times that often limit forecasters to a single, deterministic model run. Other limitations, however, remain less conspicuous amidst the assumptions that these models require little to no calibration based on their foundation on physical principles. Yet all energy balance snow models seemingly contain parameterizations or simplifications of processes where validation data are scarce or present understanding is limited. At the research-basin scale where many of these models were developed, these modeling elements may prove adequate. However, when applied over large areas, spatially invariable parameterizations of snow albedo, roughness lengths and atmospheric exchange coefficients - all vital to determining the snowcover energy balance - become problematic. Moreover, as we apply models over larger grid cells, the representation of sub-grid variability such as the snow-covered fraction adds to the challenges. Here, we will demonstrate some of the major sensitivities of distributed energy balance snow models to particular model constructs, the need for advanced and spatially flexible methods and parameterizations, and prompt the community for open dialogue and future collaborations to further modeling capabilities.

  10. Metadata Creation Tool Content Template For Data Stewards

    EPA Science Inventory

    A space-time Bayesian fusion model (McMillan, Holland, Morara, and Feng, 2009) is used to provide daily, gridded predictive PM2.5 (daily average) and O3 (daily 8-hr maximum) surfaces for 2001-2005. The fusion model uses both air quality monitoring data from ...

  11. Benefits of using enhanced air quality information in human health studies

    EPA Science Inventory

    The ability of four (4) enhancements of gridded PM2.5 concentrations derived from observations and air quality models to detect the relative risk of long-term exposure to PM2.5 are evaluated with a simulation study. The four enhancements include nearest-nei...

  12. Uncertain Representations of Sub-Grid Pollutant Transport in Chemistry-Transport Models and Impacts on Long-Range Transport and Global Composition

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Zhu, Z.; Ott, L. E.; Molod, A.; Duncan, B. N.; Nielsen, J. E.

    2009-01-01

    Sub-grid transport, by convection and turbulence, is known to play an important role in lofting pollutants from their source regions. Consequently, the long-range transport and climatology of simulated atmospheric composition are impacted. This study uses the Goddard Earth Observing System, Version 5 (GEOS-5) atmospheric model to study pollutant transport. The baseline model uses a Relaxed Arakawa-Schubert (RAS) scheme that represents convection through a sequence of linearly entraining cloud plumes characterized by unique detrainment levels. Thermodynamics, moisture and trace gases are transported in the same manner. Various approximate forms of trace-gas transport are implemented, in which the box-averaged cloud mass fluxes from RAS are used with different numerical approaches. Substantial impacts on forward-model simulations of CO (using a linearized chemistry) are evident. In particular, some aspects of simulations using a diffusive form of sub-grid transport bear more resemblance to space-based CO observations than do the baseline simulations with RAS transport. Implications for transport in the real atmosphere will be discussed. Another issue of importance is that many adjoint/inversion computations use simplified representations of sub-grid transport that may be inconsistent with the forward models: implications will be discussed. Finally, simulations using a complex chemistry model in GEOS-5 (in place of the linearized CO model) are underway: noteworthy results from this simulation will be mentioned.
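
    As a minimal, self-contained illustration of the "diffusive form" of sub-grid transport mentioned above, the sketch below converts an idealized column of cloud mass flux into an effective vertical eddy diffusivity and applies it to a CO-like tracer with a conservative explicit step. The conversion K ≈ M Δz / ρ and all profiles are assumptions for illustration; this is not the GEOS-5 or RAS implementation.

      import numpy as np

      def diffuse_tracer(q, rho, dz, mass_flux, dt, n_steps):
          """Conservative explicit vertical diffusion of a mixing ratio q."""
          k_eddy = mass_flux * dz / rho                # effective diffusivity (m2 s-1), assumed
          for _ in range(n_steps):
              # Downgradient flux at interior interfaces: F = -rho*K * dq/dz
              rho_i = 0.5 * (rho[1:] + rho[:-1])
              k_i = 0.5 * (k_eddy[1:] + k_eddy[:-1])
              flux = -rho_i * k_i * (q[1:] - q[:-1]) / dz
              dq = np.zeros_like(q)
              dq[:-1] -= flux * dt / (rho[:-1] * dz)   # flux leaving the layer below
              dq[1:] += flux * dt / (rho[1:] * dz)     # flux entering the layer above
              q = q + dq
          return q

      nz, dz, dt = 40, 500.0, 60.0
      z = (np.arange(nz) + 0.5) * dz
      rho = 1.2 * np.exp(-z / 8000.0)                  # idealized density profile
      q0 = np.where(z < 1500.0, 100.0, 1.0)            # CO-like tracer, high near the surface (ppb)
      mflux = 0.05 * np.exp(-((z - 5000.0) / 3000.0) ** 2)   # idealized cloud mass flux (kg m-2 s-1)

      q1 = diffuse_tracer(q0, rho, dz, mflux, dt, n_steps=120)
      col0, col1 = np.sum(rho * q0 * dz), np.sum(rho * q1 * dz)
      print(f"column burden conserved to {abs(col1 - col0) / col0:.2e} (relative)")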

  13. Techniques for grid manipulation and adaptation. [computational fluid dynamics]

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
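
    The sketch below shows the kind of algebraic mapping that such methods build on: a transfinite (Coons) interpolation of a structured grid between four boundary curves. It is a generic illustration of algebraic grid generation, not the control point form itself, and the channel geometry is made up for the example.

      import numpy as np

      def transfinite_grid(bottom, top, left, right):
          """Coons patch: bottom/top are (ni, 2) curves, left/right are (nj, 2); corners must match."""
          ni, nj = len(bottom), len(left)
          s = np.linspace(0.0, 1.0, ni)[:, None, None]   # xi
          t = np.linspace(0.0, 1.0, nj)[None, :, None]   # eta
          b, tp = bottom[:, None, :], top[:, None, :]
          lft, rgt = left[None, :, :], right[None, :, :]
          c00, c10, c01, c11 = bottom[0], bottom[-1], top[0], top[-1]
          return ((1 - t) * b + t * tp + (1 - s) * lft + s * rgt
                  - ((1 - s) * (1 - t) * c00 + s * (1 - t) * c10
                     + (1 - s) * t * c01 + s * t * c11))      # shape (ni, nj, 2)

      # Example: a 21x11 grid in a channel with a bumpy lower wall.
      ni, nj = 21, 11
      x = np.linspace(0.0, 3.0, ni)
      bottom = np.column_stack([x, 0.2 * np.exp(-(x - 1.5) ** 2 / 0.1)])
      top = np.column_stack([x, np.ones(ni)])
      left = np.column_stack([np.zeros(nj), np.linspace(bottom[0, 1], 1.0, nj)])
      right = np.column_stack([np.full(nj, 3.0), np.linspace(bottom[-1, 1], 1.0, nj)])
      grid = transfinite_grid(bottom, top, left, right)
      print(grid.shape, grid[0, 0], grid[-1, -1])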

  14. Energy regeneration model of self-consistent field of electron beams into electric power*

    NASA Astrophysics Data System (ADS)

    Kazmin, B. N.; Ryzhov, D. R.; Trifanov, I. V.; Snezhko, A. A.; Savelyeva, M. V.

    2016-04-01

    We consider physico-mathematical models of electric processes in electron beams, conversion of beam parameters into electric power values, and their transformation into the users' electric power grid (onboard spacecraft network). We perform computer simulations validating the high energy efficiency of the studied processes for application in electric power technology to produce power, as well as in electric power plants and propulsion installations on spacecraft.

  15. Forecast skill of a high-resolution real-time mesoscale model designed for weather support of operations at Kennedy Space Center and Cape Canaveral Air Station

    NASA Technical Reports Server (NTRS)

    Taylor, Gregory E.; Zack, John W.; Manobianco, John

    1994-01-01

    NASA funded Mesoscale Environmental Simulations and Operations (MESO), Inc. to develop a version of the Mesoscale Atmospheric Simulation System (MASS). The model has been modified specifically for short-range forecasting in the vicinity of KSC/CCAS. To accomplish this, the model domain has been limited to increase the number of horizontal grid points (and therefore grid resolution), and the model's treatment of precipitation, radiation, and surface hydrology physics has been enhanced to predict convection forced by local variations in surface heat, moisture fluxes, and cloud shading. The objectives of this paper are (1) to provide an overview of MASS, including the real-time initialization and configuration for running the data pre-processor and model, and (2) to summarize the preliminary evaluation of the model's forecasts of temperature, moisture, and wind at selected rawinsonde station locations during February 1994 and July 1994. MASS is a hydrostatic, three-dimensional modeling system which includes schemes to represent planetary boundary layer processes, surface energy and moisture budgets, free atmospheric long and short wave radiation, cloud microphysics, and sub-grid scale moist convection.

  16. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment

    PubMed Central

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2013-01-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate the growing threat of cyber-based attacks, in numbers and sophistication, targeting the nation’s electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments. PMID:25685516

  17. Cyber-physical security of Wide-Area Monitoring, Protection and Control in a smart grid environment.

    PubMed

    Ashok, Aditya; Hahn, Adam; Govindarasu, Manimaran

    2014-07-01

    Smart grid initiatives will produce a grid that is increasingly dependent on its cyber infrastructure in order to support the numerous power applications necessary to provide improved grid monitoring and control capabilities. However, recent findings documented in government reports and other literature indicate the growing threat of cyber-based attacks, in numbers and sophistication, targeting the nation's electric grid and other critical infrastructures. Specifically, this paper discusses cyber-physical security of Wide-Area Monitoring, Protection and Control (WAMPAC) from a coordinated cyber attack perspective and introduces a game-theoretic approach to address the issue. Finally, the paper briefly describes how cyber-physical testbeds can be used to evaluate the security research and perform realistic attack-defense studies for smart grid type environments.

  18. CVD-MPFA full pressure support, coupled unstructured discrete fracture-matrix Darcy-flux approximations

    NASA Astrophysics Data System (ADS)

    Ahmed, Raheel; Edwards, Michael G.; Lamine, Sadok; Huisman, Bastiaan A. H.; Pal, Mayur

    2017-11-01

    Two novel control-volume methods are presented for flow in fractured media, and involve coupling the control-volume distributed multi-point flux approximation (CVD-MPFA) constructed with full pressure support (FPS), to two types of discrete fracture-matrix approximation for simulation on unstructured grids; (i) involving hybrid grids and (ii) a lower dimensional fracture model. Flow is governed by Darcy's law together with mass conservation both in the matrix and the fractures, where large discontinuities in permeability tensors can occur. Finite-volume FPS schemes are more robust than the earlier CVD-MPFA triangular pressure support (TPS) schemes for problems involving highly anisotropic homogeneous and heterogeneous full-tensor permeability fields. We use a cell-centred hybrid-grid method, where fractures are modelled by lower-dimensional interfaces between matrix cells in the physical mesh but expanded to equi-dimensional cells in the computational domain. We present a simple procedure to form a consistent hybrid-grid locally for a dual-cell. We also propose a novel hybrid-grid for intersecting fractures, for the FPS method, which reduces the condition number of the global linear system and leads to larger time steps for tracer transport. The transport equation for tracer flow is coupled with the pressure equation and provides flow parameter assessment of the fracture models. Transport results obtained via TPS and FPS hybrid-grid formulations are compared with the corresponding results of fine-scale explicit equi-dimensional formulations. The results show that the hybrid-grid FPS method applies to general full-tensor fields and provides improved robust approximations compared to the hybrid-grid TPS method for fractured domains, for both weakly anisotropic permeability fields and very strong anisotropic full-tensor permeability fields where the TPS scheme exhibits spurious oscillations. The hybrid-grid FPS formulation is extended to compressible flow and the results demonstrate the method is also robust for transient flow. Furthermore, we present FPS coupled with a lower-dimensional fracture model, where fractures are strictly lower-dimensional in the physical mesh as well as in the computational domain. We present a comparison of the hybrid-grid FPS method and the lower-dimensional fracture model for several cases of isotropic and anisotropic fractured media which illustrate the benefits of the respective methods.

  19. Basic research and data analysis for the earth and ocean physics applications program and for the National Geodetic Satellite program

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Data acquisition using single image and seven image data processing is used to provide a precise and accurate geometric description of the earth's surface. Transformation parameters and network distortions are determined; sea slope along the continental boundaries of the U.S. and earth rotation are examined, along with a close-grid geodynamic satellite system. Data are derived for a mathematical description of the earth's gravitational field; time variations are determined for the geometry of the ocean surface, the solid earth, the gravity field, and other geophysical parameters.

  20. VizieR Online Data Catalog: Fundamental parameters of Kepler stars (Silva Aguirre+, 2015)

    NASA Astrophysics Data System (ADS)

    Silva Aguirre, V.; Davies, G. R.; Basu, S.; Christensen-Dalsgaard, J.; Creevey, O.; Metcalfe, T. S.; Bedding, T. R.; Casagrande, L.; Handberg, R.; Lund, M. N.; Nissen, P. E.; Chaplin, W. J.; Huber, D.; Serenelli, A. M.; Stello, D.; van Eylen, V.; Campante, T. L.; Elsworth, Y.; Gilliland, R. L.; Hekker, S.; Karoff, C.; Kawaler, S. D.; Kjeldsen, H.; Lundkvist, M. S.

    2016-02-01

    Our sample has been extracted from the 77 exoplanet host stars presented in Huber et al. (2013, Cat. J/ApJ/767/127). We have made use of the full time-base of observations from the Kepler satellite to uniformly determine precise fundamental stellar parameters, including ages, for a sample of exoplanet host stars where high-quality asteroseismic data were available. We devised a Bayesian procedure flexible in its input and applied it to different grids of models to study systematics from input physics and extract statistically robust properties for all stars. (4 data files).

  1. Preliminary results on heavy flavor physics at SLD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Usher, T.

    1994-12-01

    The author reports on preliminary heavy flavor physics results from the SLD detector at the SLAC Linear Collider. Efficient tagging of b{bar b} events is achieved with an impact parameter technique that takes advantage of the small and stable interaction point of the SLC and all charged tracks in Z{sup 0} decays. This technique is applied to samples of Z{sup 0} events collected during the 1992 and 1993 physics runs. Preliminary measurements of the ratio R{sub b} = {Gamma}(Z{sup 0} {yields} b{bar b})/{Gamma}(Z{sup 0} {yields} hadrons) and the average B hadron lifetime <{tau}{sub B}> are reported. In a sample of 27K Z{sup 0} events, values of R{sub b} = 0.235 {+-} 0.006(stat.) {+-} 0.018(syst.) and <{tau}{sub B}> = 1.53 {+-} 0.006(stat.) {+-} 0.018(syst.) are obtained. In addition, the first measurement of the left-right asymmetry A{sub b} is reported. Using a sample of 38K Z{sup 0} events with a luminosity weighted electron polarization of 62%, the author obtains a preliminary value of A{sub b} = 0.94 {+-} 0.006(stat.) {+-} 0.018(syst.).

  2. Physics evaluation of compact tokamak ignition experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uckan, N.A.; Houlberg, W.A.; Sheffield, J.

    1985-01-01

    At present, several approaches for compact, high-field tokamak ignition experiments are being considered. A comprehensive method for analyzing the potential physics operating regimes and plasma performance characteristics of such ignition experiments with O-D (analytic) and 1-1/2-D (WHIST) transport models is presented. The results from both calculations are in agreement and show that there are regimes in parameter space in which a class of small (R/sub o/ approx. 1-2 m), high-field (B/sub o/ approx. 8-13 T) tokamaks with aB/sub o/S/q/sub */ approx. 25 +- 5 and kappa = b/a approx. 1.6-2.0 appears ignitable for a reasonable range of transport assumptions. Considering both the density and beta limits, an evaluation of the performance is presented for various forms of chi/sub e/ and chi/sub i/, including degradation at high power and sawtooth activity. The prospects of ohmic ignition are also examined. 16 refs., 13 figs.

  3. Measurement of beam divergence of 30-centimeter dished grids

    NASA Technical Reports Server (NTRS)

    Danilowicz, R. L.; Rawlin, V. K.; Banks, B. A.; Wintucky, E. G.

    1973-01-01

    The beam divergence of a 30-centimeter diameter thruster with dished grids was calculated from current densities measured with a probe rake containing seventeen planar molybdenum probes. The measured data were analyzed as a function of a number of parameters. The most sensitive parameters were the amount of compensation of the accelerator grid and the ratio of net to total accelerating voltage. The thrust losses were reduced by over 5 percent with the use of compensated grids alone, and by variation of other parameters the overall thrust losses due to beam divergence were reduced to less than 2 percent.

  4. Measurement of beam divergence of 30-centimeter dished grids

    NASA Technical Reports Server (NTRS)

    Danilowicz, R. L.; Rawlin, V. K.; Banks, B. A.; Wintucky, E. G.

    1973-01-01

    The beam divergence of a 30-centimeter diameter thrustor with dished grids was calculated from current densities measured with a probe rake containing seventeen planar molybdenum probes. The measured data were analyzed as a function of a number of parameters. The most sensitive parameters were the amount of compensation of the accelerator grid and the ratio of net to total accelerating voltage. The thrust losses were reduced by over 5 percent with the use of compensated grids alone, and by variation of other parameters the overall thrust losses due to beam divergence were reduced to less than 2 percent.

  5. An alternate protocol to achieve stochastic and deterministic resonances

    NASA Astrophysics Data System (ADS)

    Tiwari, Ishant; Dave, Darshil; Phogat, Richa; Khera, Neev; Parmananda, P.

    2017-10-01

    Periodic and Aperiodic Stochastic Resonance (SR) and Deterministic Resonance (DR) are studied in this paper. To check for the ubiquitousness of the phenomena, two unrelated systems, namely, FitzHugh-Nagumo and a particle in a bistable potential well, are studied. Instead of the conventional scenario of noise amplitude (in the case of SR) or chaotic signal amplitude (in the case of DR) variation, a tunable system parameter ("a" in the case of the FitzHugh-Nagumo model and the damping coefficient "j" in the bistable model) is regulated. The operating values of these parameters are defined as the "setpoint" of the system throughout the present work. Our results indicate that there exists an optimal value of the setpoint for which maximum information transfer between the input and the output signals takes place. This information transfer from the input sub-threshold signal to the output dynamics is quantified by the normalised cross-correlation coefficient (|CCC|). |CCC| as a function of the setpoint exhibits a unimodal variation which is characteristic of SR (or DR). Furthermore, |CCC| is computed for a grid of noise (or chaotic signal) amplitude and setpoint values. The heat map of |CCC| over this grid yields the presence of a resonance region in the noise-setpoint plane for which the maximum enhancement of the input sub-threshold signal is observed. This resonance region could possibly be used to explain how organisms maintain their signal detection efficacy with fluctuating amounts of noise present in their environment. Interestingly, the method of regulating the setpoint without changing the noise amplitude was not able to induce Coherence Resonance (CR). A possible, qualitative reasoning for this is provided.
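
    The sketch below reproduces the spirit of the |CCC| heat-map analysis with a deliberately simple stand-in system: a threshold detector driven by a sub-threshold sinusoid plus noise, with the detector threshold playing the role of the "setpoint". The model choice and all numerical values are assumptions for illustration; the paper's FitzHugh-Nagumo and bistable-well systems are not simulated here.

      import numpy as np

      def abs_ccc(x, y):
          """|normalized cross-correlation coefficient| at zero lag."""
          x = x - x.mean(); y = y - y.mean()
          denom = np.sqrt((x * x).sum() * (y * y).sum())
          return 0.0 if denom == 0 else abs(float((x * y).sum() / denom))

      rng = np.random.default_rng(0)
      t = np.arange(0, 200.0, 0.01)
      signal = 0.3 * np.sin(2 * np.pi * 0.05 * t)        # sub-threshold periodic input

      noise_amps = np.linspace(0.05, 1.5, 30)
      setpoints = np.linspace(0.4, 1.6, 30)              # detector thresholds ("setpoints")
      heat = np.zeros((len(setpoints), len(noise_amps)))
      for i, theta in enumerate(setpoints):
          for j, d in enumerate(noise_amps):
              output = (signal + d * rng.standard_normal(t.size) > theta).astype(float)
              heat[i, j] = abs_ccc(signal, output)

      ii, jj = np.unravel_index(np.argmax(heat), heat.shape)
      print(f"max |CCC| = {heat[ii, jj]:.2f} at setpoint {setpoints[ii]:.2f}, "
            f"noise amplitude {noise_amps[jj]:.2f}")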

  6. NCAR global model topography generation software for unstructured grids

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Bacmeister, J. T.; Callaghan, P. F.; Taylor, M. A.

    2015-06-01

    It is the purpose of this paper to document the NCAR global model topography generation software for unstructured grids. Given a model grid, the software computes the fraction of the grid box covered by land, the gridbox mean elevation, and associated sub-grid scale variances commonly used for gravity wave and turbulent mountain stress parameterizations. The software supports regular latitude-longitude grids as well as unstructured grids; e.g. icosahedral, Voronoi, cubed-sphere and variable resolution grids. As an example application and in the spirit of documenting model development, exploratory simulations illustrating the impacts of topographic smoothing with the NCAR-DOE CESM (Community Earth System Model) CAM5.2-SE (Community Atmosphere Model version 5.2 - Spectral Elements dynamical core) are shown.
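
    The sketch below illustrates the kind of quantities such software produces by binning high-resolution source topography into target grid cells to obtain the land fraction, grid-box mean elevation, and sub-grid elevation variance. A toy regular latitude-longitude target grid and synthetic terrain are used for simplicity; the handling of unstructured target grids in the NCAR software is not reproduced here.

      import numpy as np

      def bin_topography(lon, lat, elev, is_land, lon_edges, lat_edges):
          """Return (land_frac, mean_elev, elev_var), each of shape (n_lat_cells, n_lon_cells)."""
          ix = np.digitize(lon, lon_edges) - 1
          iy = np.digitize(lat, lat_edges) - 1
          nlon, nlat = len(lon_edges) - 1, len(lat_edges) - 1
          land_frac = np.zeros((nlat, nlon))
          mean_elev = np.zeros((nlat, nlon))
          elev_var = np.zeros((nlat, nlon))
          for j in range(nlat):
              for i in range(nlon):
                  m = (ix == i) & (iy == j)
                  if m.any():
                      land_frac[j, i] = is_land[m].mean()
                      mean_elev[j, i] = elev[m].mean()
                      elev_var[j, i] = elev[m].var()
          return land_frac, mean_elev, elev_var

      # Toy high-resolution source data over a small region, binned to a 1-degree target grid.
      rng = np.random.default_rng(3)
      lon = rng.uniform(0.0, 4.0, 200000)
      lat = rng.uniform(40.0, 44.0, 200000)
      elev = 500.0 + 1500.0 * np.exp(-((lon - 2.0) ** 2 + (lat - 42.0) ** 2))  # a broad "mountain"
      is_land = (elev > 520.0).astype(float)
      lf, me, var = bin_topography(lon, lat, elev, is_land,
                                   lon_edges=np.arange(0.0, 5.0), lat_edges=np.arange(40.0, 45.0))
      print("sub-grid elevation std-dev (m) per cell:\n", np.sqrt(var).round(1))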

  7. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid only depends on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method to generate a smooth surface which is passing through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.
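
    As a minimal illustration of the elliptic step, the sketch below iterates the Winslow-type equations (the Poisson grid-generation system with the control functions set to zero) on a small structured grid with the boundary nodes held fixed, starting from a simple algebraic grid. It is a schematic of the general approach, not the method of the paper, which includes control functions and surface parametrizations.

      import numpy as np

      def winslow_smooth(x, y, n_iter=200):
          """Point iteration of the homogeneous (control-function-free) elliptic grid equations."""
          for _ in range(n_iter):
              x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1]); y_xi  = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
              x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2]); y_eta = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
              a = x_eta ** 2 + y_eta ** 2          # alpha
              b = x_xi * x_eta + y_xi * y_eta      # beta
              g = x_xi ** 2 + y_xi ** 2            # gamma
              for u in (x, y):
                  cross = u[2:, 2:] - u[:-2, 2:] - u[2:, :-2] + u[:-2, :-2]
                  u[1:-1, 1:-1] = (a * (u[2:, 1:-1] + u[:-2, 1:-1])
                                   + g * (u[1:-1, 2:] + u[1:-1, :-2])
                                   - 0.5 * b * cross) / (2.0 * (a + g) + 1e-12)
          return x, y

      # Algebraic initial grid on a quarter annulus, then elliptic smoothing.
      ni, nj = 25, 15
      xi, eta = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")
      r, theta = 1.0 + eta, 0.5 * np.pi * xi
      x, y = winslow_smooth(r * np.cos(theta), r * np.sin(theta))
      print("grid spans x:", round(float(x.min()), 2), "to", round(float(x.max()), 2))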

  8. MONTE CARLO POPULATION SYNTHESIS OF POST-COMMON-ENVELOPE WHITE DWARF BINARIES AND TYPE Ia SUPERNOVA RATE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ablimit, Iminhaji; Maeda, Keiichi; Li, Xiang-Dong

    Binary population synthesis (BPS) studies provide a comprehensive way to understand the evolution of binaries and their end products. Close white dwarf (WD) binaries have crucial characteristics for examining the influence of unresolved physical parameters on binary evolution. In this paper, we perform Monte Carlo BPS simulations, investigating the population of WD/main-sequence (WD/MS) binaries and double WD binaries using a publicly available binary star evolution code under 37 different assumptions for key physical processes and binary initial conditions. We considered different combinations of the binding energy parameter (λ{sub g}: considering gravitational energy only; λ{sub b}: considering both gravitational energy and internal energy; and λ{sub e}: considering gravitational energy, internal energy, and entropy of the envelope, with values derived from the MESA code), CE efficiency, critical mass ratio, initial primary mass function, and metallicity. We find that a larger number of post-CE WD/MS binaries in tight orbits are formed when the binding energy parameters are set by λ{sub e} than in those cases where other prescriptions are adopted. We also determine the effects of the other input parameters on the orbital periods and mass distributions of post-CE WD/MS binaries. As they contain at least one CO WD, double WD systems that evolved from WD/MS binaries may explode as type Ia supernovae (SNe Ia) via merging. In this work, we also investigate the frequency of two-WD mergers and compare it to the SNe Ia rate. The calculated Galactic SNe Ia rate with λ = λ{sub e} is comparable to the observed SNe Ia rate, ∼8.2 × 10{sup -5} yr{sup -1} – ∼4 × 10{sup -3} yr{sup -1} depending on the other BPS parameters, if a DD system does not require a mass ratio higher than ∼0.8 to become an SN Ia. On the other hand, a violent merger scenario, which requires the combined mass of two CO WDs ≥ 1.6 M{sub ⊙} and a mass ratio >0.8, results in a much lower SNe Ia rate than is observed.

  9. Improving flood forecasting capability of physically based distributed hydrological model by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2015-10-01

    Physically based distributed hydrological models discretize the terrain of the whole catchment into a number of grid cells at fine resolution, assimilate different terrain data and precipitation to different cells, and are regarded to have the potential to improve the simulation and prediction of catchment hydrological processes. In the early stage, physically based distributed hydrological models were assumed to derive model parameters from the terrain properties directly, so there was no need to calibrate model parameters; unfortunately, the uncertainties associated with this parameter derivation are very high, which has impacted their application in flood forecasting, so parameter optimization may also be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting by using the PSO algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting by parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved Particle Swarm Optimization (PSO) algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting; the improvements include adopting the linear decreasing inertia weight strategy to change the inertia weight, and the arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments in southern China with different sizes, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can improve the model capability largely in catchment flood forecasting, thus proving that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm used for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
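
    The sketch below outlines a PSO variant with the two modifications described above: a linearly decreasing inertia weight and acceleration coefficients adjusted along an arccosine-shaped schedule. The exact schedule constants and the toy objective function (standing in for a Liuxihe model flood simulation and its error score) are assumptions for illustration, not the published algorithm settings.

      import numpy as np

      def improved_pso(objective, bounds, n_particles=20, n_iter=30,
                       w_max=0.9, w_min=0.4, c_max=2.5, c_min=0.5, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          dim = len(lo)
          x = rng.uniform(lo, hi, size=(n_particles, dim))
          v = np.zeros_like(x)
          pbest = x.copy(); pbest_f = np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy(); gbest_f = pbest_f.min()
          for t in range(n_iter):
              w = w_max - (w_max - w_min) * t / max(1, n_iter - 1)        # linear decrease
              s = np.arccos(1.0 - 2.0 * t / max(1, n_iter - 1)) / np.pi   # assumed arccos schedule in [0, 1]
              c1 = c_max - (c_max - c_min) * s        # cognitive term shrinks over iterations
              c2 = c_min + (c_max - c_min) * s        # social term grows over iterations
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              if f.min() < gbest_f:
                  gbest, gbest_f = x[np.argmin(f)].copy(), f.min()
          return gbest, gbest_f

      # Toy objective standing in for "run the hydrological model and score the flood hydrograph".
      best, score = improved_pso(lambda p: float(np.sum((p - 0.3) ** 2)),
                                 bounds=[(0.0, 1.0)] * 5)
      print("best parameters:", best.round(3), "objective:", round(score, 6))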

  10. Development of a low cost integrated 15 kW A.C. solar tracking sub-array for grid connected PV power system applications

    NASA Astrophysics Data System (ADS)

    Stern, M.; West, R.; Fourer, G.; Whalen, W.; Van Loo, M.; Duran, G.

    1997-02-01

    Utility Power Group has achieved a significant reduction in the installed cost of grid-connected PV systems. The two-part technical approach focused on 1) the utilization of a large-area factory-assembled PV panel, and 2) the integration and packaging of all sub-array power conversion and control functions within a single factory-produced enclosure. Eight engineering prototype 15 kW ac single-axis solar tracking sub-arrays were designed, fabricated, and installed at the Sacramento Municipal Utility District's Hedge Substation site in 1996 and are being evaluated for performance and reliability. A number of design enhancements will be implemented in 1997 and demonstrated by the field deployment and operation of over twenty advanced sub-array PV power systems.

  11. A method to calibrate channel friction and bathymetry parameters of a Sub-Grid hydraulic model using SAR flood images

    NASA Astrophysics Data System (ADS)

    Wood, M.; Neal, J. C.; Hostache, R.; Corato, G.; Chini, M.; Giustarini, L.; Matgen, P.; Wagener, T.; Bates, P. D.

    2015-12-01

    Synthetic Aperture Radar (SAR) satellites are capable of all-weather day and night observations that can discriminate between land and smooth open water surfaces over large scales. Because of this there has been much interest in the use of SAR satellite data to improve our understanding of water processes, in particular for fluvial flood inundation mechanisms. Past studies prove that integrating SAR-derived data with hydraulic models can improve simulations of flooding. However, while much of this work focusses on improving model channel roughness values or inflows in ungauged catchments, improvement of model bathymetry is often overlooked. The provision of good bathymetric data is critical to the performance of hydraulic models, but there are only a small number of ways to obtain bathymetry information where no direct measurements exist. Spatially distributed river depths are also rarely available. We present a methodology for calibration of model average channel depth and roughness parameters concurrently using SAR images of flood extent and a Sub-Grid model utilising hydraulic geometry concepts. The methodology uses real data from the European Space Agency's archive of ENVISAT[1] Wide Swath Mode images of the River Severn between Worcester and Tewkesbury during flood peaks between 2007 and 2010. Historic ENVISAT WSM images are currently free and easy to access from the archive, but the methodology can be applied with any available SAR data. The approach makes use of the SAR image processing algorithm of Giustarini[2] et al. (2013) to generate binary flood maps. A unique feature of the calibration methodology is to also use parameter 'identifiability' to locate the parameters with higher accuracy from a pre-assigned range (adopting the DYNIA method proposed by Wagener[3] et al., 2003). [1] https://gpod.eo.esa.int/services/ [2] Giustarini. 2013. 'A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X'. IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4. [3] Wagener. 2003. 'Towards reduced uncertainty in conceptual rainfall-runoff modelling: Dynamic identifiability analysis'. Hydrol. Process. 17, 455-476.
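
    The sketch below shows the kind of objective such a calibration evaluates for each candidate (roughness, depth) pair: a simulated binary flood map is compared with the SAR-derived one using a simple fit statistic, here the critical success index F = hits / (hits + misses + false alarms). The toy maps and the choice of metric are assumptions for illustration, not the published calibration chain.

      import numpy as np

      def flood_fit(simulated, observed):
          """Critical success index between two boolean flood-extent maps."""
          sim, obs = simulated.astype(bool), observed.astype(bool)
          hits = np.sum(sim & obs)
          misses = np.sum(~sim & obs)
          false_alarms = np.sum(sim & ~obs)
          return hits / float(hits + misses + false_alarms)

      # Toy example: an "observed" SAR flood map and two candidate model runs.
      observed = np.zeros((100, 100), dtype=bool); observed[30:70, 20:80] = True
      good_run = np.zeros_like(observed); good_run[32:72, 22:82] = True
      poor_run = np.zeros_like(observed); poor_run[10:40, 50:100] = True

      for name, run in (("good parameter set", good_run), ("poor parameter set", poor_run)):
          print(f"{name}: F = {flood_fit(run, observed):.2f}")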

  12. TIME-SERIES SPECTROSCOPY OF THE ECLIPSING BINARY Y CAM WITH A PULSATING COMPONENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Kyeongsoo; Lee, Jae Woo; Kim, Seung-Lee

    We present the physical properties of the semi-detached Algol-type eclipsing binary Y Cam based on high resolution spectra obtained using the Bohyunsan Optical Echelle Spectrograph. This is the first spectroscopic monitoring data obtained for this interesting binary system, which has a δ Sct-type pulsating component. We obtained a total of 59 spectra over 14 nights from 2009 December to 2011 March. Double-lined spectral features from the hot primary and cool secondary components were well identified. We determined the effective temperatures of the two stars to be T{sub eff,1} = 8000 ± 250 K and T{sub eff,2} = 4629 ± 150 K. The projected rotational velocities are v{sub 1}sin i{sub 1} = 51 ± 4 km s{sup −1} and v{sub 2}sin i{sub 2} = 50 ± 10 km s{sup −1}, which are very similar to a synchronous rotation with the orbital motion. Physical parameters of each component were derived by analyzing our radial velocity data together with previous photometric light curves from the literature. The masses and radii are M{sub 1} = 2.08 ± 0.09 M{sub ⊙}, M{sub 2} = 0.48 ± 0.03 M{sub ⊙}, R{sub 1} = 3.14 ± 0.05 R{sub ⊙}, and R{sub 2} = 3.33 ± 0.05 R{sub ⊙}, respectively. A comparison of these parameters with the theoretical evolution tracks showed that the primary component is located between the zero-age main sequence and the terminal-age main sequence, while the low-mass secondary is noticeably evolved. This indicates that the two components have experienced mass exchange with each other and the primary has undergone an evolution process different from that of single δ Sct-type pulsators.

  13. Simplified galaxy formation with mesh-less hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lupi, Alessandro; Volonteri, Marta; Silk, Joseph

    2017-09-01

    Numerical simulations have become a necessary tool to describe the complex interactions among the different processes involved in galaxy formation and evolution, unfeasible via an analytic approach. The last decade has seen a great effort by the scientific community in improving the sub-grid physics modelling and the numerical techniques used to make numerical simulations more predictive. Although the recently publicly available code gizmo has proven to be successful in reproducing galaxy properties when coupled with the model of the MUFASA simulations and the more sophisticated prescriptions of the Feedback In Realistic Environment (FIRE) set-up, it has not yet been tested using delayed cooling supernova feedback, which still represents a reasonable approach for large cosmological simulations, for which detailed sub-grid models are prohibitive. In order to limit the computational cost and to be able to resolve the disc structure in the galaxies, we perform a suite of zoom-in cosmological simulations with rather low resolution centred around a sub-L* galaxy with a halo mass of 3 × 10^11 M⊙ at z = 0, to investigate the ability of this simple model, coupled with the new hydrodynamic method of gizmo, to reproduce observed galaxy scaling relations (stellar to halo mass, stellar and baryonic Tully-Fisher, stellar mass-metallicity and mass-size). We find that the results are in good agreement with the main scaling relations, except for the total stellar mass, larger than that predicted by the abundance matching technique, and the effective sizes of the most massive galaxies in the sample, which are too small.

  14. Theoretical analysis of the correlation observed in fatigue crack growth rate parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chay, S.C.; Liaw, P.K.

    Fatigue crack growth rates have been found to follow the Paris-Erdogan rule, da/dN = C{sub o}({Delta}K){sup n}, for many steels, aluminum, nickel and copper alloys. The fatigue crack growth rate behavior in the Paris regime, thus, can be characterized by the parameters C{sub o} and n, which have been obtained for various materials. When n vs the logarithm of C{sub o} were plotted for various experimental results, a very definite linear relationship has been observed by many investigators, and questions have been raised as to the nature of this correlation. This paper presents a theoretical analysis that explains precisely why such a linear correlation should exist between the two parameters, how strong the relationship should be, and how it can be predicted by analysis. This analysis proves that the source of such a correlation is of mathematical nature rather than physical.
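
    The sketch below illustrates both the Paris-law fit and the mathematical origin of the n versus log C{sub o} correlation discussed above: when different data sets share a similar growth rate near a common reference stress-intensity range ΔK*, the fitted intercept satisfies log C{sub o} ≈ log(da/dN)* - n log ΔK*, a straight line in n. The synthetic data and the pivot values are made up for illustration and are not taken from the paper.

      import numpy as np

      def fit_paris(delta_k, dadn):
          """Least-squares fit of log10(da/dN) = log10(C_o) + n * log10(dK); returns (C_o, n)."""
          n, log_c = np.polyfit(np.log10(delta_k), np.log10(dadn), 1)
          return 10.0 ** log_c, n

      rng = np.random.default_rng(5)
      dk_ref, rate_ref = 20.0, 1e-8           # common pivot (MPa*sqrt(m), m/cycle), assumed
      ns, log_cs = [], []
      for _ in range(50):                     # 50 synthetic "materials"
          n_true = rng.uniform(2.0, 5.0)
          c_true = rate_ref / dk_ref ** n_true
          dk = np.logspace(np.log10(8.0), np.log10(50.0), 25)
          dadn = c_true * dk ** n_true * 10 ** (0.05 * rng.standard_normal(dk.size))  # scatter
          c_fit, n_fit = fit_paris(dk, dadn)
          ns.append(n_fit); log_cs.append(np.log10(c_fit))

      slope, intercept = np.polyfit(ns, log_cs, 1)
      print(f"fitted relation: log10(C_o) ~ {intercept:.2f} {slope:+.2f} * n "
            f"(expected slope -log10(dK*) = {-np.log10(dk_ref):.2f})")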

  15. Border Collision of Three-Phase Voltage-Source Inverter System with Interacting Loads

    NASA Astrophysics Data System (ADS)

    Li, Zhen; Liu, Bin; Li, Yining; Wong, Siu-Chung; Liu, Xiangdong; Huang, Yuehui

    As a commercial interface, three-phase voltage-source inverters (VSIs) are commonly used for energy conversion, exporting the DC power of most distributed generation (DG) sources to the AC utility. Voltage-source converters not only deliver the converted power to the loads but also support the grid voltage at the point of common coupling (PCC), which depends on the condition of the grid-connected loads. This paper explores the border collision and its interacting mechanism among the VSI, resistive interacting loads and the grid, which manifests as the alternating emergence of inverting and rectifying operations, where the normal operation is terminated and a new one is assumed. Their mutual effect on power quality causes circuit stability issues and further deteriorates the voltage regulation capability of the VSI by dramatically raising the grid voltage harmonics. It is found, in a design-oriented view, that border collision operation is induced within an unsuitable parameter space of the AC-grid transmission lines, the resistive loads and the internal resistance of the VSI. The physical phenomenon is also identified by theoretical analysis. With numerical simulations for various circuit conditions, the corresponding bifurcation boundaries are collected, where the stability of the system is lost via border collision.

  16. Trans-falcine and contralateral sub-frontal electrode placement in pediatric epilepsy surgery: technical note.

    PubMed

    Pindrik, Jonathan; Hoang, Nguyen; Tubbs, R Shane; Rocque, Brandon J; Rozzelle, Curtis J

    2017-08-01

    Phase II monitoring with intracranial electroencephalography (ICEEG) occasionally requires bilateral placement of subdural (SD) strips, grids, and/or depth electrodes. While phase I monitoring often demonstrates a preponderance of unilateral findings, individual studies (video EEG, single photon emission computed tomography [SPECT], and positron emission tomography [PET]) can suggest or fail to exclude a contralateral epileptogenic onset zone. This study describes previously unreported techniques of trans-falcine and sub-frontal insertion of contralateral SD grids and depth electrodes for phase II monitoring in pediatric epilepsy surgery patients when concern about bilateral abnormalities has been elicited during phase I monitoring. Pediatric patients with medically refractory epilepsy undergoing stage I surgery for phase II monitoring involving sub-frontal and/or trans-falcine insertion of SD grids and/or depth electrodes at the senior author's institution were retrospectively reviewed. Intra-operative technical details of sub-frontal and trans-falcine approaches were studied, while intra-operative complications or events were noted. Operative techniques included gentle subfrontal retraction and elevation of the olfactory tracts (while preserving the relationship between the olfactory bulb and cribriform plate) to insert SD grids across the midline for coverage of the contralateral orbito-frontal regions. Trans-falcine approaches involved accessing the inter-hemispheric space, bipolar cauterization of the anterior falx cerebri below the superior sagittal sinus, and sharp dissection using a blunt elevator and small blade scalpel. The falcine window allowed contralateral SD strip, grid, and depth electrodes to be inserted for coverage of the contralateral frontal regions. The study cohort included seven patients undergoing sub-frontal and/or trans-falcine insertion of contralateral SD strip, grid, and/or depth electrodes from February 2012 through June 2015. Five patients (71%) experienced no intra-operative events related to contralateral ICEEG electrode insertion. Intra-operative events of frontal territory venous engorgement (1/7, 14%) due to sacrifice of anterior bridging veins draining into the SSS and avulsion of a contralateral bridging vein (1/7, 14%), probably due to prior anterior corpus callosotomy, each occurred in one patient. There were no intra-operative or peri-operative complications in any of the patients studied. Two patients required additional surgery for supplemental SD strip and/or depth electrodes via burr hole craniectomy to enhance phase II monitoring. All patients proceeded to stage II surgery for resection of ipsilateral epileptogenic onset zones without adverse events. Trans-falcine and sub-frontal insertion of contralateral SD strip, grid, and depth electrodes are previously unreported techniques for achieving bilateral frontal coverage in phase II monitoring in pediatric epilepsy surgery. This technique obviates the need for contralateral craniotomy and parenchymal exposure with limited, remediable risks. Larger case series using the method described herein are now necessary.

  17. A numerical solution method for acoustic radiation from axisymmetric bodies

    NASA Technical Reports Server (NTRS)

    Caruthers, John E.; Raviprakash, G. K.

    1995-01-01

    A new and very efficient numerical method for solving equations of the Helmholtz type is specialized for problems having axisymmetric geometry. It is then demonstrated by application to the classical problem of acoustic radiation from a vibrating piston set in a stationary infinite plane. The method utilizes 'Green's Function Discretization' to obtain an accurate resolution of the waves using only 2-3 points per wavelength. Locally valid free space Green's functions, used in the discretization step, are obtained by quadrature. Results are computed for a range of grid spacing/piston radius ratios at a frequency parameter, omega R/c(sub 0), of 2 pi. In this case, the minimum required grid resolution appears to be fixed by the need to resolve a step boundary condition at the piston edge rather than by the length scale imposed by the wavelength of the acoustic radiation. It is also demonstrated that a local near-field radiation boundary procedure allows the domain to be truncated very near the radiating source with little effect on the solution.

  18. Dynamic mechanical properties of a Ti-based metallic glass matrix composite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinshan, E-mail: ljsh@nwpu.edu.cn; Cui, Jing; Bai, Jie

    2015-04-21

    Dynamic mechanical behavior of a Ti{sub 50}Zr{sub 20}Nb{sub 12}Cu{sub 5}Be{sub 13} bulk metallic glass composite was investigated using mechanical spectroscopy in both the temperature and frequency domains. The storage modulus G′ and loss modulus G″ were measured as functions of temperature, and three distinct regions corresponding to different states of the bulk metallic glass composite are characterized. Physical parameters, such as the atomic mobility and the correlation factor χ, are introduced to analyze the dynamic mechanical behavior of the bulk metallic glass composite in the framework of the quasi-point defects (QPD) model. The experimental results are in good agreement with the predictions of the QPD model.

  19. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    NASA Astrophysics Data System (ADS)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

    A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east-central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates that the post-processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
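
    For readers unfamiliar with EMOS, its canonical univariate form fits a single predictive distribution whose parameters are affine in the ensemble statistics; the illustrative Gaussian version is sketched below (wind-speed applications typically use truncated-normal variants, and wind vectors a bivariate extension, so treat this only as the basic template):

        Y \mid x_{1},\dots,x_{m} \;\sim\; \mathcal{N}\!\left(a + b_{1}x_{1} + \cdots + b_{m}x_{m},\; c + d\,s^{2}\right),
        \qquad s^{2} = \frac{1}{m}\sum_{k=1}^{m}\left(x_{k}-\bar{x}\right)^{2},

    where the x_k are the ensemble members and a, b_k, c, d are coefficients trained, e.g., by minimizing the continuous ranked probability score over a training period.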

  20. Calibrating White Dwarf Asteroseismic Fitting Techniques

    NASA Astrophysics Data System (ADS)

    Castanheira, B. G.; Romero, A. D.; Bischoff-Kim, A.

    2017-03-01

    The main goal of looking for intrinsic variability in stars is the unique opportunity to study their internal structure. Once we have extracted independent modes from the data, it appears to be a simple matter of comparing the observed period spectrum with those from theoretical model grids to learn the inner structure of that star. However, asteroseismology is much more complicated than this simple description. We must account not only for observational uncertainties in period determination, but most importantly for the limitations of the model grids, coming from the uncertainties in the constitutive physics, and of the fitting techniques. In this work, we discuss results of numerical experiments where we used different independently calculated model grids (white dwarf cooling models WDEC and fully evolutionary LPCODE-PUL) and fitting techniques to fit synthetic stars. The advantage of using synthetic stars is that we know the details of their interior structure, so we can assess how well our models and fitting techniques are able to recover the interior structure, as well as the stellar parameters.

  1. Atmospheric Boundary Layer Modeling for Combined Meteorology and Air Quality Systems

    EPA Science Inventory

    Atmospheric Eulerian grid models for mesoscale and larger applications require sub-grid models for turbulent vertical exchange processes, particularly within the Planetary Boundary Layer (PBL). In combined meteorology and air quality modeling systems consistent PBL modeling of wi...

  2. Fast ice in the Canadian Arctic: Climatology, Atmospheric Forcing and Relation to Bathymetry

    NASA Astrophysics Data System (ADS)

    Galley, R. J.; Barber, D. G.

    2010-12-01

    Mobile sea ice in the northern hemisphere has experienced significant reductions in both extent and thickness over the last thirty years, and global climate models agree that these decreases will continue. However, the Canadian Arctic Archipelago (CAA) creates a much different icescape than the central Arctic Ocean due to its distinctive topographic, bathymetric and climatological conditions. Of particular interest is the continued viability of landfast sea ice as a means of transportation and a platform for hunting for the Canadian Inuit who reside in the region, as is the possibility of the Northwest Passage becoming a viable shipping lane in the future. Here we determine the climatological average landfast ice conditions in the Canadian Arctic Archipelago over the last 27 years, we investigate variability and trends in these landfast ice conditions, and we attempt to elucidate the physical parameters conducive to landfast sea ice formation in sub-regions of the CAA during different times of the year. We use the Canadian Ice Service digital sea ice charts between 1983 and 2009 on a 2x2 km grid to determine the sea ice concentration-by-type and whether the sea ice in a grid cell was landfast on a weekly, bi-weekly or monthly basis depending on the time of year. North American Regional Reanalysis (NARR) atmospheric data were used in this work, including air temperature, surface level pressure, and wind speed and direction. The bathymetric data employed were from the International Bathymetric Chart of the Arctic Ocean. Results indicate that the CAA sea ice regime is not climatologically analogous to the mobile sea ice of the central Arctic Ocean. The sea ice and the atmospheric and bathymetric properties that control the amount and timing of landfast sea ice within the CAA are regionally variable.

  3. Method of determining the x-ray limit of an ion gauge

    DOEpatents

    Edwards, Jr., David; Lanni, Christopher P.

    1981-01-01

    An ion gauge having a reduced "x-ray limit" and means for measuring that limit. The gauge comprises an ion gauge of the Bayard-Alpert type having a short collector and having means for varying the grid-collector voltage. The "x-ray limit" (i.e. the collector current resulting from x-rays striking the collector) may then be determined from the formula given in the patent, where: I{sub x} = "x-ray limit"; I{sub l} and I{sub h} = the collector current at the lower and higher grid voltage, respectively; and α = the ratio of the collector current due to positive ions at the higher voltage to that at the lower voltage.
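
    The patent's equation is rendered only as a placeholder in this record. A plausible reconstruction, assuming the standard two-voltage modulation argument implied by the stated definitions (each collector current is the sum of a positive-ion current and the x-ray-induced current I{sub x}, with the ion currents at the higher and lower grid voltages in the ratio α), is:

        I_{l} = I_{i} + I_{x}, \qquad I_{h} = \alpha I_{i} + I_{x}
        \quad\Longrightarrow\quad
        I_{x} = \frac{\alpha I_{l} - I_{h}}{\alpha - 1},

    where I{sub i} (a symbol introduced here, not in the record) denotes the positive-ion contribution to the collector current at the lower grid voltage.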

  4. OMEGA System Performance Assessment and Coverage Evaluation (PACE) Workstation Design and Implementation. Volume 2

    DTIC Science & Technology

    1991-02-15

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey A Appel

    BTeV is a new Fermilab beauty and charm experiment designed to operate in the CZero region of the Tevatron collider. Critical to the success of BTeV is its pixel detector. The unique features of this pixel detector include its proximity to the beam, its operation with a beam crossing time of 132 ns, and the need for the detector information to be read out quickly enough to be used for the lowest level trigger. This talk presents an overview of the pixel detector design, giving the motivations for the technical choices made. The status of the current R&D on detector components is also reviewed. Additional Pixel 2002 talks on the BTeV pixel detector are given by Dave Christian[1], Mayling Wong[2], and Sergio Zimmermann[3]. Table 1 gives a selection of pixel detector parameters for the ALICE, ATLAS, BTeV, and CMS experiments. Comparing the progression of this table, which I have been updating for the last several years, has shown a convergence of specifications. Nevertheless, significant differences endure. The BTeV data-driven readout, horizontal and vertical position resolution better than 9 {micro}m with the {+-} 300 mr forward acceptance, and positioning in vacuum and as close as 6 mm from the circulating beams remain unique. These features are driven by the physics goals of the BTeV experiment. Table 2 demonstrates that the vertex trigger performance made possible by these features is requisite for a very large fraction of the B meson decay physics which is so central to the motivation for BTeV. For most of the physics quantities of interest listed in the table, the vertex trigger is essential. The performance of the BTeV pixel detector may be summarized by looking at particular physics examples; e.g., the B{sub s} meson decay B{sub s} {yields} D{sub s}{sup -} K{sup +}. For that decay, studies using GEANT3 simulations provide quantitative measures of performance. For example, the separation between the B{sub s} decay point and the primary proton-antiproton interaction can be measured with an rms uncertainty of 138 {micro}m. This, with the uncertainty in the decay vertex position, leads to an uncertainty of the B{sub s} proper decay time of 46 fs. Even if the parameter x{sub s} equals 25 (where the current lower limit on x{sub s} is about 15), the corresponding relevant proper time is 400 fs. So, the detector resolution is more than adequate to make an excellent measurement of this parameter.

  6. Circumbinary discs: Numerical and physical behaviour

    NASA Astrophysics Data System (ADS)

    Thun, Daniel; Kley, Wilhelm; Picogna, Giovanni

    2017-08-01

    Aims: Discs around a central binary system play an important role in star and planet formation and in the evolution of galactic discs. These circumbinary discs are strongly disturbed by the time-varying potential of the binary system and display a complex dynamical evolution that is not well understood. Our goal is to investigate the impact of disc and binary parameters on the dynamical aspects of the disc. Methods: We study the evolution of circumbinary discs under the gravitational influence of the binary using two-dimensional hydrodynamical simulations. To distinguish between physical and numerical effects we apply three hydrodynamical codes. First we analyse in detail numerical issues concerning the conditions at the boundaries and grid resolution. We then perform a series of simulations with different binary parameters (eccentricity, mass ratio) and disc parameters (viscosity, aspect ratio) starting from a reference model with Kepler-16 parameters. Results: Concerning the numerical aspects we find that the inner grid radius must be comparable to the binary semi-major axis, with free outflow conditions applied such that mass can flow onto the central binary. A closed inner boundary leads to unstable evolutions. We find that the inner disc turns eccentric and precesses for all investigated physical parameters. The precession rate is slow with periods (Tprec) starting at around 500 binary orbits (Tbin) for high viscosity and a high aspect ratio H/R where the inner hole is smaller and more circular. Reducing α and H/R increases the gap size and Tprec reaches 2500 Tbin. For varying binary mass ratios qbin the gap size remains constant, whereas Tprec decreases with increasing qbin. For varying binary eccentricities ebin we find two separate branches in the gap size and eccentricity diagram. The bifurcation occurs at around ecrit ≈ 0.18 where the gap is smallest with the shortest Tprec. For ebin lower and higher than ecrit, the gap size and Tprec increase. Circular binaries create the most eccentric discs. Movies associated with Figs. 1 and 8 are available at http://www.aanda.org

  7. On representations of U{sub q}osp(1{vert_bar}2) when q is a root of unity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, W.; Suzuki, T.

    1997-06-01

    The infinite dimensional highest weight representations of U{sub q}osp(1{vert_bar}2) for the deformation parameter q being a root of unity are investigated. As in the cases of q-deformed nongraded Lie algebras, we find that every irreducible representation is isomorphic to the tensor product of a highest weight representation of sl{sub 2}(R) and a finite dimensional one of U{sub q}osp(1{vert_bar}2). The structure is investigated in detail. © 1997 American Institute of Physics.

  8. Accurate Grid-based Clustering Algorithm with Diagonal Grid Searching and Merging

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Ye, Chengcheng; Zhu, Erzhou

    2017-09-01

    Due to the advent of big data, data mining technology has attracted more and more attention. As an important data analysis method, grid clustering is fast but has relatively low accuracy. This paper presents an improved clustering algorithm combining grid and density parameters. The algorithm first divides the data space into valid and invalid meshes through the grid parameters. Secondly, starting from the first cell of the diagonal of the grid, the algorithm merges the valid meshes in the direction of "horizontal right, vertical down". Furthermore, through boundary-grid processing, invalid grids are searched and merged when the adjacent left, above, and diagonal-direction grids are all valid. By doing this, the accuracy of clustering is improved. The experimental results show that the proposed algorithm is accurate and relatively fast when compared with some popularly used algorithms.
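
    As an illustration of the general grid-plus-density idea summarized above (a minimal generic sketch only; the paper's specific cell-validity rule, diagonal starting point and "horizontal right, vertical down" merge order are not reproduced, and all names here are hypothetical):

        # Minimal grid/density clustering sketch: bin points into cells, keep "valid"
        # (sufficiently populated) cells, then merge face-adjacent valid cells into clusters via BFS.
        from collections import defaultdict, deque
        import numpy as np

        def grid_cluster(points, cell_size=1.0, min_pts=5):
            points = np.asarray(points, dtype=float)
            cells = defaultdict(list)                      # (ix, iy) -> indices of points in that cell
            for idx, (x, y) in enumerate(points):
                cells[(int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))].append(idx)
            valid = {c for c, members in cells.items() if len(members) >= min_pts}
            labels = np.full(len(points), -1, dtype=int)   # -1 = noise / point in an invalid cell
            cluster_id = 0
            seen = set()
            for start in sorted(valid):                    # deterministic sweep over valid cells
                if start in seen:
                    continue
                queue = deque([start])
                seen.add(start)
                while queue:                               # BFS over 4-connected valid neighbours
                    cx, cy = queue.popleft()
                    for idx in cells[(cx, cy)]:
                        labels[idx] = cluster_id
                    for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                        if nb in valid and nb not in seen:
                            seen.add(nb)
                            queue.append(nb)
                cluster_id += 1
            return labels

        # Example: two well-separated blobs plus sparse noise.
        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal(0.0, 0.5, (100, 2)),
                          rng.normal(8.0, 0.5, (100, 2)),
                          rng.uniform(-5.0, 13.0, (20, 2))])
        print(np.unique(grid_cluster(data, cell_size=1.0, min_pts=5)))   # expect labels {-1, 0, 1}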

  9. EPIC-Simulated and MODIS-Derived Leaf Area Index (LAI) ...

    EPA Pesticide Factsheets

    Leaf Area Index (LAI) is an important parameter in assessing vegetation structure for characterizing forest canopies over large areas at broad spatial scales using satellite remote sensing data. However, satellite-derived LAI products can be limited by obstructed atmospheric conditions yielding sub-optimal values, or complete non-returns. The United States Environmental Protection Agency’s Exposure Methods and Measurements and Computational Exposure Divisions are investigating the viability of supplemental modelled LAI inputs into satellite-derived data streams to support various regional and local scale air quality models for retrospective and future climate assessments. In this present study, one-year (2002) of plot level stand characteristics at four study sites located in Virginia and North Carolina are used to calibrate species-specific plant parameters in a semi-empirical biogeochemical model. The Environmental Policy Integrated Climate (EPIC) model was designed primarily for managed agricultural field crop ecosystems, but also includes managed woody species that span both xeric and mesic sites (e.g., mesquite, pine, oak, etc.). LAI was simulated using EPIC at a 4 km2 and 12 km2 grid coincident with the regional Community Multiscale Air Quality Model (CMAQ) grid. LAI comparisons were made between model-simulated and MODIS-derived LAI. Field/satellite-upscaled LAI was also compared to the corresponding MODIS LAI value. Preliminary results show field/satel

  10. Impact of the North Atlantic circulation on the climate change patterns of North Sea.

    NASA Astrophysics Data System (ADS)

    Narayan, Nikesh; Mathis, Mortiz; Klein, Birgit; Klein, Holger; Mikolajewicz, Uwe

    2017-04-01

    The physical properties of the North Sea are characterized by the exchange of water masses with the North Atlantic at the northern boundary and the Baltic Sea to the east. The combined effects of localized forcing, tidal mixing and advection of water masses make the North Sea a challenging study area. Previous investigations indicated a possibility that the variability of the North Atlantic circulation and the strength of the sub-polar gyre (SPG) might influence the physical properties of the North Sea. The assessment of the complex interaction between the North Atlantic and the North Sea in a climate change scenario requires regionally coupled global RCP simulations with enhanced resolution of the North Sea and the North Atlantic. In this study we analyzed results from the regionally coupled ocean-atmosphere-biogeochemistry model system (MPIOM-REMO-HAMOCC) with a hydrodynamic (HD) model. The ocean model has a zoomed grid which provides the highest resolution over the West European Shelf by shifting its poles over Chicago and Central Europe. An index for the intensity of the SPG was estimated by averaging the barotropic stream function (ψ) over the North Atlantic. Various threshold values for ψ were tested to define the strength of the SPG. These SPG indices have been correlated with North Sea hydrographic parameters at various levels to identify areas affected by SPG variability. The influence of the Atlantic's eastern boundary current, which contributes more saline waters to the North West European shelf area, is also investigated.

  11. Renormalization group analysis of turbulence

    NASA Technical Reports Server (NTRS)

    Smith, Leslie M.

    1989-01-01

    The objective is to understand and extend a recent theory of turbulence based on dynamic renormalization group (RNG) techniques. The application of RNG methods to hydrodynamic turbulence was explored most extensively by Yakhot and Orszag (1986). An eddy viscosity was calculated which was consistent with the Kolmogorov inertial range by systematic elimination of the small scales in the flow. Further, assumed smallness of the nonlinear terms in the redefined equations for the large scales results in predictions for important flow constants such as the Kolmogorov constant. It is emphasized that no adjustable parameters are needed. The parameterization of the small scales in a self-consistent manner has important implications for sub-grid modeling.

  12. Line-source excitation of realistic conformal metasurface cloaks

    NASA Astrophysics Data System (ADS)

    Padooru, Yashwanth R.; Yakovlev, Alexander B.; Chen, Pai-Yen; Alù, Andrea

    2012-11-01

    Following our recently introduced analytical tools to model and design conformal mantle cloaks based on metasurfaces [Padooru et al., J. Appl. Phys. 112, 034907 (2012)], we investigate their performance and physical properties when excited by an electric line source placed in their close proximity. We consider metasurfaces formed by 2-D arrays of slotted (meshes and Jerusalem cross slots) and printed (patches and Jerusalem crosses) sub-wavelength elements. The electromagnetic scattering analysis is carried out using a rigorous analytical model, which utilizes the two-sided impedance boundary conditions at the interface of the sub-wavelength elements. It is shown that the homogenized grid-impedance expressions, originally derived for planar arrays of sub-wavelength elements and plane-wave excitation, may be successfully used to model and tailor the surface reactance of cylindrical conformal mantle cloaks illuminated by near-field sources. Our closed-form analytical results are in good agreement with full-wave numerical simulations, up to sub-wavelength distances from the metasurface, confirming that mantle cloaks may be very effective to suppress the scattering of moderately sized objects, independent of the type of excitation and point of observation. We also discuss the dual functionality of these metasurfaces to boost radiation efficiency and directivity from confined near-field sources.

  13. COMPUTATIONAL MODELING OF CATHODIC LIMITATIONS ON LOCALIZED CORROSION OF WETTED SS 316L, AT ROOM TEMPERATURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    F. Cui; F.J. Presuel-Moreno; R.G. Kelly

    2005-10-13

    The ability of a SS316L surface wetted with a thin electrolyte layer to serve as an effective cathode for an active localized corrosion site was studied computationally. The dependence of the total net cathodic current, I{sub net}, supplied at the repassivation potential E{sub rp} (of the anodic crevice) on relevant physical parameters including water layer thickness (WL), chloride concentration ([Cl{sup -}]) and length of cathode (Lc) was investigated using a three-level, full factorial design. The effects of kinetic parameters including the exchange current density (i{sub o,c}) and Tafel slope ({beta}{sub c}) of oxygen reduction, the anodic passive current density (i{sub p}) (on the cathodic surface), and E{sub rp} were studied as well using three-level full factorial designs of [Cl{sup -}] and Lc with a fixed WL of 25 {micro}m. The study found that all three parameters WL, [Cl{sup -}] and Lc, as well as the interactions Lc x WL and Lc x [Cl{sup -}], had a significant impact on I{sub net}. A five-factor regression equation was obtained which fits the computation results reasonably well, but demonstrated that the interactions are more complicated than can be explained with a simple linear model. Significant effects on I{sub net} were found upon varying either i{sub o,c}, {beta}{sub c}, or E{sub rp}, whereas i{sub p} in the studied range was found to have little impact. It was observed that I{sub net} asymptotically approached maximum values (I{sub max}) when Lc increased to critical minimum values. I{sub max} can be used to determine the stability of coupled localized corrosion, and the critical Lc provides important information for experimental design and corrosion protection.

  14. Evaluation of decadal hindcasts using satellite simulators

    NASA Astrophysics Data System (ADS)

    Spangehl, Thomas; Mazurkiewicz, Alex; Schröder, Marc

    2013-04-01

    The evaluation of dynamical ensemble forecast systems requires a solid validation of basic processes such as the global atmospheric water and energy cycle. The value of any validation approach strongly depends on the quality of the observational data records used. Current approaches utilize in situ measurements, remote sensing data and reanalyses. Related data records are subject to a number of uncertainties and limitations such as representativeness, spatial and temporal resolution and homogeneity. However, recently several climate data records with known and sufficient quality became available. In particular, the satellite data records offer the opportunity to obtain reference information on global scales including the oceans. Here we consider the simulation of satellite radiances from the climate model output enabling an evaluation in the instrument's parameter space to avoid uncertainties stemming from the application of retrieval schemes in order to minimise uncertainties on the reference side. Utilizing the CFMIP Observation Simulator Package (COSP) we develop satellite simulators for the Tropical Rainfall Measuring Mission precipitation radar (TRMM PR) and the Infrared Atmospheric Sounding Interferometer (IASI). The simulators are applied within the MiKlip project funded by BMBF (German Federal Ministry of Education and Research) to evaluate decadal climate predictions performed with the MPI-ESM developed at the Max Planck Institute for Meteorology. While TRMM PR enables the evaluation of the vertical structure of precipitation over tropical and sub-tropical areas, IASI is used to support the global evaluation of clouds and radiation. In a first step the reliability of the developed simulators needs to be explored. The simulation of radiances in the instrument space requires the generation of sub-grid scale variability from the climate model output. Furthermore, assumptions are made to simulate radiances such as, for example, the distribution of different hydrometeor types. Therefore, testing is performed to determine the extent to which the quality of the simulator results depends on the applied methods used to generate sub-grid variability (e.g. sub-grid resolution). Moreover, the sensitivity of results to the choice of different distributions of hydrometeors is explored. The model evaluation is carried out in a statistical manner using histograms of radar reflectivities (TRMM PR) and brightness temperatures (IASI). Finally, methods to deduce data suitable for probabilistic evaluation of decadal hindcasts such as simple indices are discussed.

  15. Optimizing solar-cell grid geometry

    NASA Technical Reports Server (NTRS)

    Crossley, A. P.

    1969-01-01

    Trade-off analysis and mathematical expressions calculate optimum grid geometry in terms of various cell parameters. Determination of the grid geometry provides proper balance between grid resistance and cell output to optimize the energy conversion process.
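
    A toy illustration of the trade-off mentioned above (a hypothetical single-variable model with illustrative numbers, not the expressions from the NASA report): for parallel grid fingers of fixed width, narrower spacing reduces the lateral sheet-resistance loss but increases the shading loss, so the total fractional power loss has an interior minimum.

        # Toy grid-geometry trade-off: choose finger spacing s to minimise
        # (shading loss) + (simplified emitter sheet-resistance loss).  All numbers illustrative.
        import numpy as np

        def fractional_loss(s, width=150e-6, rho_sheet=100.0, j_mp=300.0, v_mp=0.55):
            """s: finger spacing [m]; width: finger width [m]; rho_sheet: sheet resistance [ohm/sq];
            j_mp: current density at maximum power [A/m^2]; v_mp: voltage at maximum power [V]."""
            shading = width / s                                   # fraction of the cell covered by metal
            resistive = rho_sheet * s**2 * j_mp / (12.0 * v_mp)   # assumed lateral-resistance loss model
            return shading + resistive

        spacings = np.linspace(0.5e-3, 10e-3, 500)
        losses = fractional_loss(spacings)
        best = spacings[np.argmin(losses)]
        print(f"optimum spacing ~ {best * 1e3:.2f} mm, total fractional loss ~ {losses.min() * 100:.2f} %")

    A complete trade-off analysis of the kind described in the record balances several such terms (finger width, bus bar layout, contact resistance) at once rather than a single spacing variable.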

  16. Analysis of turbine-grid interaction of grid-connected wind turbine using HHT

    NASA Astrophysics Data System (ADS)

    Chen, A.; Wu, W.; Miao, J.; Xie, D.

    2018-05-01

    This paper applies a denoising and feature-extraction method based on the Hilbert-Huang transform (HHT) to the output power of a grid-connected wind turbine in order to analyse turbine-grid interaction. First, Empirical Mode Decomposition (EMD) and the Hilbert transform (HT) are introduced in detail. Then, after decomposing the output power of the grid-connected wind turbine into a series of Intrinsic Mode Functions (IMFs), the energy ratio and power volatility are calculated to detect inessential components. In combination with the vibration function of turbine-grid interaction, the instantaneous amplitude and phase of each IMF are fitted to extract characteristic parameters of the different interactions. Finally, utilizing measured data from actual parallel-operated wind turbines in China, this work accurately obtains the characteristic parameters of the turbine-grid interaction of the grid-connected wind turbines.
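
    For readers unfamiliar with the HT step, the instantaneous amplitude and phase of a single IMF can be obtained from its analytic signal; the minimal sketch below assumes an IMF has already been extracted by EMD (the EMD sifting itself is not shown) and uses a synthetic stand-in signal.

        # Instantaneous amplitude/phase/frequency of one IMF via the analytic signal.
        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0                                   # sampling rate [Hz]
        t = np.arange(0.0, 2.0, 1.0 / fs)
        imf = (1.0 + 0.3 * np.sin(2 * np.pi * 1.5 * t)) * np.cos(2 * np.pi * 50 * t)  # stand-in IMF

        analytic = hilbert(imf)                       # x(t) + i * H[x](t)
        amplitude = np.abs(analytic)                  # instantaneous amplitude envelope
        phase = np.unwrap(np.angle(analytic))         # instantaneous phase [rad]
        inst_freq = np.gradient(phase, t) / (2 * np.pi)   # instantaneous frequency [Hz]

        print(f"mean instantaneous frequency ~ {inst_freq.mean():.1f} Hz")

    Fitting parametric models to these amplitude and phase series is then what yields the characteristic parameters of each interaction component discussed in the abstract.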

  17. Fully Automated Single-Zone Elliptic Grid Generation for Mars Science Laboratory (MSL) Aeroshell and Canopy Geometries

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2008-01-01

    A procedure for generating smooth uniformly clustered single-zone grids using enhanced elliptic grid generation has been demonstrated here for the Mars Science Laboratory (MSL) geometries such as aeroshell and canopy. The procedure obviates the need for generating multizone grids for such geometries, as reported in the literature. This has been possible because the enhanced elliptic grid generator automatically generates clustered grids without manual prescription of decay parameters needed with the conventional approach. In fact, these decay parameters are calculated as decay functions as part of the solution, and they are not constant over a given boundary. Since these decay functions vary over a given boundary, orthogonal grids near any arbitrary boundary can be clustered automatically without having to break up the boundaries and the corresponding interior domains into various zones for grid generation.
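
    For context, conventional elliptic (Thompson-type) grid generation solves a Poisson system for the computational coordinates, with source terms controlling clustering near boundaries; schematically (a generic statement of the standard method, not the specific enhanced formulation of the report, whose decay functions are computed as part of the solution rather than prescribed):

        \xi_{xx} + \xi_{yy} = P(\xi,\eta), \qquad \eta_{xx} + \eta_{yy} = Q(\xi,\eta),

    where P and Q are the control (source) functions that, in the conventional approach, carry the manually prescribed decay parameters.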

  18. Gravitational wave production from preheating: parameter dependence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Daniel G.; Torrentí, Francisco, E-mail: daniel.figueroa@cern.ch, E-mail: f.torrenti@csic.es

    Parametric resonance is among the most efficient phenomena generating gravitational waves (GWs) in the early Universe. The dynamics of parametric resonance, and hence of the GWs, depend exclusively on the resonance parameter q. The latter is determined by the properties of each scenario: the initial amplitude and potential curvature of the oscillating field, and its coupling to other species. Previous works have only studied the GW production for fixed value(s) of q. We present an analytical derivation of the GW amplitude dependence on q, valid for any scenario, which we confront against numerical results. By running lattice simulations in an expanding grid, we study, for a wide range of q values, the production of GWs in post-inflationary preheating scenarios driven by parametric resonance. We present simple fits for the final amplitude and position of the local maxima in the GW spectrum. Our parametrization allows us to predict the location and amplitude of the GW background today, for an arbitrary q. The GW signal can be rather large, as h{sup 2}Ω{sub GW}(f{sub p}) ≲ 10{sup −11}, but it is always peaked at high frequencies f{sub p} ≳ 10{sup 7} Hz. We also discuss the case of spectator-field scenarios, where the oscillatory field can be e.g. a curvaton, or the Standard Model Higgs.

  19. Physical results from 2+1 flavor domain wall QCD and SU(2) chiral perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allton, C.; Antonio, D. J.; Boyle, P. A.

    2008-12-01

    We have simulated QCD using 2+1 flavors of domain wall quarks and the Iwasaki gauge action on a (2.74 fm){sup 3} volume with an inverse lattice scale of a{sup -1}=1.729(28) GeV. The up and down (light) quarks are degenerate in our calculations and we have used four values for the ratio of light quark masses to the strange (heavy) quark mass in our simulations: 0.217, 0.350, 0.617, and 0.884. We have measured pseudoscalar meson masses and decay constants, the kaon bag parameter B{sub K}, and vector meson couplings. We have used SU(2) chiral perturbation theory, which assumes only the up and down quark masses are small, and SU(3) chiral perturbation theory to extrapolate to the physical values for the light quark masses. While next-to-leading order formulas from both approaches fit our data for light quarks, we find the higher-order corrections for SU(3) very large, making such fits unreliable. We also find that SU(3) does not fit our data when the quark masses are near the physical strange quark mass. Thus, we rely on SU(2) chiral perturbation theory for accurate results. We use the masses of the {omega} baryon, and the {pi} and K mesons to set the lattice scale and determine the quark masses. We then find f{sub {pi}}=124.1(3.6){sub stat}(6.9){sub syst} MeV, f{sub K}=149.6(3.6){sub stat}(6.3){sub syst} MeV, and f{sub K}/f{sub {pi}}=1.205(0.018){sub stat}(0.062){sub syst}. Using nonperturbative renormalization to relate lattice regularized quark masses to regularization independent momentum scheme masses, and perturbation theory to relate these to MS, we find m{sub ud}{sup MS}(2 GeV)=3.72(0.16){sub stat}(0.33){sub ren}(0.18){sub syst} MeV, m{sub s}{sup MS}(2 GeV)=107.3(4.4){sub stat}(9.7){sub ren}(4.9){sub syst} MeV, and m-tilde{sub ud} : m-tilde{sub s} = 1 : 28.8(0.4){sub stat}(1.6){sub syst}. For the kaon bag parameter, we find B{sub K}{sup MS}(2 GeV)=0.524(0.010){sub stat}(0.013){sub ren}(0.025){sub syst}. Finally, for the ratios of the couplings of the vector mesons to the vector and tensor currents (f{sub V} and f{sub V}{sup T}, respectively) in the MS scheme at 2 GeV we obtain f{sub {rho}}{sup T}/f{sub {rho}}=0.687(27), f{sub K*}{sup T}/f{sub K*}=0.712(12), and f{sub {phi}}{sup T}/f{sub {phi}}=0.750(8).

  20. MID Plot: a new lithology technique. [Matrix identification plot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clavier, C.; Rust, D.H.

    1976-01-01

    Lithology interpretation by the Litho-Porosity (M-N) method has been used for years, but is evidently too cumbersome and ambiguous for widespread acceptance as a field technique. To set aside these objections, another method has been devised. Instead of the log-derived parameters M and N, the MID Plot uses quasi-physical quantities, (ρ{sub ma}){sub a} and ({Delta}t{sub ma}){sub a}, as its porosity-independent variables. These parameters, taken from suitably scaled Neutron-Density and Sonic-Neutron crossplots, define a unique matrix mineral or mixture for each point on the logs. The matrix points on the MID Plot thus remain constant in spite of changes in mud filtrate, porosity, or neutron tool types (all of which significantly affect the M-N Plot). This new development is expected to bring welcome relief in areas where lithology identification is a routine part of log analysis.

  1. X-ray relative intensities at incident photon energies across the L{sub i} (i=1–3) absorption edges of elements with 35≤Z≤92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puri, Sanjiv, E-mail: sanjivpurichd@yahoo.com

    The intensity ratios, I{sub Lk}/I{sub Lα1} (k=l,η,α{sub 2},β{sub 1},β{sub 2,15},β{sub 3},β{sub 4},β{sub 5,7},β{sub 6},β{sub 9,10},γ{sub 1,5},γ{sub 6,8},γ{sub 2,3},γ{sub 4}) and I{sub Lj}/I{sub Lα} (j=β,γ), have been evaluated at incident photon energies across the L{sub i} (i=1–3) absorption edge energies of all the elements with 35≤Z≤92. Use is made of what are currently considered to be more reliable theoretical data sets of different physical parameters, namely, the L{sub i} (i=1–3) sub-shell photoionization cross sections based on the relativistic Hartree–Fock–Slater (RHFS) model, the X-ray emission rates based on the Dirac–Fock model, and the fluorescence and Coster–Kronig yields based on the Dirac–Hartree–Slater model. In addition, the Lα{sub 1} X-ray production cross sections for different elements at various incident photon energies have been tabulated so as to facilitate the evaluation of production cross sections for different resolved L X-ray components from the tabulated intensity ratios. Further, to assist evaluation of the prominent (L{sub i}−S{sub j}) (S{sub j}=M{sub j}, N{sub j} and i=1–3, j=1–7) resonant Raman scattered (RRS) peak energies for an element at a given incident photon energy (below the L{sub i} sub-shell absorption edge), the neutral-atom electron binding energies based on the relaxed orbital RHFS calculations are also listed so as to enable identification of the RRS peaks, which can overlap with the fluorescent X-ray lines. -- Highlights: •The L X-ray relative intensities and Lα{sub 1} XRP cross sections are evaluated using physical parameters based on the IPA models. •Comparison of the intensity ratios evaluated using the DHS and DF models based photoionization cross sections is presented. •Importance of many body effects including electron exchange effects is highlighted.

  2. Determination of ocean tides from the first year of TOPEX/POSEIDON altimeter measurements

    NASA Technical Reports Server (NTRS)

    Ma, X. C.; Shum, C. K.; Eanes, R. J.; Tapley, B. D.

    1994-01-01

    An improved geocentric global ocean tide model has been determined using 1 year of TOPEX/POSEIDON altimeter measurements to provide corrections to the Cartwright and Ray (1991) model (CR91). The corrections were determined on a 3 deg x 3 deg grid using both the harmonic analysis method and the response method. The two approaches produce similar solutions. The effect on the tide solution of simultaneously adjusting radial orbit correction parameters using altimeter measurements was examined. Four semidiurnal (N(sub 2), M(sub 2), S(sub 2) and K(sub 2)), four diurnal (Q(sub 1), O(sub 1), P(sub 1), and K(sub 1)), and three long-period (S(sub sa), M(sub m), and M(sub f)) constituents, along with the variations at the annual frequency, were included in the harmonic analysis solution. The observed annual variations represent the first global measurement describing accurate seasonal changes of the ocean during an El Nino year. The corrections to the M(sub 2) constituent have a root mean square (RMS) of 3.6 cm and display a clear banding pattern with regional highs and lows reaching 8 cm. The improved tide model reduces the weighted altimeter crossover residual from 9.8 cm RMS, when the CR91 tide model is used, to 8.2 cm RMS. Comparison of the improved model to pelagic tidal constants determined from 80 tide gauges gives RMS differences of 2.7 cm for M(sub 2) and 1.7 cm for K(sub 1). Comparable values when the CR91 model is used are 3.9 cm and 2.0 cm, respectively. Examination of TOPEX/POSEIDON sea level anomaly variations using the new tide model further confirms that the tide model has been improved.
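
    The harmonic-analysis step referred to above amounts, in each grid cell, to a linear least-squares fit of sinusoids at known constituent frequencies to the sea-surface-height time series. A minimal single-cell sketch for a few constituents (illustrative subset and synthetic data, not the TOPEX/POSEIDON processing chain):

        # Least-squares harmonic analysis of a sea-level time series at fixed tidal frequencies.
        import numpy as np

        periods = {"M2": 12.4206, "S2": 12.0000, "K1": 23.9345, "O1": 25.8193}   # hours

        def harmonic_fit(t_hours, h):
            """Fit h(t) = mean + sum_k [A_k cos(w_k t) + B_k sin(w_k t)]; return (amplitude, phase) per constituent."""
            cols = [np.ones_like(t_hours)]
            for p in periods.values():
                w = 2.0 * np.pi / p
                cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
            G = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(G, h, rcond=None)
            out = {}
            for i, name in enumerate(periods):
                a, b = coef[1 + 2 * i], coef[2 + 2 * i]
                out[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)))
            return out

        # Synthetic check: one year of hourly data with a known 0.36 m M2 signal plus noise.
        t = np.arange(0.0, 365.0 * 24.0, 1.0)
        h = 0.36 * np.cos(2.0 * np.pi * t / periods["M2"] - 0.5)
        h += 0.01 * np.random.default_rng(1).normal(size=t.size)
        print(harmonic_fit(t, h)["M2"])   # amplitude should come out near 0.36

    The response-method alternative mentioned in the abstract instead fits admittance weights relating the observed tide to a reference tidal potential, but leads to comparable constituent estimates.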

  3. Grid Block Design Based on Monte Carlo Simulated Dosimetry, the Linear Quadratic and Hug–Kellerer Radiobiological Models

    PubMed Central

    Gholami, Somayeh; Nedaie, Hassan Ali; Longo, Francesco; Ay, Mohammad Reza; Dini, Sharifeh A.; Meigooni, Ali S.

    2017-01-01

    Purpose: The clinical efficacy of Grid therapy has been examined by several investigators. In this project, the hole diameter and hole spacing in Grid blocks were examined to determine the optimum parameters that give a therapeutic advantage. Methods: The evaluations were performed using Monte Carlo (MC) simulation and commonly used radiobiological models. The Geant4 MC code was used to simulate the dose distributions for 25 different Grid blocks with different hole diameters and center-to-center spacing. The therapeutic parameters of these blocks, namely, the therapeutic ratio (TR) and geometrical sparing factor (GSF) were calculated using two different radiobiological models, including the linear quadratic and Hug–Kellerer models. In addition, the ratio of the open to blocked area (ROTBA) is also used as a geometrical parameter for each block design. Comparisons of the TR, GSF, and ROTBA for all of the blocks were used to derive the parameters for an optimum Grid block with the maximum TR, minimum GSF, and optimal ROTBA. A sample of the optimum Grid block was fabricated at our institution. Dosimetric characteristics of this Grid block were measured using an ionization chamber in water phantom, Gafchromic film, and thermoluminescent dosimeters in Solid Water™ phantom materials. Results: The results of these investigations indicated that Grid blocks with hole diameters between 1.00 and 1.25 cm and spacing of 1.7 or 1.8 cm have optimal therapeutic parameters (TR > 1.3 and GSF~0.90). The measured dosimetric characteristics of the optimum Grid blocks including dose profiles, percentage depth dose, dose output factor (cGy/MU), and valley-to-peak ratio were in good agreement (±5%) with the simulated data. Conclusion: In summary, using MC-based dosimetry, two radiobiological models, and previously published clinical data, we have introduced a method to design a Grid block with optimum therapeutic response. The simulated data were reproduced by experimental data. PMID:29296035
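
    For reference, the linear-quadratic model used as one of the two radiobiological inputs expresses the surviving fraction after a dose D as

        S(D) = \exp\!\left[-\left(\alpha D + \beta D^{2}\right)\right],

    and the therapeutic indices quoted above (TR, GSF) are built from such survival fractions evaluated in the open (peak) and blocked (valley) regions of the Grid field; the exact TR and GSF definitions follow the cited literature and are not reproduced here.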

  4. Application of the advanced engineering environment for optimization energy consumption in designed vehicles

    NASA Astrophysics Data System (ADS)

    Monica, Z.; Sękala, A.; Gwiazda, A.; Banaś, W.

    2016-08-01

    Nowadays a key issue is to reduce the energy consumption of road vehicles, and particular solutions employ different energy-optimization strategies. The most popular, but least sophisticated, is so-called eco-driving, which emphasizes particular driver behaviour. In a more sophisticated variant, the driver is supported by a control system that measures driving parameters and suggests proper operation. A second strategy applies engineering solutions that aid optimization of energy consumption: such systems take into consideration different parameters measured in real time and then take appropriate action according to procedures loaded into the vehicle's control computer. The third strategy is based on optimization of the designed vehicle itself, focusing on the main sub-systems of the technical means; here the optimal level of energy consumption is obtained through the synergetic result of individually optimizing particular structural sub-systems of the vehicle. Three main sub-systems can be distinguished: the structural, the drive and the control sub-system. For the structural sub-system, optimization of energy consumption involves optimizing the weight parameter and the aerodynamic parameter, the result being an optimized vehicle body. For the drive sub-system, optimization addresses fuel or power consumption using the previously elaborated physical models. Finally, optimization of the control sub-system consists of determining optimal control parameters.

  5. Effects of anisotropic electron-ion interactions in atomic photoelectron angular distributions

    NASA Technical Reports Server (NTRS)

    Dill, D.; Starace, A. F.; Manson, S. T.

    1974-01-01

    The photoelectron asymmetry parameter beta in LS-coupling is obtained as an expansion into contributions from alternative angular momentum transfers j sub t. The physical significance of this expansion of beta is shown to be that: (1) the electric dipole interaction transfers to the atom a characteristic single angular momentum j sub t = l sub o, where l sub o is the photoelectron's initial orbital momentum; and (2) angular momentum transfers other than this value indicate the presence of anisotropic interaction of the outgoing photoelectron with the residual ion. For open shell atoms the photoelectron-ion interaction is generally anisotropic; photoelectron phase shifts and electric dipole matrix elements depend on both the multiplet term of the residual ion and the total orbital momentum of the ion-photoelectron final state channel. Consequently beta depends on the term levels of the residual ion and contains contributions from all allowed values of j sub t. Numerical calculations of the asymmetry parameters and partial cross sections for photoionization of atomic sulfur are presented.
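
    For orientation, the asymmetry parameter beta enters the electric-dipole photoelectron angular distribution in the standard way (for linearly polarized light, with theta measured from the polarization direction):

        \frac{d\sigma}{d\Omega} = \frac{\sigma}{4\pi}\left[1 + \beta\,P_{2}(\cos\theta)\right],
        \qquad P_{2}(x) = \tfrac{1}{2}\left(3x^{2}-1\right),

    so the angular-momentum-transfer expansion described above is an expansion of this single observable beta into j sub t contributions.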

  6. Two- and three-dimensional natural and mixed convection simulation using modular zonal models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurtz, E.; Nataf, J.M.; Winkelmann, F.

    We demonstrate the use of the zonal model approach, which is a simplified method for calculating natural and mixed convection in rooms. Zonal models use a coarse grid and use balance equations, state equations, hydrostatic pressure drop equations and power law equations of the form m = C({Delta}P){sup n}. The advantages of the zonal approach and its modular implementation are discussed. The zonal model resolution of nonlinear equation systems is demonstrated for three cases: a 2-D room, a 3-D room and a pair of 3-D rooms separated by a partition with an opening. A sensitivity analysis with respect to physical parameters and grid coarseness is presented. Results are compared to computational fluid dynamics (CFD) calculations and experimental data.
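
    As a concrete toy example of the zonal balance described above (a two-zone sketch with assumed coefficients, not the modular implementation discussed in the report): steady mass conservation in each zone plus the power-law flow relation through each opening gives a small nonlinear system for the zone pressures.

        # Two-zone steady-state mass balance with power-law flow elements:
        #   supply -> zone 1 -> zone 2 -> outside,  m = C * dP**n  (illustrative coefficients).
        import numpy as np
        from scipy.optimize import fsolve

        m_in = 0.05              # imposed supply air flow into zone 1 [kg/s]
        C12, C2o = 0.02, 0.03    # flow coefficients of the two openings [kg/(s Pa^n)]
        n = 0.5                  # power-law exponent (orifice-like flow)

        def flow(C, dP):
            # Signed power law so the solver can explore dP < 0 without complex numbers.
            return C * np.sign(dP) * np.abs(dP) ** n

        def residuals(p):
            p1, p2 = p
            r1 = m_in - flow(C12, p1 - p2)                   # mass balance, zone 1
            r2 = flow(C12, p1 - p2) - flow(C2o, p2 - 0.0)    # mass balance, zone 2 (outside at 0 Pa)
            return [r1, r2]

        p1, p2 = fsolve(residuals, x0=[5.0, 2.0])
        print(f"zone pressures: P1 = {p1:.2f} Pa, P2 = {p2:.2f} Pa")

    A full zonal model simply scales this idea up: one balance equation per zone (and per conserved quantity), one power-law element per connection, solved simultaneously.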

  7. Weekly Gridded Aquarius L-band Radiometer-Scatterometer Observations and Salinity Retrievals over the Polar Regions - Part 2: Initial Product Analysis

    NASA Technical Reports Server (NTRS)

    Brucker, L.; Dinnat, E. P.; Koenig, L. S.

    2014-01-01

    Following the development and availability of Aquarius weekly polar-gridded products, this study presents the spatial and temporal radiometer and scatterometer observations at L band (frequency 1.4 GHz) over the cryosphere, including the Greenland and Antarctic ice sheets, sea ice in both hemispheres, and sub-Arctic land for monitoring the soil freeze-thaw state. We provide multiple examples of scientific applications for the L-band data over the cryosphere. For example, we show that over the Greenland Ice Sheet, the unusual 2012 melt event led to a sustained decrease of 5 K in L-band brightness temperature (TB) at horizontal polarization. Over the Antarctic ice sheet, normalized radar cross section (NRCS) observations recorded during ascending and descending orbits are significantly different, highlighting the anisotropy of the ice cover. Over sub-Arctic land, both passive and active observations show distinct values depending on the soil physical state (freeze-thaw). Aquarius sea surface salinity (SSS) retrievals in the polar waters are also presented. SSS variations could serve as an indicator of fresh water input to the ocean from the cryosphere; however, the presence of sea ice often contaminates the SSS retrievals, hindering the analysis. The weekly gridded Aquarius L-band products used are redistributed by the US Snow and Ice Data Center at http://nsidc.org/data/aquarius/index.html, and show potential for cryospheric studies.

  8. Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities

    NASA Astrophysics Data System (ADS)

    Doster, F.; Celia, M. A.; Nordbotten, J. M.

    2012-12-01

    Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique because the parameters and parameter functions of classical formulations of two-phase flow in porous media - the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations - show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities is dependent on the vertical length scale. By studying the transition between the two extreme cases - the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents - we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents. Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.

  9. High-resolution daily gridded datasets of air temperature and wind speed for Europe

    NASA Astrophysics Data System (ADS)

    Brinckmann, S.; Krähenmann, S.; Bissolli, P.

    2015-08-01

    New high-resolution datasets for near-surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are hourly SYNOP observations, partly supplemented by station data from the ECA&D dataset (http://www.ecad.eu). These data are quality tested to eliminate erroneous data and various kinds of inhomogeneities. Grids at a resolution of 0.044° (about 5 km) are derived by spatial interpolation of these station data into the CORDEX area. For temperature interpolation a modified version of a regression kriging method developed by Krähenmann et al. (2011) is used. First, predictor fields of altitude, continentality and zonal mean temperature are chosen for a regression applied to monthly station data. The residuals of the monthly regression and the deviations of the daily data from the monthly averages are interpolated using simple kriging in a second and third step. For wind speed a new method based on the concept used for temperature was developed, involving predictor fields of exposure, roughness length, coastal distance and ERA-Interim reanalysis wind speed at 850 hPa. Interpolation uncertainty is estimated by means of the kriging variance and regression uncertainties. Furthermore, to assess the quality of the final daily grid data, cross validation is performed. Explained variance ranges from 70 to 90 % for monthly temperature and from 50 to 60 % for monthly wind speed. The resulting RMSE for the final daily grid data amounts to 1-2 °C for the daily temperature parameters and 1-1.5 m s-1 for daily mean wind speed, depending on season and parameter. The datasets presented in this article are published at http://dx.doi.org/10.5676/DWD_CDC/DECREG0110v1.

  10. LPV Modeling of a Flexible Wing Aircraft Using Modal Alignment and Adaptive Gridding Methods

    NASA Technical Reports Server (NTRS)

    Al-Jiboory, Ali Khudhair; Zhu, Guoming; Swei, Sean Shan-Min; Su, Weihua; Nguyen, Nhan T.

    2017-01-01

    One of the earliest approaches in gain-scheduling control is the gridding-based approach, in which a set of local linear time-invariant models is obtained at various gridded points corresponding to the varying parameters within the flight envelope. In order to ensure smooth and effective Linear Parameter-Varying control, aligning all the flexible modes within each local model and maintaining a small number of representative local models over the gridded parameter space are crucial. In addition, since the flexible structural models tend to have large dimensions, a tractable model reduction process is necessary. In this paper, the notions of the s-shifted H2- and H-infinity norms are introduced and used as a metric to measure the model mismatch. A new modal alignment algorithm is developed which utilizes the defined metric for aligning all the local models over the entire gridded parameter space. Furthermore, an Adaptive Grid Step Size Determination algorithm is developed to minimize the number of local models required to represent the gridded parameter space. For model reduction, we propose to utilize the concept of Composite Modal Cost Analysis, through which the collective contribution of each flexible mode is computed and ranked. Therefore, a reduced-order model is constructed by retaining only those modes with significant contribution. The NASA Generic Transport Model operating at various flight speeds is studied for verification purposes, and the analysis and simulation results demonstrate the effectiveness of the proposed modeling approach.

  11. The diskmass survey. VIII. On the relationship between disk stability and star formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Westfall, Kyle B.; Verheijen, Marc A. W.; Andersen, David R.

    2014-04-10

    We study the relationship between the stability level of late-type galaxy disks and their star-formation activity using integral-field gaseous and stellar kinematic data. Specifically, we compare the two-component (gas+stars) stability parameter of Romeo and Wiegert (Q_RW), incorporating stellar kinematic data for the first time, with the star-formation rate estimated from 21 cm continuum emission. We determine the stability level of each disk probabilistically using a Bayesian analysis of our data and a simple dynamical model. Our method incorporates the shape of the stellar velocity ellipsoid (SVE) and yields robust SVE measurements for over 90% of our sample. Averaging over this subsample, we find a meridional shape of σ_z/σ_R = 0.51 (+0.36/−0.25) for the SVE and, at 1.5 disk scale lengths, a stability parameter of Q_RW = 2.0 ± 0.9. We also find that the disk-averaged star-formation-rate surface density (Σ̇_e,*) is correlated with the disk-averaged gas and stellar mass surface densities (Σ_e,g and Σ_e,*) and anti-correlated with Q_RW. We show that an anti-correlation between Σ̇_e,* and Q_RW can be predicted using empirical scaling relations, such that this outcome is consistent with well-established statistical properties of star-forming galaxies. Interestingly, Σ̇_e,* is not correlated with the gas-only or star-only Toomre parameters, demonstrating the merit of calculating a multi-component stability parameter when comparing to star-formation activity. Finally, our results are consistent with the Ostriker et al. model of self-regulated star formation, which predicts Σ̇_e,*/Σ_e,g ∝ Σ_e,*^(1/2). Based on this and other theoretical expectations, we discuss the possibility of a physical link between disk stability level and star-formation rate in light of our empirical results.
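
    For orientation, a minimal sketch of a two-component (gas + stars) stability parameter in the spirit of Romeo and Wiegert is given below; the disk-thickness corrections used in the paper are omitted here, so treat the combination rule as illustrative rather than as the authors' exact formulation.

    ```python
    import numpy as np

    G = 4.301e-3                     # gravitational constant in pc (km/s)^2 / Msun

    def toomre_q_gas(kappa, sigma_g, Sigma_g):
        """Gas Toomre parameter: kappa [km/s/pc], sigma_g [km/s], Sigma_g [Msun/pc^2]."""
        return kappa * sigma_g / (np.pi * G * Sigma_g)

    def toomre_q_stars(kappa, sigma_R, Sigma_s):
        """Stellar Toomre parameter (radial velocity dispersion sigma_R)."""
        return kappa * sigma_R / (3.36 * G * Sigma_s)

    def q_two_component(Q_s, Q_g, sigma_s, sigma_g):
        """Combine stellar and gas Q with a velocity-dispersion weight W
        (thin-disk form; thickness corrections omitted)."""
        W = 2.0 * sigma_s * sigma_g / (sigma_s**2 + sigma_g**2)
        if Q_s >= Q_g:
            return 1.0 / (W / Q_s + 1.0 / Q_g)
        return 1.0 / (1.0 / Q_s + W / Q_g)
    ```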

  12. Spectral Analysis of B Stars: An Application of Bayesian Statistics

    NASA Astrophysics Data System (ADS)

    Mugnes, J.-M.; Robert, C.

    2012-12-01

    To better understand the processes involved in stellar physics, it is necessary to obtain accurate stellar parameters (effective temperature, surface gravity, abundances…). Spectral analysis is a powerful tool for investigating stars, but it is also vital to reduce uncertainties at a decent computational cost. Here we present a spectral analysis method based on a combination of Bayesian statistics and grids of synthetic spectra obtained with TLUSTY. This method simultaneously constrains the stellar parameters by using all the lines accessible in observed spectra and thus greatly reduces uncertainties and improves the overall spectrum fitting. Preliminary results are shown using spectra from the Observatoire du Mont-Mégantic.
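
    The grid-based Bayesian idea can be sketched compactly: evaluate a likelihood over every synthetic spectrum in the grid and normalize to a posterior over the grid points. This is only an illustration with a Gaussian, independent-pixel likelihood and a flat prior, not the authors' TLUSTY pipeline.

    ```python
    import numpy as np

    def log_likelihood(obs_flux, obs_err, model_flux):
        """Gaussian likelihood using all observed pixels (i.e. all accessible lines)."""
        return -0.5 * np.sum(((obs_flux - model_flux) / obs_err) ** 2)

    def grid_posterior(obs_flux, obs_err, grid_fluxes):
        """grid_fluxes: dict mapping (Teff, logg, ...) -> model flux on the observed wavelengths."""
        params = list(grid_fluxes)
        lnp = np.array([log_likelihood(obs_flux, obs_err, grid_fluxes[p]) for p in params])
        lnp -= lnp.max()                 # avoid underflow before exponentiating
        post = np.exp(lnp)
        return dict(zip(params, post / post.sum()))   # flat prior over the grid points
    ```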

  13. FY2017 Electrification Annual Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    During fiscal year 2017 (FY 2017), the U.S. Department of Energy (DOE) Vehicle Technologies Office (VTO) funded early-stage research and development (R&D) projects that address Batteries and Electrification of the U.S. transportation sector. The VTO Electrification Sub-Program is composed of Electric Drive Technologies and Grid Integration activities. The Electric Drive Technologies group conducts R&D projects that advance Electric Motors and Power Electronics technologies. The Grid and Charging Infrastructure group conducts R&D projects that advance Grid Modernization and Electric Vehicle Charging technologies. This document presents a brief overview of the Electrification Sub-Program and progress reports for its R&D projects. Each of the progress reports provides a project overview and highlights of the technical results that were accomplished in FY 2017.

  14. Ground-Based Robotic Sensing of an Agricultural Sub-Canopy Environment

    NASA Astrophysics Data System (ADS)

    Burns, A.; Peschel, J.

    2015-12-01

    Airborne remote sensing is a useful method for measuring agricultural crop parameters over large areas; however, the approach becomes limited to above-canopy characterization as a crop matures, due to reduced visual access of the sub-canopy environment. During the growth cycle of an agricultural crop such as soybeans, the micrometeorology of the sub-canopy environment can significantly impact pod development, and reduced yields may result. Larger-scale environmental conditions aside, the physical structure and configuration of the sub-canopy matrix will logically influence local climate conditions for a single plant; understanding the state and development of the sub-canopy could inform crop models and improve best practices, but there are currently no low-cost methods to quantify the sub-canopy environment at high spatial and temporal resolution over an entire growth cycle. This work describes the modification of a small, semi-autonomous, ground-based tactical robotic platform with sensors capable of mapping the physical structure of an agricultural row-crop sub-canopy; a soybean crop is used as a case study. Point cloud data representing the sub-canopy structure are stored in LAS format and can be used for modeling and visualization in standard GIS software packages.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Ma, Po-Lun; Singh, Balwinder

    The physics suite of the Community Atmosphere Model version 5 (CAM5) has recently been implemented in the Weather Research and Forecasting (WRF) model to explore the behavior of the parameterization suite at high resolution and in the more controlled setting of a limited-area model. The initial paper documenting this capability characterized the behavior for northern high-latitude conditions. The present paper characterizes the precipitation behavior for continental, mid-latitude, springtime conditions during the Midlatitude Continental Convective Clouds Experiment (MC3E) over the central United States. This period exhibited a range of convective conditions, from those driven strongly by large-scale synoptic regimes to more locally driven convection. The study focuses on the precipitation behavior at 32 km grid spacing to better anticipate how the physics will behave in the global model when used at similar grid spacing in the coming years. Importantly, one change to the Zhang-McFarlane deep convective parameterization when implemented in WRF was to make the convective timescale parameter an explicit function of grid spacing. This study examines the sensitivity of the precipitation to this parameter, comparing the default WRF value of 600 seconds for 32 km grid spacing with the value of 3600 seconds used for 2 degree grid spacing in CAM5. For comparison, an infinite convective timescale is also used. The results show that the 600 second timescale gives the most accurate precipitation over the central United States in terms of rain amount. However, this setting has the worst precipitation diurnal cycle, with the convection too tightly linked to the daytime surface heating. Longer timescales greatly improve the diurnal cycle but result in less precipitation and produce a low bias. An analysis of rain rates shows that the accurate precipitation amount obtained with the shorter timescale is assembled from an overabundance of drizzle combined with too few heavy rain events. With longer timescales the distribution improves, particularly for the extreme rain rates. Ultimately, without changing other aspects of the physics, one must choose between accurate diurnal timing and accurate rain amount when selecting a convective timescale.
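
    The abstract gives two anchor values for the convective timescale (600 s at 32 km grid spacing and 3600 s at roughly 2 degree, or about 200 km, spacing) but not the functional form used in the implementation; the log-linear interpolation below is purely an illustrative assumption consistent with those two anchors.

    ```python
    import numpy as np

    def convective_timescale(dx_km, dx_ref=(32.0, 200.0), tau_ref=(600.0, 3600.0)):
        """Hypothetical grid-spacing-dependent convective timescale [s]."""
        log_tau = np.interp(np.log(dx_km), np.log(dx_ref), np.log(tau_ref))
        return float(np.exp(log_tau))

    print(convective_timescale(32.0))    # 600 s  (WRF default at 32 km in this study)
    print(convective_timescale(200.0))   # 3600 s (CAM5 value at ~2 degrees)
    ```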

  16. Measurement of the τ Neutrino Helicity and Michel Parameters in Polarized e+e- Collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steiner, R.; Benvenuti, A.C.; Coller, J.A.

    1997-06-01

    We present a new measurement of the τ neutrino helicity h_ντ and the τ Michel parameters ρ, η, ξ, and the product δξ. The analysis exploits the highly polarized SLC electron beam to extract these quantities directly from a measurement of the τ decay spectra, using the 1993-1995 SLD data sample of 4328 e+e- → Z0 → τ+τ- events. From the decays τ → πν_τ and τ → ρν_τ we obtain a combined value h_ντ = −0.93 ± 0.10 ± 0.04. The leptonic decay channels yield combined values of ρ = 0.72 ± 0.09 ± 0.03, ξ = 1.05 ± 0.35 ± 0.04, and δξ = 0.88 ± 0.27 ± 0.04. © 1997 The American Physical Society

  17. Discretization of three-dimensional free surface flows and moving boundary problems via elliptic grid methods based on variational principles

    NASA Astrophysics Data System (ADS)

    Fraggedakis, D.; Papaioannou, J.; Dimakopoulos, Y.; Tsamopoulos, J.

    2017-09-01

    A new boundary-fitted technique to describe free surface and moving boundary problems is presented. We have extended the 2D elliptic grid generator developed by Dimakopoulos and Tsamopoulos (2003) [19] and further advanced by Chatzidai et al. (2009) [18] to 3D geometries. The set of equations arises from the fulfillment of the variational principles established by Brackbill and Saltzman (1982) [21], and refined by Christodoulou and Scriven (1992) [22]. These account for both smoothness and orthogonality of the grid lines of tessellated physical domains. The elliptic-grid equations are accompanied by new boundary constraints and conditions which are based either on the equidistribution of the nodes on boundary surfaces or on the existing 2D quasi-elliptic grid methodologies. The capabilities of the proposed algorithm are first demonstrated in tests with analytically described complex surfaces. The sequence in which these tests are presented is chosen to help the reader build up experience on the best choice of the elliptic grid parameters. Subsequently, the mesh equations are coupled with the Navier-Stokes equations, in order to reveal the full potential of the proposed methodology in free surface flows. More specifically, the problem of gas assisted injection in ducts of circular and square cross-sections is examined, where the fluid domain experiences extreme deformations. Finally, the flow-mesh solver is used to calculate the equilibrium shapes of static menisci in capillary tubes.
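
    As a much-simplified stand-in for the variational elliptic generator, the sketch below relaxes the interior node coordinates of a structured 2D grid with a Laplace (smoothness-only) iteration while boundary nodes stay fixed; the paper's method additionally enforces orthogonality and boundary-node equidistribution, and does so in 3D.

    ```python
    import numpy as np

    def laplace_smooth(x, y, n_iter=500):
        """x, y: 2-D float arrays of node coordinates; boundary rows/columns are held fixed."""
        for _ in range(n_iter):
            x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] + x[1:-1, 2:] + x[1:-1, :-2])
            y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] + y[1:-1, 2:] + y[1:-1, :-2])
        return x, y
    ```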

  18. Analytical Computation of Effective Grid Parameters for the Finite-Difference Seismic Waveform Modeling With the PREM, IASP91, SP6, and AK135

    NASA Astrophysics Data System (ADS)

    Toyokuni, G.; Takenaka, H.

    2007-12-01

    We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models using analytical means. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside the grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters), calculated by volume harmonic averaging of the elastic moduli and volume arithmetic averaging of the density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most methods used for synthetic seismogram calculation today rely on standard Earth models, such as the PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For the FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters for arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which can be used in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
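
    The averaging rule itself (volume harmonic mean of the modulus, volume arithmetic mean of the density) is easy to illustrate; the sketch below does the shell integrals by simple numerical quadrature, whereas the paper evaluates them analytically for the polynomial standard Earth models. `mu_of_r` and `rho_of_r` are hypothetical callables returning the model values at radius r.

    ```python
    import numpy as np

    def effective_parameters(r_lo, r_hi, mu_of_r, rho_of_r, n=2001):
        """Effective (mu, rho) for a spherical-shell grid cell [r_lo, r_hi] that may
        contain a material discontinuity."""
        r = np.linspace(r_lo, r_hi, n)
        w = r**2                                        # volume-element weight for a shell
        mu_eff = w.sum() / (w / mu_of_r(r)).sum()       # volume harmonic mean of the modulus
        rho_eff = (w * rho_of_r(r)).sum() / w.sum()     # volume arithmetic mean of the density
        return mu_eff, rho_eff
    ```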

  19. Reliability analysis for the smart grid : from cyber control and communication to physical manifestations of failure.

    DOT National Transportation Integrated Search

    2010-01-01

    The Smart Grid is a cyber-physical system comprised of physical components, such as transmission lines and generators, and a : network of embedded systems deployed for their cyber control. Our objective is to qualitatively and quantitatively analyze ...

  20. Evaluation, Calibration and Comparison of the Precipitation-Runoff Modeling System (PRMS) National Hydrologic Model (NHM) Using Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) Gridded Datasets

    NASA Astrophysics Data System (ADS)

    Norton, P. A., II; Haj, A. E., Jr.

    2014-12-01

    The United States Geological Survey is currently developing a National Hydrologic Model (NHM) to support and facilitate coordinated and consistent hydrologic modeling efforts at the scale of the continental United States. As part of this effort, the Geospatial Fabric (GF) for the NHM was created. The GF is a database that contains parameters derived from datasets that characterize the physical features of watersheds. The GF was used to aggregate catchments and flowlines defined in the National Hydrography Dataset Plus dataset for more than 100,000 hydrologic response units (HRUs), and to establish initial parameter values for input to the Precipitation-Runoff Modeling System (PRMS). Many parameter values are adjusted in PRMS using an automated calibration process. Using these adjusted parameter values, the PRMS model estimated variables such as evapotranspiration (ET), potential evapotranspiration (PET), snow-covered area (SCA), and snow water equivalent (SWE). In order to evaluate the effectiveness of parameter calibration, and model performance in general, several satellite-based Moderate Resolution Imaging Spectroradiometer (MODIS) and Snow Data Assimilation System (SNODAS) gridded datasets including ET, PET, SCA, and SWE were compared to PRMS-simulated values. The MODIS and SNODAS data were spatially averaged for each HRU, and compared to PRMS-simulated ET, PET, SCA, and SWE values for each HRU in the Upper Missouri River watershed. Default initial GF parameter values and PRMS calibration ranges were evaluated. Evaluation results, and the use of MODIS and SNODAS datasets to update GF parameter values and PRMS calibration ranges, are presented and discussed.
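
    A minimal sketch of the comparison step (not USGS code): spatially average a gridded dataset such as MODIS ET or SNODAS SWE over each HRU, given a raster of HRU ids aligned with the data grid, so the result can be compared against the PRMS-simulated value for that HRU.

    ```python
    import numpy as np

    def hru_means(field, hru_ids):
        """Return {hru_id: spatial mean of `field` over that HRU}; NaNs are skipped."""
        out = {}
        valid = ~np.isnan(field)
        for hid in np.unique(hru_ids):
            mask = (hru_ids == hid) & valid
            out[int(hid)] = float(field[mask].mean()) if mask.any() else np.nan
        return out
    ```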

  1. Parameterizing Grid-Averaged Longwave Fluxes for Inhomogeneous Marine Boundary Layer Clouds

    NASA Technical Reports Server (NTRS)

    Barker, Howard W.; Wielicki, Bruce A.

    1997-01-01

    This paper examines the relative impacts on grid-averaged longwave flux transmittance (emittance) for Marine Boundary Layer (MBL) cloud fields arising from horizontal variability of optical depth τ and from cloud sides. First, using fields of Landsat-inferred τ and a Monte Carlo photon transport algorithm, it is demonstrated that mean all-sky transmittances for 3D variable MBL clouds can be computed accurately by the conventional method of linearly weighting clear and cloudy transmittances by their respective sky fractions. Then, the approximations of decoupling cloud and radiative properties and assuming independent columns are shown to be adequate for computation of mean flux transmittance. Since real clouds have nonzero geometric thicknesses, cloud fractions A'_c presented to isotropic beams usually exceed the more familiar vertically projected cloud fractions A_c. It is shown, however, that when A_c ≤ 0.9, biases for all-sky transmittance stemming from use of A_c as opposed to A'_c are roughly 2-5 times smaller than, and opposite in sign to, biases due to neglect of horizontal variability of τ. By neglecting variable τ, all-sky transmittances are underestimated, often by more than 0.1 for A_c near 0.75, and this translates into relative errors that can exceed 40% (corresponding errors for all-sky emittance are about 20% for most values of A_c). Thus, priority should be given to development of General Circulation Model (GCM) parameterizations that account for the effects of horizontal variations in unresolved τ; effects of cloud sides are of secondary importance. On this note, an efficient stochastic model for computing grid-averaged cloudy-sky flux transmittances is furnished, which assumes that distributions of τ for regions comparable in size to GCM grid cells can be described adequately by gamma distribution functions. While the plane-parallel, homogeneous model underestimates cloud transmittance by about an order of magnitude when 3D variable cloud transmittances are less than or equal to 0.2, and by approximately 20% to 100% otherwise, the stochastic model reduces these biases, often by more than 80%.
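
    The gamma-weighting step has a convenient closed form if one adopts a simple diffusivity-factor transmittance exp(-1.66 τ), which is an illustrative assumption rather than the paper's scheme: the average of exp(-Dτ) over a gamma distribution with mean τ̄ and shape ν is (1 + Dτ̄/ν)^(-ν).

    ```python
    import math

    def grid_avg_transmittance(tau_mean, nu, diffusivity=1.66):
        """Gamma-weighted mean of exp(-D*tau); nu is the gamma shape parameter
        (variance = tau_mean**2 / nu). Closed form via the gamma moment generating function."""
        theta = tau_mean / nu                       # gamma scale parameter
        return (1.0 + diffusivity * theta) ** (-nu)

    # The plane-parallel homogeneous value exp(-D * tau_mean) is always smaller,
    # i.e. it underestimates the grid-averaged transmittance, as discussed above.
    print(grid_avg_transmittance(5.0, nu=1.0))      # ~0.11
    print(math.exp(-1.66 * 5.0))                    # ~2.5e-4
    ```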

  2. Influences of the inner retinal sublayers and analytical areas in macular scans by spectral-domain OCT on the diagnostic ability of early glaucoma.

    PubMed

    Nakatani, Yusuke; Higashide, Tomomi; Ohkubo, Shinji; Sugiyama, Kazuhisa

    2014-10-23

    We investigated the influences of the inner retinal sublayers and analytical areas in macular scans by spectral-domain optical coherence tomography (OCT) on the diagnostic ability for early glaucoma. A total of 64 early (including 24 preperimetric) glaucomatous and 40 normal eyes underwent macular and peripapillary retinal nerve fiber layer (pRNFL) scans (3D-OCT-2000). The area under the receiver operating characteristic curve (AUC) for glaucoma diagnosis was determined from the average thickness of the total 100 grids (6 × 6 mm), central 44 grids (3.6 × 4.8 mm), and peripheral 56 grids (outside of the 44 grids), and for each macular sublayer: macular RNFL (mRNFL), ganglion cell layer plus inner plexiform layer (GCL/IPL), and mRNFL plus GCL/IPL (ganglion cell complex [GCC]). Correlation of OCT parameters with visual field parameters was evaluated by Spearman's rank correlation coefficients (r_s). The GCC-related parameters had a significantly larger AUC (0.82-0.97) than GCL/IPL (0.81-0.91), mRNFL-related parameters (0.72-0.94), or average pRNFL (0.88) in more than half of all comparisons. The central 44 grids had a significantly lower AUC than the other analytical areas in GCC and mRNFL thickness. Conversely, the peripheral 56 grids had a significantly lower AUC than the 100 grids in GCL/IPL inferior thickness. Inferior thickness of GCC (r_s, 0.45-0.49) and mRNFL (r_s, 0.43-0.51) showed comparably high correlations with central visual field parameters to average pRNFL thickness (r_s, 0.41, 0.47), even in the central 44 grids. The diagnostic ability of macular OCT parameters for early glaucoma differed by inner retinal sublayer and also by the analytical area studied. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  3. Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System

    NASA Astrophysics Data System (ADS)

    Yue, Liu; Hang, Mend

    2018-01-01

    With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission of wind power is gradually becoming the main way to improve wind power availability and grid stability; however, the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. Addressing the SSO problem caused by the integration of large-scale wind farms, this paper focuses on doubly fed induction generator (DFIG) based wind farms and summarizes the SSO mechanisms in large-scale, series-compensated wind power systems, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). SSO modelling and analysis methods are then categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.

  4. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards a better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters above. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
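
    An illustrative sketch of generating the quasi-Monte Carlo design (a Sobol sequence scaled to the prior ranges); the parameter names and ranges here are placeholders, not the CLM4.5 values used in the study.

    ```python
    from scipy.stats import qmc

    names    = ["conductance_slope", "sla_top", "leaf_cn", "flnr"]   # hypothetical parameters
    l_bounds = [4.0, 0.010, 20.0, 0.05]                              # illustrative prior ranges
    u_bounds = [9.0, 0.040, 40.0, 0.20]

    sampler = qmc.Sobol(d=len(names), scramble=True, seed=0)
    unit    = sampler.random_base2(m=10)            # 2**10 = 1024 design points in [0, 1)^d
    design  = qmc.scale(unit, l_bounds, u_bounds)   # each row is one parameter set to simulate
    ```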

  5. Sonora: A New Generation Model Atmosphere Grid for Brown Dwarfs and Young Extrasolar Giant Planets

    NASA Technical Reports Server (NTRS)

    Marley, Mark S.; Saumon, Didier; Fortney, Jonathan J.; Morley, Caroline; Lupu, Roxana Elena; Freedman, Richard; Visscher, Channon

    2017-01-01

    Brown dwarf and giant planet atmospheric structure and composition have been studied both by forward models and, increasingly, by retrieval methods. While indisputably informative, retrieval methods are of greatest value when judged in the context of grid model predictions. Meanwhile, retrieval models can test the assumptions inherent in the forward modeling procedure. In order to provide a new, systematic survey of brown dwarf atmospheric structure, emergent spectra, and evolution, we have constructed a new grid of brown dwarf model atmospheres. We ultimately aim for our grid to span substantial ranges of atmospheric metallicity, C/O ratio, cloud properties, atmospheric mixing, and other parameters. Spectra predicted by our modeling grid can be compared to both observations and retrieval results to aid in the interpretation and planning of future telescopic observations. We thus present Sonora, a new generation of substellar atmosphere models, appropriate for application to studies of L-, T-, and Y-type brown dwarfs and young extrasolar giant planets. The models describe the expected temperature-pressure profile and emergent spectra of an atmosphere in radiative-convective equilibrium for effective temperatures and gravities encompassing 200 K ≤ T_eff ≤ 2400 K and 2.5 ≤ log g ≤ 5.5. In our poster we briefly describe our modeling methodology, enumerate various updates since our group's previous models, and present our initial tranche of models for cloudless, solar-metallicity, solar carbon-to-oxygen ratio, chemical equilibrium atmospheres. These models will be available online and will be updated as opacities and cloud modeling methods continue to improve.

  6. Evaluation of CASL boiling model for DNB performance in full scale 5x5 fuel bundle with spacer grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seung Jun

    As one of the main tasks of the FY17 CASL-THM activity, an evaluation study on the applicability of the CASL baseline boiling model to 5x5 DNB applications is conducted, and the predictive capability of the DNB analysis is reported here. While the baseline CASL boiling model (GEN-1A) approach was successfully implemented and validated with a single-pipe application in the previous year's task, the extended DNB validation for realistic sub-channels with detailed spacer grid configurations is tasked in FY17. The focus of the current study is to demonstrate the robustness and feasibility of the CASL baseline boiling model for DNB performance in a full 5x5 fuel bundle application. A quantitative evaluation of the DNB predictive capability is performed by comparing with corresponding experimental measurements (i.e., the reference for model validation). The reference data are provided by the Westinghouse Electric Company (WEC). The two grid configurations tested here are the Non-Mixing Vane Grid (NMVG) and the Mixing Vane Grid (MVG). Thorough validation studies with the two sub-channel configurations are performed over a wide range of realistic PWR operational conditions.

  7. Improved mask-based CD uniformity for gridded-design-rule lithography

    NASA Astrophysics Data System (ADS)

    Faivishevsky, Lev; Khristo, Sergey; Sagiv, Amir; Mangan, Shmoolik

    2009-03-01

    The difficulties encountered during lithography of state-of-the-art 2D patterns are formidable, and originate from the fact that deep sub-wavelength features are being printed. This results in a practical limit of k1 >= 0.4 as well as a multitude of complex restrictive design rules intended to mitigate or minimize lithographic hot spots. An alternative approach, which is gradually attracting the lithographic community's attention, restricts the design of critical layers to straight, dense lines (a 1D grid) that can be relatively easily printed using current lithographic technology. This is then followed by subsequent, less critical trimming stages to obtain circuit functionality. Thus, the 1D gridded approach allows hotspot-free, proximity-effect-free lithography of ultra-low-k1 features. These advantages must be supported by a stable CD control mechanism. One of the overriding parameters impacting CDU performance is photomask quality. Previous publications have demonstrated that IntenCD™ - a novel, mask-based CDU mapping technology running on Applied Materials' Aera2™ aerial imaging mask inspection tool - is ideally suited for detecting mask-based CDU issues in 1D (L&S) patterned masks for memory production. Owing to the aerial nature of image formation, IntenCD directly probes the CD as it is printed on the wafer. In this paper we suggest that IntenCD is naturally suited for detecting mask-based CDU issues in 1D GDR masks. We then study a novel method of recovering and quantifying the physical source of printed CDU, using a novel implementation of the IntenCD technology. We demonstrate that additional, simple measurements, which can be readily performed on board the Aera2 platform with minimal throughput penalty, may complement IntenCD and allow a robust estimation of the specific nature and strength of the mask error source, such as pattern width variation or phase variation, which leads to CDU issues on the printed wafer. We finally discuss the roles played by IntenCD in advanced GDR mask production, starting with tight control over the mask production process, continuing with mask qualification at the mask shop, and ending with in-line wafer CDU correction in fabs.

  8. Global hydrodynamic modelling of flood inundation in continental rivers: How can we achieve it?

    NASA Astrophysics Data System (ADS)

    Yamazaki, D.

    2016-12-01

    Global-scale modelling of river hydrodynamics is essential for understanding the global hydrological cycle, and is also required in interdisciplinary research fields. Global river models have been developed continuously for more than two decades, but modelling river flow at a global scale is still a challenging topic because surface water movement in continental rivers is a multi-spatial-scale phenomenon. We have to consider the basin-wide water balance (>1000 km scale), while hydrodynamics in river channels and floodplains is regulated by much smaller-scale topography (<100 m scale). For example, heavy precipitation in upstream regions may later cause flooding in the farthest downstream reaches. In order to realistically simulate the timing and amplitude of flood wave propagation over a long distance, consideration of detailed local topography is unavoidable. I have developed the global hydrodynamic model CaMa-Flood to overcome this scale discrepancy of continental river flow. CaMa-Flood divides river basins into multiple "unit-catchments", and assumes the water level is uniform within each unit-catchment. One unit-catchment is assigned to each grid box defined at the typical spatial resolution of global climate models (10-100 km scale). Adopting a uniform water level in a >10 km river segment seems to be a big assumption, but it is actually a good approximation for hydrodynamic modelling of continental rivers. The number of grid points required for global hydrodynamic simulations is largely reduced by this unit-catchment assumption. As an alternative to calculating 2-dimensional floodplain flows as in regional flood models, CaMa-Flood treats floodplain inundation in a unit-catchment as a sub-grid process. The water level and inundated area in each unit-catchment are diagnosed from water volume using topography parameters derived from high-resolution digital elevation models. Thus, CaMa-Flood is at least 1000 times more computationally efficient than regional flood inundation models, while the realism of the simulated flood dynamics is retained. I will explain in detail how the CaMa-Flood model has been constructed from high-resolution topography datasets, and how the model can be used for various interdisciplinary applications.
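
    A simplified sketch of the sub-grid inundation diagnosis (not the CaMa-Flood source code): water level and flooded fraction are recovered from stored water volume by inverting a storage curve built from a hypsometric profile, i.e. floodplain elevation as a function of flooded fraction derived offline from a high-resolution DEM.

    ```python
    import numpy as np

    def diagnose_inundation(volume, area_cell, frac, z_profile):
        """
        volume    : floodplain water volume in the unit-catchment [m3]
        area_cell : unit-catchment area [m2]
        frac      : increasing flooded-fraction bins, e.g. np.linspace(0, 1, 11)
        z_profile : floodplain elevation above the channel bank at each bin [m]
        Returns (water level above the bank [m], flooded fraction).
        """
        # Cumulative storage when the water surface sits at each z_profile level
        d_store = 0.5 * (frac[1:] + frac[:-1]) * (z_profile[1:] - z_profile[:-1]) * area_cell
        storage = np.concatenate(([0.0], np.cumsum(d_store)))
        if volume >= storage[-1]:                       # whole unit-catchment inundated
            return z_profile[-1] + (volume - storage[-1]) / area_cell, 1.0
        level = float(np.interp(volume, storage, z_profile))
        f_inund = float(np.interp(volume, storage, frac))
        return level, f_inund
    ```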

  9. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.

  10. Global Electricity Trade Network: Structures and Implications

    PubMed Central

    Ji, Ling; Jia, Xiaoping; Chiu, Anthony S. F.; Xu, Ming

    2016-01-01

    Nations increasingly trade electricity, and understanding the structure of the global power grid can help identify nations that are critical for its reliability. This study examines the global grid as a network with nations as nodes and international electricity trade as links. We analyze the structure of the global electricity trade network and find that the network consists of four sub-networks, and we provide a detailed analysis of the largest, Eurasia. Russia, China, Ukraine, and Azerbaijan have high betweenness measures in the Eurasian sub-network, indicating the central positions they hold. The analysis reveals that the Eurasian sub-network consists of seven communities based on the network structure. We find that the communities do not fully align with geographical proximity, and that the present international electricity trade in the Eurasian sub-network causes approximately 11 million additional tons of CO2 emissions. PMID:27504825

  11. Global Electricity Trade Network: Structures and Implications.

    PubMed

    Ji, Ling; Jia, Xiaoping; Chiu, Anthony S F; Xu, Ming

    2016-01-01

    Nations increasingly trade electricity, and understanding the structure of the global power grid can help identify nations that are critical for its reliability. This study examines the global grid as a network with nations as nodes and international electricity trade as links. We analyze the structure of the global electricity trade network and find that the network consists of four sub-networks, and we provide a detailed analysis of the largest, Eurasia. Russia, China, Ukraine, and Azerbaijan have high betweenness measures in the Eurasian sub-network, indicating the central positions they hold. The analysis reveals that the Eurasian sub-network consists of seven communities based on the network structure. We find that the communities do not fully align with geographical proximity, and that the present international electricity trade in the Eurasian sub-network causes approximately 11 million additional tons of CO2 emissions.

  12. Running GCM physics and dynamics on different grids: Algorithm and tests

    NASA Astrophysics Data System (ADS)

    Molod, A.

    2006-12-01

    The major drawback in the use of sigma coordinates in atmospheric GCMs, namely the error in the pressure gradient term near sloping terrain, leaves the use of eta coordinates an important alternative. A central disadvantage of an eta coordinate, the inability to retain fine resolution in the vertical as the surface rises above sea level, is addressed here. An 'alternate grid' technique is presented which allows the tendencies of state variables due to the physical parameterizations to be computed on a vertical grid (the 'physics grid') that retains fine resolution near the surface, while the remaining terms in the equations of motion are computed using an eta coordinate (the 'dynamics grid') with coarser vertical resolution. As a simple test of the technique, a set of perpetual-equinox experiments using a simplified lower boundary condition with no land and no topography was performed. The results show that for both low- and high-resolution alternate-grid experiments, much of the benefit of increased vertical resolution for the near-surface meridional wind (and mass streamfield) can be realized by enhancing the vertical resolution of the physics grid in the manner described here. In addition, approximately half of the increase in zonal jet strength seen with increased vertical resolution can be realized using the alternate-grid technique. A pair of full GCM experiments with realistic lower boundary conditions and topography was also performed. It is concluded that the alternate-grid approach offers a promising way forward to alleviate a central problem associated with the use of the eta coordinate in atmospheric GCMs.
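
    A hedged sketch of one way the coupling could work: physics tendencies computed on the fine physics grid are mapped back to the coarser dynamics layers by pressure-thickness (mass) weighting. The nesting assumption (each fine layer lies inside exactly one coarse layer) is an illustrative simplification, not necessarily the paper's scheme.

    ```python
    import numpy as np

    def aggregate_tendencies(tend_fine, dp_fine, fine_to_coarse, n_coarse):
        """tend_fine, dp_fine: tendency and pressure thickness per physics-grid layer;
        fine_to_coarse: index of the parent dynamics-grid layer for each fine layer."""
        num = np.zeros(n_coarse)
        den = np.zeros(n_coarse)
        np.add.at(num, fine_to_coarse, tend_fine * dp_fine)
        np.add.at(den, fine_to_coarse, dp_fine)
        return num / den          # mass-weighted mean tendency for each dynamics layer
    ```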

  13. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  14. Interpolation Environment of Tensor Mathematics at the Corpuscular Stage of Computational Experiments in Hydromechanics

    NASA Astrophysics Data System (ADS)

    Bogdanov, Alexander; Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Yulia

    2018-02-01

    The stages of direct computational experiments in hydromechanics based on tensor mathematics tools are represented by conditionally independent mathematical models, so that the calculations can be separated in accordance with the physical processes. The continuum stage of numerical modeling is constructed on a small time interval in a stationary grid space; here, the continuity conditions and energy conservation are coordinated. Then, at the subsequent corpuscular stage of the computational experiment, the kinematic parameters of the mass centers and the surface stresses at the boundaries of the grid cells are used to model the free unsteady motion of the volume cells, which are treated as independent particles. These particles can be subject to vortex and discontinuous interactions when restructuring of the free boundaries and of the internal rheological states takes place. The transition from one stage to another is provided by the interpolation operations of tensor mathematics. Such an interpolation environment formalizes the use of physical laws in continuum mechanics modeling and provides control of the rheological state and of the conditions for the existence of discontinuous solutions: rigid and free boundaries, vortex layers, and their turbulent or empirical generalizations.

  15. In situ determination of the static inductance and resistance of a plasma focus capacitor bank

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saw, S. H.; Institute for Plasma Focus Studies, 32 Oakpark Drive, Chadstone, Victoria 3148; Lee, S.

    2010-05-15

    The static (unloaded) electrical parameters of a capacitor bank are of utmost importance for the purpose of modeling the system as a whole when the capacitor bank is discharged into its dynamic electromagnetic load. Using a physical short circuit across the electromagnetic load is usually technically difficult and is unnecessary. The discharge can be operated at the highest pressure permissible in order to minimize current sheet motion, thus simulating zero dynamic load, to enable the bank parameters, static inductance L0 and resistance r0, to be obtained using lightly damped sinusoid equations given the bank capacitance C0. However, for a plasma focus, even at the highest permissible pressure it is found that there is significant residual motion, so that the assumption of a zero dynamic load introduces unacceptable errors into the determination of the circuit parameters. To overcome this problem, the Lee model code is used to fit the computed current trace to the measured current waveform. Hence the dynamics is incorporated into the solution, the capacitor bank parameters are computed using the Lee model code, and more accurate static bank parameters are obtained.
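
    For reference, the lightly damped sinusoid relations mentioned above reduce to two textbook formulas; the sketch below is only that simple estimate (which, as the abstract stresses, needs the Lee model fit for a plasma focus because of residual current-sheet motion). T is the measured ringing period and f the magnitude ratio of successive opposite-sign current peaks.

    ```python
    import numpy as np

    def static_bank_parameters(T, f, C0):
        """T: ringing period [s]; f: |second peak| / |first peak|; C0: bank capacitance [F]."""
        L0 = T**2 / (4.0 * np.pi**2 * C0)        # neglects the small damping correction to omega
        r0 = (4.0 * L0 / T) * np.log(1.0 / f)    # from the logarithmic decrement over half a period
        return L0, r0
    ```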

  16. Aggregation server for grid-integrated vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kempton, Willett

    2015-05-26

    Methods, systems, and apparatus for aggregating electric power flow between an electric grid and electric vehicles are disclosed. An apparatus for aggregating power flow may include a memory and a processor coupled to the memory to receive electric vehicle equipment (EVE) attributes from a plurality of EVEs, aggregate EVE attributes, predict total available capacity based on the EVE attributes, and dispatch at least a portion of the total available capacity to the grid. Power flow may be aggregated by receiving EVE operational parameters from each EVE, aggregating the received EVE operational parameters, predicting total available capacity based on the aggregated EVE operational parameters, and dispatching at least a portion of the total available capacity to the grid.
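
    A minimal, purely illustrative sketch of the aggregation idea in the claim language above (not the patented implementation): collect per-vehicle operational parameters, sum the available capacity, and dispatch part of it pro rata.

    ```python
    from dataclasses import dataclass

    @dataclass
    class EveStatus:                  # hypothetical operational parameters of one EVE
        max_power_kw: float           # charger/inverter power limit
        available: bool               # plugged in and enrolled for grid service

    def total_available_capacity(fleet):
        return sum(e.max_power_kw for e in fleet if e.available)

    def dispatch(fleet, request_kw):
        """Dispatch up to request_kw, split pro rata across available vehicles."""
        cap = total_available_capacity(fleet)
        share = min(request_kw, cap) / cap if cap > 0 else 0.0
        return {i: e.max_power_kw * share for i, e in enumerate(fleet) if e.available}
    ```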

  17. A Computational Investigation of Gear Windage

    NASA Technical Reports Server (NTRS)

    Hill, Matthew J.; Kunz, Robert F.

    2012-01-01

    A CFD method has been developed for application to gear windage aerodynamics. The goals of this research are to develop and validate numerical and modeling approaches for these systems, to develop physical understanding of the aerodynamics of gear windage loss, including the physics of loss mitigation strategies, and to propose and evaluate new approaches for minimizing loss. Absolute and relative frame CFD simulation, overset gridding, multiphase flow analysis, and sub-layer resolved turbulence modeling were brought to bear in achieving these goals. Several spur gear geometries were studied for which experimental data are available. Various shrouding configurations and free-spinning (no shroud) cases were studied. Comparisons are made with experimental data from the open literature, and with data recently obtained in the NASA Glenn Research Center Gear Windage Test Facility. The results show good agreement with experiment. Interrogation of the validation and exploratory CFD results has led, for the first time, to a detailed understanding of the physical mechanisms of gear windage loss, and has led to newly proposed mitigation strategies whose effectiveness is computationally explored.

  18. Atomic and molecular far-infrared lines from high redshift galaxies

    NASA Astrophysics Data System (ADS)

    Vallini, L.

    2015-03-01

    The advent of the Atacama Large Millimeter/submillimeter Array (ALMA), with its unprecedented sensitivity, makes possible the detection of far-infrared (FIR) metal cooling lines and molecular lines from the first galaxies that formed after the Big Bang. These lines represent a powerful tool to shed light on the physical properties of the interstellar medium (ISM) in high-redshift sources. In what follows we show the potential of a physically motivated theoretical approach that we developed to predict the ISM properties of high-redshift galaxies. The model allows one to infer, as a function of metallicity, the luminosities of various FIR lines observable with ALMA. It is based on high-resolution cosmological simulations of star-forming galaxies at the end of the Epoch of Reionization (z ≃ 6), further implemented with sub-grid physics describing the cooling and heating processes that take place in the neutral diffuse ISM. Finally we show how a different approach, based on semi-analytical calculations, can be used to predict the CO flux function at z > 6.

  19. IMPLEMENTATION OF AN URBAN CANOPY PARAMETERIZATION IN MM5 FOR MESO-GAMMA-SCALE AIR QUALITY MODELING APPLICATIONS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (U.S. EPA) is extending its Models-3/Community Multiscale Air Quality (CMAQ) Modeling System to provide detailed gridded air quality concentration fields and sub-grid variability characterization at neighborhood scales and in urban areas...

  20. Implementation of control point form of algebraic grid-generation technique

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Miller, David P.; Reno, Charles J.

    1991-01-01

    The control point form (CPF) provides explicit control of physical grid shape and grid spacing through the movement of the control points. The control point array, called a control net, is a space-grid-type arrangement of locations in physical space with an index for each direction. As an algebraic method, CPF is efficient and works well with interactive computer graphics. A family of menu-driven, interactive grid-generation computer codes (TURBO) is being developed using CPF. Key features of TurboI (a TURBO member) are discussed and typical results are presented. TurboI runs on any IRIS 4D series workstation.

  1. Ion mobility spectrometer with virtual aperture grid

    DOEpatents

    Pfeifer, Kent B.; Rumpf, Arthur N.

    2010-11-23

    An ion mobility spectrometer does not require a physical aperture grid to prevent premature ion detector response. The last electrodes adjacent to the ion collector (typically the last four or five) have an electrode pitch that is less than the width of the ion swarm and each of the adjacent electrodes is connected to a source of free charge, thereby providing a virtual aperture grid at the end of the drift region that shields the ion collector from the mirror current of the approaching ion swarm. The virtual aperture grid is less complex in assembly and function and is less sensitive to vibrations than the physical aperture grid.

  2. G_2-MSSM: An M theory motivated model of particle physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acharya, Bobby S.; Bobkov, Konstantin; Kane, Gordon L.

    2008-09-15

    We continue our study of the low energy implications of M theory vacua on G_2-manifolds, undertaken in B. S. Acharya, K. Bobkov, G. L. Kane, P. Kumar, and J. Shao, Phys. Rev. D 76, 126010 (2007); B. Acharya, K. Bobkov, G. Kane, P. Kumar, and D. Vaman, Phys. Rev. Lett. 97, 191601 (2006), where it was shown that the moduli can be stabilized and a TeV scale generated, with the Planck scale as the only dimensionful input. A well-motivated phenomenological model, the G_2-MSSM, can be naturally defined within the above framework. In this paper, we study some of the important phenomenological features of the G_2-MSSM. In particular, the soft supersymmetry breaking parameters and the superpartner spectrum are computed. The G_2-MSSM generically gives rise to light gauginos and heavy scalars, with a wino lightest supersymmetric particle, when one tunes the cosmological constant. Electroweak symmetry breaking is present but fine-tuned. The G_2-MSSM is also naturally consistent with precision gauge coupling unification. The phenomenological consequences of the G_2-MSSM for cosmology and collider physics will be reported in more detail soon.

  3. KEPLER ECLIPSING BINARY STARS. I. CATALOG AND PRINCIPAL CHARACTERIZATION OF 1879 ECLIPSING BINARIES IN THE FIRST DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prsa, Andrej; Engle, Scott G.; Conroy, Kyle

    2011-03-15

    The Kepler space mission is devoted to finding Earth-size planets orbiting other stars in their habitable zones. Its large, 105 deg² field of view features over 156,000 stars that are observed continuously to detect and characterize planet transits. Yet, this high-precision instrument holds great promise for other types of objects as well. Here we present a comprehensive catalog of eclipsing binary stars observed by Kepler in the first 44 days of operation, the data being publicly available through MAST as of 2010 June 15. The catalog contains 1879 unique objects. For each object, we provide its Kepler ID (KID), ephemeris (BJD_0, P_0), morphology type, physical parameters (T_eff, log g, E(B−V)), the estimate of third-light contamination (crowding), and principal parameters (T_2/T_1, q, fillout factor, and sin i for overcontacts, and T_2/T_1, (R_1 + R_2)/a, e sin ω, e cos ω, and sin i for detached binaries). We present statistics based on the determined periods and measure the average occurrence rate of eclipsing binaries to be ≈1.2% across the Kepler field. We further discuss the distribution of binaries as a function of galactic latitude and thoroughly explain the application of artificial intelligence to obtain principal parameters in a matter of seconds for the whole sample. The catalog was envisioned to serve as a bridge between the now public Kepler data and the scientific community interested in eclipsing binary stars.

  4. Skill assessment of the coupled physical-biogeochemical operational Mediterranean Forecasting System

    NASA Astrophysics Data System (ADS)

    Cossarini, Gianpiero; Clementi, Emanuela; Salon, Stefano; Grandi, Alessandro; Bolzon, Giorgio; Solidoro, Cosimo

    2016-04-01

    The Mediterranean Monitoring and Forecasting Centre (Med-MFC) is one of the regional production centres of the European Marine Environment Monitoring Service (CMEMS-Copernicus). Med-MFC operatively manages a suite of numerical model systems (3DVAR-NEMO-WW3 and 3DVAR-OGSTM-BFM) that provides gridded datasets of physical and biogeochemical variables for the Mediterranean marine environment with a horizontal resolution of about 6.5 km. At the present stage, the operational Med-MFC produces ten-day forecasts: daily for physical parameters and bi-weekly for biogeochemical variables. The validation of the coupled model system and the estimate of the accuracy of model products are key issues to ensure reliable information for the users and the downstream services. Product quality activities at Med-MFC consist of two levels of validation and skill analysis procedures. Pre-operational qualification activities focus on testing the improvement in quality of a new release of the model system and rely on past simulations and historical data. Near-real-time (NRT) validation activities then aim at the routine, online skill assessment of the model forecasts and rely on the NRT available observations. The Med-MFC validation framework uses both independent data (i.e., Bio-Argo float data; in-situ mooring and vessel data of oxygen, nutrients and chlorophyll; moored buoys, tide gauges and ADCP measurements of temperature, salinity, sea level and velocity) and semi-independent data (i.e., data already used for assimilation, such as satellite chlorophyll, satellite SLA and SST, and in-situ vertical profiles of temperature and salinity from XBT, Argo and gliders). We give evidence that different variables (e.g., CMEMS products) can be validated at different levels (i.e., at the forecast level or at the level of model consistency) and at different spatial and temporal scales. The fundamental physical parameters temperature, salinity and sea level are routinely validated on a daily, weekly and quarterly basis at regional and sub-regional scale and along specific vertical layers (temperature and salinity), while velocity fields are validated daily against in-situ coastal moorings. Since the velocity skill cannot be accurately assessed through coastal measurements due to the present model horizontal resolution (~6.5 km), new validation metrics and procedures are under investigation. Chlorophyll is the only biogeochemical variable that can be validated routinely at the temporal and spatial scale of the weekly forecast, while nutrient and oxygen predictions can be validated locally or at sub-basin and seasonal scales. For the other biogeochemical variables (i.e., primary production, carbonate system variables) only the accuracy of the average dynamics and the model consistency can be evaluated. Finally, we discuss the limiting factors of the present validation framework, and the quality and extension of the observing system that would be needed to improve the reliability of the physical and biogeochemical Mediterranean forecast services.
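
    The basic NRT skill statistics can be illustrated with a short sketch (not the Med-MFC operational code): bias and RMSE of co-located model and observed values, computed per variable, layer or sub-region.

    ```python
    import numpy as np

    def skill_scores(model, obs):
        """model, obs: 1-D arrays of co-located values (one variable, one layer/sub-region)."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        ok = np.isfinite(model) & np.isfinite(obs)
        diff = model[ok] - obs[ok]
        return {"bias": float(diff.mean()),
                "rmse": float(np.sqrt((diff**2).mean())),
                "n": int(ok.sum())}
    ```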

  5. AN ANALYSIS OF THE PULSATING STAR SDSS J160043.6+074802.9 USING NEW NON-LTE MODEL ATMOSPHERES AND SPECTRA FOR HOT O SUBDWARFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latour, M.; Fontaine, G.; Brassard, P.

    2011-06-01

    We first present our new grids of model atmospheres and spectra for hot subdwarf O (sdO) stars: standard non-LTE (NLTE) H+He models with no metals, NLTE line-blanketed models with C+N+O, and NLTE line-blanketed models with C+N+O+Fe. Using hydrogen and helium lines in the optical range, we make detailed comparisons between theoretical spectra of different grids in order to characterize the line-blanketing effects of metals. We find these effects to be dependent on both the effective temperature and the surface gravity. Moreover, we find that the helium abundance also influences in an important way the effects of line blanketing on the resulting spectra. We further find that the addition of Fe (solar abundance) leads only to incremental effects on the atmospheric structure as compared with the case where the metallicity is defined by C+N+O (solar abundances). We use our grids to perform fits on a 9 Å resolution, high signal-to-noise ratio (≈300 blueward of 5000 Å) optical spectrum of SDSS J160043.6+074802.9, the only known pulsating sdO star. Our best and most reliable result is based on the fit achieved with NLTE synthetic spectra that include C, N, O, and Fe in solar abundances, leading to the following parameters: T_eff = 68,500 ± 1770 K, log g = 6.09 ± 0.07, and log N(He)/N(H) = −0.64 ± 0.05 (formal fitting errors only). This combination of parameters, particularly the comparatively high helium abundance, implies that line-blanketing effects due to metals are not very large in the atmosphere of this sdO star.

  6. Electromagnetic sensing for deterministic finishing gridded domes

    NASA Astrophysics Data System (ADS)

    Galbraith, Stephen L.

    2013-06-01

    Electromagnetic sensing is a promising technology for precisely locating conductive grid structures that are buried in optical ceramic domes. Burying grid structures directly in the ceramic makes gridded dome construction easier, but a practical sensing technology is required to locate the grid relative to the dome surfaces. This paper presents a novel approach being developed for locating mesh grids that are physically thin (on the order of a mil), curved, and 75% to 90% open space. Non-contact location sensing takes place over a distance of 1/2 inch. A non-contact approach was required because the presence of the ceramic material precludes touching the grid with a measurement tool. Furthermore, the ceramic, which may be opaque or transparent, is invisible to the sensing technology, which is advantageous for calibration. The paper first details the physical principles being exploited. Next, sensor impedance response is discussed for thin, open-mesh grids versus thick, solid, metal conductors. Finally, the technology approach is incorporated into a practical field tool for use in inspecting gridded domes.

  7. Introduction to wind energy systems

    NASA Astrophysics Data System (ADS)

    Wagner, H.-J.

    2017-07-01

    This article presents the basic concepts of wind energy and deals with the physics and mechanics of operation. It describes the conversion of wind energy into the rotation of a turbine and the critical parameters governing the efficiency of this conversion. It then presents an overview of the various parts and components of windmills. The connection to the electrical grid, the world status of wind energy use for electricity production, the cost situation, and research and development needs are further aspects that are also considered.

  8. Introduction to wind energy systems

    NASA Astrophysics Data System (ADS)

    Wagner, H.-J.

    2015-08-01

    This article presents the basic concepts of wind energy and deals with the physics and mechanics of operation. It describes the conversion of wind energy into the rotation of a turbine and the critical parameters governing the efficiency of this conversion. It then presents an overview of the various parts and components of windmills. The connection to the electrical grid, the world status of wind energy use for electricity production, the cost situation, and research and development needs are further aspects that are also considered.

  9. Synchronisation of chaos and its applications

    NASA Astrophysics Data System (ADS)

    Eroglu, Deniz; Lamb, Jeroen S. W.; Pereira, Tiago

    2017-07-01

    Dynamical networks are important models for the behaviour of complex systems, modelling physical, biological and societal systems, including the brain, food webs, epidemic disease in populations, power grids and many others. Such dynamical networks can exhibit behaviour in which deterministic chaos, exhibiting unpredictability and disorder, coexists with synchronisation, a classical paradigm of order. We survey the main theory behind complete, generalised and phase synchronisation phenomena in simple as well as complex networks and discuss applications to secure communications, parameter estimation and the anticipation of chaos.

  10. The Grid[Way] Job Template Manager, a tool for parameter sweeping

    NASA Astrophysics Data System (ADS)

    Lorca, Alejandro; Huedo, Eduardo; Llorente, Ignacio M.

    2011-04-01

    Parameter sweeping is a widely used algorithmic technique in computational science. It is especially suited for high-throughput computing since the jobs evaluating the parameter space are loosely coupled or independent. A tool that integrates the modeling of a parameter study with the control of jobs in a distributed architecture is presented. The main task is to facilitate the creation and deletion of job templates, which are the elements describing the jobs to be run. Extra functionality relies upon the GridWay Metascheduler, acting as the middleware layer for job submission and control. It supports features such as a multi-dimensional sweeping space, wildcarding of parameters, functional evaluation of ranges, value skipping and automatic indexation of job templates. The use of this tool increases the reliability of the parameter sweep study thanks to the systematic bookkeeping of job templates and their respective job statuses. Furthermore, it simplifies the porting of the target application to the grid, reducing the required amount of time and effort.
    Program summary:
    Program title: Grid[Way] Job Template Manager (version 1.0)
    Catalogue identifier: AEIE_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIE_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Apache license 2.0
    No. of lines in distributed program, including test data, etc.: 3545
    No. of bytes in distributed program, including test data, etc.: 126 879
    Distribution format: tar.gz
    Programming language: Perl 5.8.5 and above
    Computer: Any (tested on PC x86 and x86_64)
    Operating system: Unix, GNU/Linux (tested on Ubuntu 9.04, Scientific Linux 4.7, CentOS 5.4), Mac OS X (tested on Snow Leopard 10.6)
    RAM: 10 MB
    Classification: 6.5
    External routines: The GridWay Metascheduler [1]
    Nature of problem: To parameterize and manage an application running on a grid or cluster.
    Solution method: Generation of job templates as a cross product of the input parameter sets, together with management of the job template files, including job submission to the grid, control and information retrieval.
    Restrictions: The parameter sweep is limited by disk space during generation of the job templates. The wildcarding of parameters cannot be done in decreasing order. Job submission, control and information are delegated to the GridWay Metascheduler.
    Running time: From half a second for the simplest operation to a few minutes for thousands of exponential sampling parameters.
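
    The core of the tool, generating one job template per element of the cross product of the parameter sets, can be illustrated with a short sketch. The following is not the Grid[Way] Job Template Manager itself (which is written in Perl and delegates submission to GridWay); the parameter names and template keywords below are illustrative assumptions.

```python
# Hypothetical sketch of cross-product job-template generation; the keyword
# names (EXECUTABLE, ARGUMENTS, STDOUT_FILE) and parameter sets are assumptions,
# not the tool's actual template format.
from itertools import product

parameters = {
    "ENERGY": [10, 20, 40, 80],      # illustrative sweep values
    "SEED": [0, 1, 2],
    "MODEL": ["fast", "accurate"],
}

def write_templates(parameters, executable="run_case"):
    """Emit one job-template file per combination of parameter values."""
    names = sorted(parameters)
    for index, values in enumerate(product(*(parameters[n] for n in names))):
        args = " ".join(f"--{n.lower()}={v}" for n, v in zip(names, values))
        with open(f"job.{index}.jt", "w") as fh:
            fh.write(f"EXECUTABLE  = {executable}\n"
                     f"ARGUMENTS   = {args}\n"
                     f"STDOUT_FILE = out.{index}\n")

write_templates(parameters)   # 4 x 3 x 2 = 24 templates, job.0.jt ... job.23.jt
```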

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzhilavyan, L. Z., E-mail: dzhil@cpc.inr.ac.ru

    The cross section for the reaction {sup 115}In(γ, γ′){sup 115m}In was measured for photon energies in the range of E{sub γ} ≅ 4–46 MeV. The parameters of the peak in this cross section near the threshold for the reaction {sup 115}In(γ, n), (E{sub γ}){sub (γ,n)}{sup thr}, were refined. It is shown that, in the cross section for the reaction {sup 115}In(γ, γ′){sup 115m}In at E{sub γ} ∼ 27 MeV, there is no second peak for which δ{sub II}{sup int} would exceed about 0.2δ{sub I}{sup int} for the peak at E{sub γ} ∼ (E{sub γ}){sub (γ,n)}{sup thr}. The possibility of employing this reaction both in studying photonuclear reaction physics and in monitoring bremsstrahlung photons in gamma-activation studies was examined.

  12. Sub-Grid Modeling of Electrokinetic Effects in Micro Flows

    NASA Technical Reports Server (NTRS)

    Chen, C. P.

    2005-01-01

    Advances in micro-fabrication processes have generated tremendous interest in miniaturizing chemical and biomedical analyses into integrated microsystems (Lab-on-Chip devices). To successfully design and operate such microfluidic systems, it is essential to understand the fundamental fluid flow phenomena when channel sizes shrink to micron or even nano dimensions. One important phenomenon is the electrokinetic effect in micro/nano channels due to the existence of the electrical double layer (EDL) near a solid-liquid interface. Not only is the EDL responsible for electro-osmotic pumping when an electric field parallel to the surface is imposed, it also causes extra flow resistance (the electro-viscous effect) and flow anomalies (such as early transition from laminar to turbulent flow) observed in pressure-driven microchannel flows. Modeling and simulation of electrokinetic effects on micro flows pose a significant numerical challenge because the double layer (10 nm up to microns) is very thin compared to the channel width (which can be hundreds of microns). Since the typical thickness of the double layer is extremely small compared to the channel width, it would be computationally very costly to capture the velocity profile inside the double layer by placing a sufficient number of grid cells in the layer to resolve the velocity changes, especially in complex, 3-D geometries. Existing approaches using a "slip" wall velocity or an augmented double layer are difficult to use when the flow geometry is complicated, e.g. flow in a T-junction, X-junction, etc. In order to overcome the difficulties arising from those two approaches, we have developed a sub-grid integration method to properly account for the physics of the double layer. The integration approach can be used on simple or complicated flow geometries. Resolution of the double layer is not needed in this approach, and the effects of the double layer can still be accounted for. With this approach, the numerical grid size can be much larger than the thickness of the double layer. Presented in this report are a description of the approach, the methodology for implementation and several validation simulations for micro flows.
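
    A quick scale estimate illustrates why resolving the double layer on the flow grid is impractical. The sketch below computes the Debye length for a dilute symmetric electrolyte and compares it with a typical microchannel width and grid spacing; the concentration, channel width and cell count are illustrative assumptions, not values from the report.

```python
# Scale-separation sketch: the electrical double layer (Debye length) is orders
# of magnitude thinner than the channel and than a practical grid cell.
# The 1 mM electrolyte, 100 um channel and 50-cell grid are assumptions.
import math

eps0, epsr = 8.854e-12, 78.5          # vacuum permittivity, relative permittivity of water
kB, T      = 1.381e-23, 298.0         # Boltzmann constant, temperature [K]
e, NA      = 1.602e-19, 6.022e23      # elementary charge, Avogadro's number

def debye_length(conc_mol_per_L, z=1):
    """Debye length [m] for a symmetric z:z electrolyte."""
    n0 = conc_mol_per_L * 1e3 * NA                     # ion number density [1/m^3]
    return math.sqrt(epsr * eps0 * kB * T / (2.0 * n0 * (z * e) ** 2))

lam = debye_length(1e-3)                               # ~9.6 nm for 1 mM
channel_width = 100e-6                                 # 100 um channel
cell = channel_width / 50                              # 50 cells across the channel
print(f"Debye length   : {lam * 1e9:.1f} nm")
print(f"Grid cell size : {cell * 1e6:.1f} um")
print(f"Cells needed to resolve the EDL across the channel ~ {channel_width / lam:,.0f}")
```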

  13. Research on control strategy based on fuzzy PR for grid-connected inverter

    NASA Astrophysics Data System (ADS)

    Zhang, Qian; Guan, Weiguo; Miao, Wen

    2018-04-01

    With a traditional PI controller there is a steady-state error when tracking ac signals. To solve this problem, a control strategy combining a fuzzy PR controller with grid-voltage feed-forward is proposed. The fuzzy PR controller eliminates the steady-state error of the system and also adjusts the PR controller parameters in real time, which avoids the drawback of fixed parameters. The grid-voltage feed-forward control can ensure the quality of the current and improve the system's anti-interference ability when the grid voltage is distorted. Finally, simulation results show that the system can output grid current of good quality and also has good dynamic and steady-state performance.
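
    For illustration, the sketch below implements a plain (non-fuzzy) proportional-resonant controller of the kind referred to above: the resonant term provides very high gain at the grid frequency, so a sinusoidal reference is tracked without steady-state error. The gains, resonant cutoff and sample rate are illustrative assumptions, and the fuzzy on-line tuning of the PR parameters described in the paper is not reproduced.

```python
# Hedged sketch: discrete non-ideal PR controller
# G(s) = Kp + 2*Kr*wc*s / (s^2 + 2*wc*s + w0^2), discretized with Tustin's method.
import numpy as np
from scipy.signal import cont2discrete

f0, fs = 50.0, 10_000.0                    # grid frequency and sample rate [Hz]
w0, wc = 2 * np.pi * f0, 2 * np.pi * 5.0   # resonant / cutoff angular frequencies
Kp, Kr = 2.0, 200.0                        # fixed gains (the paper tunes these on-line)

num = [Kp, 2 * wc * (Kp + Kr), Kp * w0 ** 2]
den = [1.0, 2 * wc, w0 ** 2]
bz, az, _ = cont2discrete((num, den), dt=1.0 / fs, method="bilinear")
bz, az = np.ravel(bz), np.ravel(az)
bz, az = bz / az[0], az / az[0]            # normalize so the leading denominator term is 1

class PRController:
    """Direct-form-I realization of the discrete PR transfer function."""
    def __init__(self):
        self.e = [0.0, 0.0]                # past error samples
        self.u = [0.0, 0.0]                # past output samples
    def step(self, e):
        u = (bz[0] * e + bz[1] * self.e[0] + bz[2] * self.e[1]
             - az[1] * self.u[0] - az[2] * self.u[1])
        self.e = [e, self.e[0]]
        self.u = [u, self.u[0]]
        return u

# The controller output drives the inverter so the 50 Hz current error decays to zero.
ctrl = PRController()
print(ctrl.step(1.0))                      # controller response to a unit error sample
```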

  14. Photoionization Modeling of Oxygen K Absorption in the Interstellar Medium: The Chandra Grating Spectra of XTE J1817-330

    NASA Technical Reports Server (NTRS)

    Gatuzz, E.; Garcia, J.; Menodza, C.; Kallman, T. R.; Witthoeft, M.; Lohfink, A.; Bautista, M. A.; Palmeri, P.; Quinet, P.

    2013-01-01

    We present detailed analyses of oxygen K absorption in the interstellar medium (ISM) using four high-resolution Chandra spectra towards the X-ray low-mass binary XTE J1817-330. The 11-25 A broadband is described with a simple absorption model that takes into account the pileup effect and results in an estimate of the hydrogen column density. The oxygen K-edge region (21-25 A) is fitted with the physical warmabs model, which is based on a photoionization model grid generated with the XSTAR code with the most up-to-date atomic database. This approach allows a benchmark of the atomic data which involves wavelength shifts of both the K lines and photoionization cross sections in order to fit the observed spectra accurately. As a result we obtain: a column density of N(sub H) = 1.38 +/- 0.01 x 10(exp 21) cm(exp -2); an ionization parameter of log xi = -2.70 +/- 0.023; an oxygen abundance of A(sub O) = 0.689 (+0.015/-0.010); and ionization fractions of O I/O = 0.911, O II/O = 0.077, and O III/O = 0.012 that are in good agreement with previous studies. Since the oxygen abundance in warmabs is given relative to the solar standard of Grevesse and Sauval (1998), a rescaling with the revision by Asplund et al. (2009) yields A(sub O) = 0.952 (+0.020/-0.013), a value close to solar that reinforces the new standard. We identify several atomic absorption lines: K-alpha, K-beta, and K-gamma in O I and O II, and K-alpha in O III, O VI, and O VII, the last two probably residing in the neighborhood of the source rather than in the ISM. This is the first firm detection of oxygen K resonances with principal quantum numbers n greater than 2 associated with ISM cold absorption.

  15. Theoretical X-ray production cross sections at incident photon energies across L{sub i} (i=1-3) absorption edges of Br

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puri, Sanjiv

    The X-ray production (XRP) cross sections, σ{sub Lk} (k = l, η, α, β{sub 6}, β{sub 1}, β{sub 3}, β{sub 4}, β{sub 9,10}, γ{sub 1,5}, γ{sub 2,3}), have been evaluated at incident photon energies across the L{sub i} (i=1-3) absorption edge energies of {sub 35}Br using theoretical data sets of different physical parameters, namely, the L{sub i} (i=1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of photoionisation cross sections based on the relativistic Hartree-Fock-Slater (RHFS) model and the Dirac-Fock (DF) model, in order to highlight the importance of electron exchange effects at photon energies in the vicinity of the absorption edge energies.

  16. Large-eddy simulation of a backward facing step flow using a least-squares spectral element method

    NASA Technical Reports Server (NTRS)

    Chan, Daniel C.; Mittal, Rajat

    1996-01-01

    We report preliminary results obtained from the large-eddy simulation of a backward-facing step at a Reynolds number of 5100. The numerical platform is based on a high-order Legendre spectral element spatial discretization and a least-squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. The Smagorinsky model with Van Driest near-wall damping is used for sub-grid-scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method to numerical parameters before it is applied to complex engineering problems.
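
    As a reference for the sub-grid-scale closure named above, the sketch below evaluates the Smagorinsky eddy viscosity with Van Driest near-wall damping for a given resolved velocity-gradient tensor. The constants (Cs = 0.1, A+ = 25) and the single-cell example are illustrative assumptions; the paper's spectral-element implementation is not reproduced.

```python
# Hedged sketch of the Smagorinsky sub-grid-scale model with Van Driest damping:
# nu_t = (Cs * Delta * D(y+))^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
import numpy as np

def smagorinsky_nu_t(grad_u, delta, y_plus, Cs=0.1, A_plus=25.0):
    """Sub-grid eddy viscosity.

    grad_u : (..., 3, 3) array of resolved velocity gradients du_i/dx_j
    delta  : filter width (e.g. cube root of the cell volume)
    y_plus : wall distance in viscous units, used by the Van Driest factor
    """
    S = 0.5 * (grad_u + np.swapaxes(grad_u, -1, -2))      # strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S, axis=(-1, -2)))   # |S|
    damping = 1.0 - np.exp(-y_plus / A_plus)              # Van Driest near-wall damping
    return (Cs * delta * damping) ** 2 * S_mag

# Example: one cell with pure shear du/dy = 100 1/s, delta = 1 mm, y+ = 30
grad_u = np.zeros((3, 3))
grad_u[0, 1] = 100.0
print(smagorinsky_nu_t(grad_u, delta=1e-3, y_plus=30.0))  # eddy viscosity [m^2/s]
```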

  17. Near real-time traffic routing

    NASA Technical Reports Server (NTRS)

    Yang, Chaowei (Inventor); Xie, Jibo (Inventor); Zhou, Bin (Inventor); Cao, Ying (Inventor)

    2012-01-01

    A near real-time physical transportation network routing system comprising: a traffic simulation computing grid and a dynamic traffic routing service computing grid. The traffic simulator produces traffic network travel time predictions for a physical transportation network using a traffic simulation model and common input data. The physical transportation network is divided into multiple sections. Each section has a primary zone and a buffer zone. The traffic simulation computing grid includes multiple traffic simulation computing nodes. The common input data include static network characteristics, an origin-destination data table, dynamic traffic information data and historical traffic data. The dynamic traffic routing service computing grid includes multiple dynamic traffic routing computing nodes and generates traffic route(s) using the traffic network travel time predictions.

  18. Improving flood forecasting capability of physically based distributed hydrological models by parameter optimization

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Li, J.; Xu, H.

    2016-01-01

    Physically based distributed hydrological models (hereafter referred to as PBDHMs) divide the terrain of the whole catchment into a number of grid cells at fine resolution and assimilate different terrain data and precipitation to different cells. They are regarded as having the potential to improve catchment hydrological process simulation and prediction capability. Early on, physically based distributed hydrological models were assumed to derive model parameters directly from terrain properties, so that no parameter calibration would be needed. Unfortunately, the uncertainties associated with this derivation are very high, which has limited their application in flood forecasting, so parameter optimization may still be necessary. There are two main purposes for this study: the first is to propose a parameter optimization method for physically based distributed hydrological models in catchment flood forecasting using the particle swarm optimization (PSO) algorithm, to test its competence and to improve its performance; the second is to explore the possibility of improving the capability of physically based distributed hydrological models in catchment flood forecasting through parameter optimization. In this paper, based on the scalar concept, a general framework for parameter optimization of PBDHMs for catchment flood forecasting is first proposed that could be used for all PBDHMs. Then, with the Liuxihe model as the study model, which is a physically based distributed hydrological model proposed for catchment flood forecasting, an improved PSO algorithm is developed for the parameter optimization of the Liuxihe model in catchment flood forecasting. The improvements include adopting a linearly decreasing inertia weight strategy to change the inertia weight and an arccosine function strategy to adjust the acceleration coefficients. This method has been tested in two catchments of different sizes in southern China, and the results show that the improved PSO algorithm can be used for Liuxihe model parameter optimization effectively and can substantially improve the model's capability in catchment flood forecasting, thus showing that parameter optimization is necessary to improve the flood forecasting capability of physically based distributed hydrological models. It has also been found that the appropriate particle number and maximum evolution number of the PSO algorithm for Liuxihe model catchment flood forecasting are 20 and 30, respectively.
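
    To make the optimization strategy concrete, the sketch below shows a particle swarm optimizer with the linearly decreasing inertia weight mentioned above. The arccosine-shaped ramp used here for the acceleration coefficients is an assumed form for illustration (the paper's exact schedule is not reproduced), and the toy objective stands in for the model-versus-observation error of a real calibration.

```python
# Hedged PSO sketch: linearly decreasing inertia weight; assumed arccos-shaped
# ramp for the acceleration coefficients; illustrative objective and bounds.
import numpy as np

def pso(objective, bounds, n_particles=20, n_iter=30,
        w_max=0.9, w_min=0.4, c_start=2.5, c_end=0.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()

    for t in range(n_iter):
        w = w_max - (w_max - w_min) * t / (n_iter - 1)        # linear inertia decay
        s = np.arccos(2.0 * t / (n_iter - 1) - 1.0) / np.pi    # ramps 1 -> 0 (assumed form)
        c1 = c_end + (c_start - c_end) * s                     # cognitive: large -> small
        c2 = c_start - (c_start - c_end) * s                   # social:    small -> large
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Example: calibrate two hypothetical model parameters against a toy error surface.
best, err = pso(lambda p: (p[0] - 1.2) ** 2 + (p[1] + 0.7) ** 2, [(-5, 5), (-5, 5)])
print(best, err)
```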

  19. An extensive analysis of the triple W UMa type binary FI BOO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christopoulou, P.-E.; Papageorgiou, A.

    We present a detailed analysis of the interesting W UMa binary FI Boo in view of the spectroscopic signature of a third body through photometry, period variation, and a thorough investigation of solution uniqueness. We obtained new BVR{sub c}I{sub c} photometric data that, when combined with spectroscopic data, enable us to analyze the system FI Boo and determine its basic orbital and physical properties through PHOEBE, as well as the period variation by studying the times of the minima. This combined approach allows us to study the long-term period changes in the system for the first time in order to investigate the presence of a third body and to check extensively the solution uniqueness and the uncertainties of derived parameters. Our modeling indicates that FI Boo is a W-type moderate (f = 50.15% ± 8.10%) overcontact binary with component masses of M {sub h} = 0.40 ± 0.05 M {sub ☉} and M {sub c} = 1.07 ± 0.05 M {sub ☉}, temperatures of T {sub h} = 5746 ± 33 K and T {sub c} = 5420 ± 56 K, and a third body, which may play an important role in the formation and evolution. The results were tested by heuristic scanning and parameter kicking to provide the consistent and reliable set of parameters that was used to obtain the initial masses of the progenitors (1.71 ± 0.10 M {sub ☉} and 0.63 ± 0.01 M {sub ☉}, respectively). We also investigated the evolutionary status of massive components with several sets of widely used isochrones.

  20. 78 FR 22846 - Smart Grid Advisory Committee Meeting Cancellation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-17

    ... DEPARTMENT OF COMMERCE National Institute of Standards and Technology Smart Grid Advisory... Commerce. ACTION: Notice of meeting cancellation. SUMMARY: The meeting of the Smart Grid Advisory Committee... INFORMATION CONTACT: Mr. Cuong Nguyen, Smart Grid and Cyber-Physical Systems Program Office, National...

  1. Co-Simulation Platform For Characterizing Cyber Attacks in Cyber Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadi, Mohammad A. H.; Ali, Mohammad Hassan; Dasgupta, Dipankar

    The smart grid is a complex cyber-physical system containing a large number and variety of sources, devices, controllers and loads. The communication/information infrastructure is the backbone of the smart grid, through which the different grid components are connected with each other. Therefore, the drawbacks of information technology also become part of the smart grid. Further, the smart grid is vulnerable to grid-related disturbances. For such a dynamic system, disturbance and intrusion detection is a paramount issue. This paper presents a Simulink and OPNET based co-simulated test bed to carry out a cyber-intrusion in a cyber-network for modern power systems and the smart grid. The effect of the cyber intrusion on the physical power system is also presented. The IEEE 30-bus power system model is used to demonstrate the effectiveness of the simulated test bed. The experiments were performed by disturbing the circuit breakers' reclosing time through a cyber-attack in the cyber network. Different disturbance situations in the proposed test system are considered, and the results indicate the effectiveness of the proposed co-simulation scheme.

  2. Analyzing Effect of System Inertia on Grid Frequency Forecasting Using Two Stage Neuro-Fuzzy System

    NASA Astrophysics Data System (ADS)

    Chourey, Divyansh R.; Gupta, Himanshu; Kumar, Amit; Kumar, Jitesh; Kumar, Anand; Mishra, Anup

    2018-04-01

    Frequency forecasting is an important aspect of power system operation. The system frequency varies with load-generation imbalance, and frequency variation depends upon various parameters including system inertia. System inertia determines the rate of fall of frequency after a disturbance in the grid. However, the inertia of the system is usually not considered while forecasting the frequency of the power system during planning and operation, which leads to significant forecasting errors. In this paper, the effect of inertia on frequency forecasting is analysed for a particular grid system, and a parameter equivalent to system inertia is introduced. This parameter is used to forecast the frequency of a typical power grid for any instant of time. The system gives appreciable results with reduced error.
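
    The role of inertia can be made explicit with the swing equation: immediately after an imbalance, the rate of change of frequency is proportional to the imbalance and inversely proportional to the inertia constant. The numbers in the sketch below are illustrative, not taken from the grid studied in the paper.

```python
# Swing-equation sketch: initial rate of change of frequency (ROCOF) after a
# load-generation imbalance. H and delta_P are illustrative values.
f_nominal = 50.0     # nominal frequency [Hz]
H = 4.0              # aggregate inertia constant [s]
delta_P = 0.05       # 5 % load-generation imbalance, per unit of system base

rocof = -delta_P * f_nominal / (2.0 * H)   # initial df/dt [Hz/s]
print(f"Initial ROCOF ~ {rocof:.3f} Hz/s; halving H doubles the rate of fall.")
```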

  3. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  4. Effect of particle size distribution on the hydrodynamics of dense CFB risers

    NASA Astrophysics Data System (ADS)

    Bakshi, Akhilesh; Khanna, Samir; Venuturumilli, Raj; Altantzis, Christos; Ghoniem, Ahmed

    2015-11-01

    Circulating fluidized beds (CFBs) are favored in the energy and chemical industries due to their high efficiency. While accurate hydrodynamic modeling is essential for optimizing performance, most CFB riser simulations are performed assuming equally-sized solid particles, owing to limited computational resources. Even though this approach yields reasonable predictions, it neglects commonly observed experimental findings suggesting a strong effect of the particle size distribution (psd) on the hydrodynamics and chemical conversion. Thus, this study focuses on the inclusion of discrete particle sizes to represent the psd and its effect on fluidization via 2D numerical simulations. The particle sizes and corresponding mass fluxes are obtained using experimental data in a dense CFB riser, while the modeling framework is described in Bakshi et al. (2015). Simulations are conducted at two scales: (a) a fine grid to resolve heterogeneous structures and (b) a coarse grid using EMMS sub-grid modifications. Using suitable metrics which capture bed dynamics, this study provides insights into segregation and mixing of particles and highlights the need for improved sub-grid models.
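
    The idea of representing a continuous psd by a few discrete particle classes can be sketched as below: samples from an assumed lognormal psd are binned into size classes, and each class is assigned a mass-weighted representative diameter and mass fraction. The distribution parameters and class boundaries are illustrative assumptions; the paper derives its sizes and mass fluxes from experimental riser data.

```python
# Hedged sketch: discretizing an assumed lognormal psd into a few size classes.
import numpy as np

rng = np.random.default_rng(1)
diam = rng.lognormal(mean=np.log(150e-6), sigma=0.4, size=100_000)   # sampled psd [m]

edges = np.array([0.0, 100e-6, 200e-6, 300e-6, np.inf])              # class boundaries [m]
mass = diam ** 3                                                     # proportional to mass
for lo, hi in zip(edges[:-1], edges[1:]):
    in_class = (diam >= lo) & (diam < hi)
    d_repr = np.average(diam[in_class], weights=mass[in_class])      # mass-weighted diameter
    frac = mass[in_class].sum() / mass.sum()                         # class mass fraction
    print(f"class {lo * 1e6:6.0f}-{hi * 1e6:6.0f} um: "
          f"d = {d_repr * 1e6:6.1f} um, mass fraction = {frac:.2f}")
```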

  5. The Particle Physics Data Grid. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.

  6. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    NASA Astrophysics Data System (ADS)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESMs) average over considerable sub-grid heterogeneity in land surface properties and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimation of ET implies greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties and also neglect lateral transfer of water across heterogeneous landscapes at the global scale. We use a Budyko framework to express ET as a function of P and PET and to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at the global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate them into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for the existing free-drainage lower boundary condition in GLEAM and to quantify whether, and how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at the global scale.
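
    The direction of the bias follows from the concavity of the Budyko curve, as the toy example below illustrates: two sub-grid cells with anti-correlated P and PET yield a lower mean ET than a single cell driven by the averaged forcing. The curve form (classical Budyko) and the two cell values are illustrative assumptions, not the closure relations derived in the study.

```python
# Toy demonstration of the sub-grid averaging bias using the classical Budyko
# curve ET/P = sqrt[(PET/P) tanh(P/PET) (1 - exp(-PET/P))].
import numpy as np

def budyko_et(P, PET):
    x = PET / P                                   # aridity index
    return P * np.sqrt(x * np.tanh(1.0 / x) * (1.0 - np.exp(-x)))

P   = np.array([2000.0, 400.0])                   # mm/yr for two sub-grid cells
PET = np.array([500.0, 1400.0])                   # anti-correlated with P

et_fine   = budyko_et(P, PET).mean()              # average of the sub-grid ET values
et_coarse = budyko_et(P.mean(), PET.mean())       # ET of the averaged (coarse) cell
print(f"sub-grid mean ET : {et_fine:6.0f} mm/yr")
print(f"grid-average ET  : {et_coarse:6.0f} mm/yr  (overestimate)")
```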

  7. CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes

    NASA Technical Reports Server (NTRS)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-01-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  8. Forecasting Global Point Rainfall using ECMWF's Ensemble Forecasting System

    NASA Astrophysics Data System (ADS)

    Pillosu, Fatima; Hewson, Timothy; Zsoter, Ervin; Baugh, Calum

    2017-04-01

    ECMWF (the European Centre for Medium-range Weather Forecasts), in collaboration with the EFAS (European Flood Awareness System) and GLOFAS (GLObal Flood Awareness System) teams, has developed a new operational system that post-processes grid-box rainfall forecasts from its ensemble forecasting system to provide global probabilistic point-rainfall predictions. The project attains higher forecasting skill by applying an understanding of how different rainfall generation mechanisms lead to different degrees of sub-grid variability in rainfall totals. In turn this approach facilitates identification of cases in which very localized extreme totals are much more likely. The approach also aims to improve the rainfall input required in different hydro-meteorological applications. Flash flood forecasting, in particular in urban areas, is a good example. In flash flood scenarios precipitation is typically characterised by high spatial variability and response times are short. In this case, to move beyond radar-based nowcasting, the classical approach has been to use very high resolution hydro-meteorological models. Of course these models are valuable, but they can represent only very limited areas, may not be spatially accurate and may give reasonable results only for limited lead times. On the other hand, our method aims to use a very cost-effective approach to downscale global rainfall forecasts to the point scale. It needs only rainfall totals from standard global reporting stations and forecasts over a relatively short period to train it, and it can give good results even up to day 5. For these reasons we believe that this approach better satisfies user needs around the world. This presentation aims to describe two phases of the project. The first phase, already completed, is the implementation of this new system to provide 6- and 12-hourly point-rainfall accumulation probabilities. To do this we use a limited number of physically relevant global model parameters (i.e. convective precipitation ratio, speed of steering winds, CAPE - Convective Available Potential Energy - and solar radiation), alongside the rainfall forecasts themselves, to define the "weather types" that in turn define the expected sub-grid variability. The calibration and computational strategy intrinsic to the system will be illustrated. The quality of the global point-rainfall forecasts is also illustrated by analysing recent case studies in which extreme totals and a greatly elevated flash flood risk could be foreseen some days in advance, and especially by a longer-term verification based on retrospective global point-rainfall forecasts for 2016. The second phase, currently in development, focusses on the relationships with other relevant geographical aspects, for instance orography and coastlines. Preliminary results will be presented. These are promising but need further study to fully understand their impact on the spatial distribution of point-rainfall totals.

  9. Distributed approximating functional fit of the H{sub 3} {ital ab initio} potential-energy data of Liu and Siegbahn

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frishman, A.; Hoffman, D.K.; Kouri, D.J.

    1997-07-01

    We report a distributed approximating functional (DAF) fit of the ab initio potential-energy data of Liu [J. Chem. Phys. 58, 1925 (1973)] and Siegbahn and Liu [ibid. 68, 2457 (1978)]. The DAF-fit procedure is based on a variational principle, and is systematic and general. Only two adjustable parameters occur in the DAF, leading to a fit which is both accurate (to the level inherent in the input data; RMS error of 0.2765 kcal/mol) and smooth ("well-tempered," in DAF terminology). In addition, the LSTH surface of Truhlar and Horowitz based on this same data [J. Chem. Phys. 68, 2466 (1978)] is itself approximated using only the values of the LSTH surface on the same grid coordinate points as the ab initio data, and the same DAF parameters. The purpose of this exercise is to demonstrate that the DAF delivers a well-tempered approximation to a known function that closely mimics the true potential-energy surface. As is to be expected, since there is only roundoff error present in the LSTH input data, even more significant figures of fitting accuracy are obtained. The RMS error of the DAF fit, of the LSTH surface at the input points, is 0.0274 kcal/mol, and a smooth fit, accurate to better than 1 cm{sup -1}, can be obtained using more than 287 input data points. © 1997 American Institute of Physics.

  10. Global Sampling for Integrating Physics-Specific Subsystems and Quantifying Uncertainties of CO2 Geological Sequestration

    DOE PAGES

    Sun, Y.; Tong, C.; Trainor-Guitten, W. J.; ...

    2012-12-20

    The risk of CO2 leakage from a deep storage reservoir into a shallow aquifer through a fault is assessed and studied using physics-specific computer models. The hypothetical CO2 geological sequestration system is composed of three subsystems: a deep storage reservoir, a fault in caprock, and a shallow aquifer, which are modeled respectively by considering sub-domain-specific physics. Supercritical CO2 is injected into the reservoir subsystem with uncertain permeabilities of reservoir, caprock, and aquifer, uncertain fault location, and injection rate (as a decision variable). The simulated pressure and CO2/brine saturation are connected to the fault-leakage model as a boundary condition. CO2 and brine fluxes from the fault-leakage model at the fault outlet are then imposed in the aquifer model as a source term. Moreover, uncertainties are propagated from the deep reservoir model, to the fault-leakage model, and eventually to the geochemical model in the shallow aquifer, thus contributing to risk profiles. To quantify the uncertainties and assess leakage-relevant risk, we propose a global sampling-based method to allocate sub-dimensions of uncertain parameters to sub-models. The risk profiles are defined and related to CO2 plume development for pH value and total dissolved solids (TDS) below the EPA's Maximum Contaminant Levels (MCL) for drinking water quality. A global sensitivity analysis is conducted to select the most sensitive parameters to the risk profiles. The resulting uncertainty of pH- and TDS-defined aquifer volume, which is impacted by CO2 and brine leakage, mainly results from the uncertainty of fault permeability. Subsequently, high-resolution, reduced-order models of risk profiles are developed as functions of all the decision variables and uncertain parameters in all three subsystems.
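
    The sampling logic, though not the physics, can be sketched as below: uncertain parameters are drawn once, pushed through three chained stand-in sub-models (reservoir, fault, aquifer), and an exceedance probability and parameter sensitivity are accumulated. All closed-form "models", distributions and thresholds here are illustrative assumptions, not the paper's physics-specific simulators.

```python
# Toy Monte Carlo sketch of uncertainty propagation through chained sub-models.
import numpy as np

rng = np.random.default_rng(42)
N = 10_000
log_k_res   = rng.normal(-13.0, 0.5, N)     # reservoir permeability [log10 m^2], assumed
log_k_fault = rng.normal(-15.0, 1.0, N)     # fault permeability     [log10 m^2], assumed
log_k_aqu   = rng.normal(-12.0, 0.5, N)     # aquifer permeability   [log10 m^2], assumed

# Toy sub-models: reservoir overpressure -> fault leak rate -> impacted aquifer volume
pressure  = 5.0 + 2.0 * (log_k_res + 13.0)               # stand-in reservoir model [MPa]
leak_rate = 10 ** (log_k_fault + 15.0) * pressure         # stand-in fault-leakage model
volume    = leak_rate * 10 ** (0.3 * (log_k_aqu + 12.0))  # stand-in aquifer plume metric

risk = np.mean(volume > np.percentile(volume, 95))        # exceedance probability
corr = np.corrcoef(log_k_fault, volume)[0, 1]             # crude sensitivity indicator
print(f"P(exceed threshold) = {risk:.3f}; corr(fault permeability, impact) = {corr:.2f}")
```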

  11. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
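
    A minimal sketch of the kind of block-structured hierarchy PARAMESH maintains is given below: each block carries a small logically Cartesian mesh and refinement splits a block into four children (a quad-tree in 2-D). The refinement criterion and block size are illustrative assumptions; the actual library is Fortran 90 and also handles guard cells, load balancing and 3-D oct-trees.

```python
# Hypothetical 2-D quad-tree of mesh blocks, loosely mirroring the structure
# described above; not the PARAMESH API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    x0: float                      # lower-left corner, x
    y0: float                      # lower-left corner, y
    size: float                    # edge length of the block
    level: int                     # refinement level
    nxb: int = 8                   # cells per block edge (logically Cartesian mesh)
    children: List["Block"] = field(default_factory=list)

    def refine(self):
        """Split this block into four children at level + 1."""
        h = self.size / 2.0
        self.children = [Block(self.x0 + i * h, self.y0 + j * h, h,
                               self.level + 1, self.nxb)
                         for j in (0, 1) for i in (0, 1)]

def leaves(block):
    if not block.children:
        yield block
    else:
        for child in block.children:
            yield from leaves(child)

# Cover the unit square with one root block, then refine near the origin.
root = Block(0.0, 0.0, 1.0, level=0)
root.refine()
for blk in [b for b in leaves(root) if (b.x0, b.y0) == (0.0, 0.0)]:
    blk.refine()
print(sum(1 for _ in leaves(root)), "leaf blocks; finest cell size =",
      min(b.size for b in leaves(root)) / root.nxb)
```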

  12. Far-infrared bandpass filters from cross-shaped grids

    NASA Technical Reports Server (NTRS)

    Tomaselli, V. P.; Edewaard, D. C.; Gillan, P.; Moller, K. D.

    1981-01-01

    The optical transmission characteristics of electroformed metal grids with inductive and capacitive cross patterns have been investigated in the far-infrared spectral region. The transmission characteristics of one- and two-grid devices are represented by transmission line theory parameters. Results are used to suggest construction guidelines for two-grid bandpass filters.

  13. Report on Integration of Existing Grid Models for N-R HES Interaction Focused on Balancing Authorities for Sub-hour Penalties and Opportunities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McJunkin, Timothy; Epiney, Aaron; Rabiti, Cristian

    2017-06-01

    This report summarizes the effort in the Nuclear-Renewable Hybrid Energy System (N-R HES) project on the level 4 milestone, which considers integrating existing grid models, on shorter time intervals than those models typically resolve, into the Risk Analysis Virtual Environment (RAVEN) and Modelica [1] optimizations and economic analyses that have been the focus of the project to date.

  14. Potential biases in evapotranspiration estimates from Earth system models due to spatial heterogeneity and lateral moisture redistribution

    NASA Astrophysics Data System (ADS)

    Rouholahnejad, E.; Kirchner, J. W.

    2016-12-01

    Evapotranspiration (ET) is a key process in land-climate interactions and affects the dynamics of the atmosphere at local and regional scales. In estimating ET, most earth system models average over considerable sub-grid heterogeneity in land surface properties, precipitation (P), and potential evapotranspiration (PET). This spatial averaging could potentially bias ET estimates, due to the nonlinearities in the underlying relationships. In addition, most earth system models ignore lateral redistribution of water within and between grid cells, which could potentially alter both local and regional ET. Here we present a first attempt to quantify the effects of spatial heterogeneity and lateral redistribution on grid-cell-averaged ET as seen from the atmosphere over heterogeneous landscapes. Using a Budyko framework to express ET as a function of P and PET, we quantify how sub-grid heterogeneity affects average ET at the scale of typical earth system model grid cells. We show that averaging over sub-grid heterogeneity in P and PET, as typical earth system models do, leads to overestimates of average ET. We use a similar approach to quantify how lateral redistribution of water could affect average ET, as seen from the atmosphere. We show that where the aridity index P/PET increases with altitude, gravitationally driven lateral redistribution will increase average ET, implying that models that neglect lateral moisture redistribution will underestimate average ET. In contrast, where the aridity index P/PET decreases with altitude, gravitationally driven lateral redistribution will decrease average ET. This approach yields a simple conceptual framework and mathematical expressions for determining whether, and how much, spatial heterogeneity and lateral redistribution can affect regional ET fluxes as seen from the atmosphere. This analysis provides the basis for quantifying heterogeneity and redistribution effects on ET at regional and continental scales, which will be the focus of future work.

  15. Characterization of scatter in digital mammography from physical measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.

    2014-06-15

    Purpose: That scattered radiation negatively impacts the quality of medical radiographic imaging is well known. In mammography, even slight amounts of scatter reduce the high contrast required for subtle soft-tissue imaging. In current clinical mammography, image contrast is partially improved by use of an antiscatter grid. This form of scatter rejection comes with a sizeable dose penalty related to the concomitant elimination of valuable primary radiation. Digital mammography allows the use of image processing as a method of scatter correction that might avoid effects that negatively impact primary radiation, while potentially providing more contrast improvement than is currently possible with a grid. For this approach to be feasible, a detailed characterization of the scatter is needed. Previous research has modeled scatter as a constant background that serves as a DC bias across the imaging surface. The goal of this study was to provide a more substantive data set for characterizing the spatially-variant features of scatter radiation at the image detector of modern mammography units. Methods: This data set was acquired from a model of the radiation beam as a matrix of very narrow rays or pencil beams. As each pencil beam penetrates tissue, the pencil widens in a predictable manner due to the production of scatter. The resultant spreading of the pencil beam at the detector surface can be characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were calculated from measurements obtained using the beam stop method. Two digital mammography units were utilized, and the SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and presence or absence of a grid. These values were then used to generate general equations allowing the SF and MRE to be calculated for any combination of the above parameters. Results: With a grid, the SF ranged from a minimum of about 0.05 to a maximum of about 0.16, and the MRE ranged from about 3 to 13 mm. Without a grid, the SF ranged from a minimum of 0.25 to a maximum of 0.52, and the MRE ranged from about 20 to 45 mm. The SF with a grid demonstrated a mild dependence on target/filter combination and kV, whereas the SF without a grid was independent of these factors. The MRE demonstrated a complex relationship as a function of kV, with notable difference among target/filter combinations. The primary source of change in both the SF and MRE was phantom thickness. Conclusions: Because breast tissue varies spatially in physical density and elemental content, the effective thickness of breast tissue varies spatially across the imaging field, resulting in a spatially-variant scatter distribution in the imaging field. The data generated in this study can be used to characterize the scatter contribution on a point-by-point basis, for a variety of different techniques.
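
    One way such a characterization can be used is a convolution-based scatter estimate, sketched below: a normalized kernel whose mean radial extent equals the measured MRE spreads the image, and the scatter-to-primary ratio SF/(1-SF) scales it. The radially exponential kernel shape and the one-pass subtraction are illustrative assumptions; the paper itself reports only the SF and MRE measurements.

```python
# Hedged sketch: scatter estimation from a scatter fraction (SF) and mean radial
# extent (MRE). Kernel shape, pixel size and the grid-less example values
# (SF ~ 0.40, MRE ~ 30 mm) are assumptions within the ranges reported above.
import numpy as np
from scipy.signal import fftconvolve

def scatter_kernel(mre_mm, pixel_mm, half_width=3.0):
    """Normalized 2-D radially exponential kernel with mean radial extent mre_mm."""
    r0 = mre_mm / 2.0                            # mean radius of r*exp(-r/r0) is 2*r0
    n = int(half_width * mre_mm / pixel_mm)
    y, x = np.mgrid[-n:n + 1, -n:n + 1] * pixel_mm
    k = np.exp(-np.hypot(x, y) / r0)
    return k / k.sum()

def estimate_scatter(image, sf, mre_mm, pixel_mm=0.5):
    """Scatter image ~ (SF / (1 - SF)) * (image convolved with the spread kernel)."""
    spr = sf / (1.0 - sf)                        # scatter-to-primary ratio
    return spr * fftconvolve(image, scatter_kernel(mre_mm, pixel_mm), mode="same")

primary = np.zeros((400, 400))                   # toy 20 cm x 20 cm primary image
primary[100:300, 100:300] = 1.0
measured = primary + estimate_scatter(primary, sf=0.40, mre_mm=30.0)     # forward model
corrected = measured - estimate_scatter(measured, sf=0.40, mre_mm=30.0)  # one-pass correction
```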

  16. Weekly gridded Aquarius L-band radiometer/scatterometer observations and salinity retrievals over the polar regions - Part 2: Initial product analysis

    NASA Astrophysics Data System (ADS)

    Brucker, L.; Dinnat, E. P.; Koenig, L. S.

    2014-05-01

    Following the development and availability of Aquarius weekly polar-gridded products, this study presents the spatial and temporal radiometer and scatterometer observations at L band (frequency ~1.4 GHz) over the cryosphere, including the Greenland and Antarctic ice sheets, sea ice in both hemispheres, and sub-Arctic land for monitoring the soil freeze/thaw state. We provide multiple examples of scientific applications for the L-band data over the cryosphere. For example, we show that over the Greenland Ice Sheet, the unusual 2012 melt event led to a sustained decrease of ~5 K in L-band brightness temperature (TB) at horizontal polarization. Over the Antarctic ice sheet, normalized radar cross section (NRCS) observations recorded during ascending and descending orbits are significantly different, highlighting the anisotropy of the ice cover. Over sub-Arctic land, both passive and active observations show distinct values depending on the soil physical state (freeze/thaw). Aquarius sea surface salinity (SSS) retrievals in the polar waters are also presented. SSS variations could serve as an indicator of fresh water input to the ocean from the cryosphere; however, the presence of sea ice often contaminates the SSS retrievals, hindering the analysis. The weekly gridded Aquarius L-band products used are distributed by the U.S. National Snow and Ice Data Center at http://nsidc.org/data/aquarius/index.html, and show potential for cryospheric studies.

  17. Energy dynamics and current sheet structure in fluid and kinetic simulations of decaying magnetohydrodynamic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makwana, K. D., E-mail: kirit.makwana@gmx.com; Cattaneo, F.; Zhdankin, V.

    Simulations of decaying magnetohydrodynamic (MHD) turbulence are performed with a fluid and a kinetic code. The initial condition is an ensemble of long-wavelength, counter-propagating, shear-Alfvén waves, which interact and rapidly generate strong MHD turbulence. The total energy is conserved and the rate of turbulent energy decay is very similar in both codes, although the fluid code has numerical dissipation, whereas the kinetic code has kinetic dissipation. The inertial range power spectrum index is similar in both the codes. The fluid code shows a perpendicular wavenumber spectral slope of k{sub ⊥}{sup −1.3}. The kinetic code shows a spectral slope of k{sub ⊥}{sup −1.5} for smaller simulation domain, and k{sub ⊥}{sup −1.3} for larger domain. We estimate that collisionless damping mechanisms in the kinetic code can account for the dissipation of the observed nonlinear energy cascade. Current sheets are geometrically characterized. Their lengths and widths are in good agreement between the two codes. The length scales linearly with the driving scale of the turbulence. In the fluid code, their thickness is determined by the grid resolution as there is no explicit diffusivity. In the kinetic code, their thickness is very close to the skin-depth, irrespective of the grid resolution. This work shows that kinetic codes can reproduce the MHD inertial range dynamics at large scales, while at the same time capturing important kinetic physics at small scales.

  18. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak Poyneer (2003); Löfdahl (2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors Sjödahl (1994). These errors are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed by using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola Poyneer (2003); quadratic polynomial Löfdahl (2010); threshold center of gravity Bailey (2003); Gaussian Nobach & Honkanen (2005); and Pyramid Bailey (2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis study reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both the systematic and the RMS error reduction. To overcome the above problem, a new solution is proposed. In this solution, the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed using a sub-pixel level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The generation of these sub-pixel-grid-based region-of-interest images is achieved with bi-cubic interpolation. Correlation matching with a sub-pixel grid technique was previously reported in electronic speckle photography Sjödahl (1994). This technique is applied here to solar wavefront sensing. A large dynamic range and a better accuracy in the measurements are achieved with the combination of original-pixel-grid-based correlation matching over a large field of view and sub-pixel-interpolated image-grid-based correlation matching within a small field of view. The results revealed that the proposed method outperforms all the different peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction), when 5 times improved image sampling was used. This measurement is achieved at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increment of image sampling as a trade-off between the computational speed limitation and the desired sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
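
    The two-step idea above can be sketched as follows: a first cross-correlation on the native pixel grid gives the integer shift, and a second matching on bicubically up-sampled imagery refines it to sub-pixel precision. For brevity the sketch up-samples whole (small) test images rather than restricting the fine search to a 4 × 4-pixel neighbourhood as the method does, and the synthetic images and up-sampling factor are illustrative.

```python
# Hedged sketch: integer-pixel FFT cross-correlation followed by matching on a
# bicubically up-sampled grid to recover a fractional image shift.
import numpy as np
from scipy.ndimage import zoom, shift as nd_shift

def xcorr_peak(a, b):
    """Shift of b relative to a, from the circular FFT cross-correlation peak."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    iy, ix = np.unravel_index(np.argmax(c), c.shape)
    wrap = lambda i, n: i - n if i > n // 2 else i
    return wrap(iy, a.shape[0]), wrap(ix, a.shape[1])

def subpixel_shift(ref, img, upsample=5):
    dy, dx = xcorr_peak(ref, img)                        # step 1: native pixel grid
    ref_u = zoom(ref, upsample, order=3)                 # step 2: bicubic up-sampling
    img_u = zoom(img, upsample, order=3)
    dy_u, dx_u = xcorr_peak(ref_u, img_u)
    return dy_u / upsample, dx_u / upsample, (dy, dx)

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
img = nd_shift(ref, (1.4, -0.6), order=3, mode="wrap")   # known fractional shift
print(subpixel_shift(ref, img))                          # ~ (1.4, -0.6, (1, -1))
```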

  19. Jet formation in spallation of metal film from substrate under action of femtosecond laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inogamov, N. A., E-mail: nailinogamov@googlemail.com; Zhakhovskii, V. V.; Khokhlov, V. A.

    2015-01-15

    It is well known that during ablation by an ultrashort laser pulse, the main contribution to ablation of the substance is determined not by evaporation, but by the thermomechanical spallation of the substance. For identical metals and pulse parameters, the type of spallation is determined by film thickness d{sub f}. An important gauge is metal heating depth d{sub T} at the two-temperature stage, at which electron temperature is higher than ion temperature. We compare cases with d{sub f} < d{sub T} (thin film) and d{sub f} ≫ d{sub T} (bulk target). Radius R{sub L} of the spot of heating by an optical laser is the next (after d{sub f}) important geometrical parameter. The morphology of film bulging in cases where d{sub f} < d{sub T} on the substrate (blistering) changes upon a change in radius R{sub L} in the range from diffraction limit R{sub L} ∼ λ to high values of R{sub L} ≫ λ, where λ ∼ 1 μm is the wavelength of optical laser radiation. When d{sub f} < d{sub T}, R{sub L} ∼ λ, and F{sub abs} > F{sub m}, gold film deposited on the glass target acquires a cupola-shaped blister with a miniature frozen nanojet in the form of a tip on the circular top of the cupola (F{sub abs} and F{sub m} are the absorbed energy and the melting threshold of the film per unit surface area of the film). A new physical mechanism leading to the formation of the nanojet is proposed.

  20. Fast super-resolution estimation of DOA and DOD in bistatic MIMO Radar with off-grid targets

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Zhang, Yongshun; Zheng, Guimei; Feng, Cunqian; Tang, Jun

    2018-05-01

    In this paper, we focus on the problem of joint DOA and DOD estimation in bistatic MIMO radar using a sparse reconstruction method. Traditionally, the 2D parameter estimation problem is converted into a 1D parameter estimation problem by a Kronecker product, which enlarges the scale of the parameter estimation problem and brings more computational burden. Furthermore, it requires that the targets fall on the predefined grid. In this paper, a 2D off-grid model is built which can solve the grid mismatch problem of 2D parameter estimation. Then, in order to solve the joint 2D sparse reconstruction problem directly and efficiently, three fast joint sparse matrix reconstruction methods are proposed: the Joint-2D-OMP, Joint-2D-SL0 and Joint-2D-SOONE algorithms. Simulation results demonstrate that our methods can not only improve the 2D parameter estimation accuracy but also reduce the computational complexity compared with the traditional Kronecker compressed sensing method.
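
    For context, the sketch below shows the grid-based sparse-recovery baseline that such methods build on: orthogonal matching pursuit over a DOA dictionary for a single uniform linear array. The joint 2-D DOA/DOD dictionary, the off-grid correction and the three fast joint algorithms proposed in the paper are not reproduced; the array size, grid and targets are illustrative.

```python
# Hedged sketch: on-grid orthogonal matching pursuit (OMP) for DOA estimation
# with a half-wavelength-spaced uniform linear array.
import numpy as np

def steering_matrix(n_sensors, grid_deg):
    angles = np.deg2rad(np.asarray(grid_deg, dtype=float))
    k = np.arange(n_sensors)[:, None]
    return np.exp(1j * np.pi * k * np.sin(angles)[None, :])

def omp(A, y, n_targets):
    residual, support = y.copy(), []
    for _ in range(n_targets):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))  # best-matching atom
        x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)       # refit on support
        residual = y - A[:, support] @ x_ls
    return support

grid = np.arange(-60, 60.5, 0.5)                     # DOA dictionary [deg]
A = steering_matrix(10, grid)
true = [-20.0, 15.5]                                 # targets assumed to lie on the grid
y = sum(steering_matrix(10, [t])[:, 0] for t in true)
y = y + 0.05 * (np.random.default_rng(3).standard_normal(10)
                + 1j * np.random.default_rng(4).standard_normal(10))
print(sorted(float(grid[i]) for i in omp(A, y, 2)))  # ~ [-20.0, 15.5]
```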

  1. Glaucoma Diagnostic Capability of Global and Regional Measurements of Isolated Ganglion Cell Layer and Inner Plexiform Layer.

    PubMed

    Chien, Jason L; Ghassibi, Mark P; Patthanathamrongkasem, Thipnapa; Abumasmah, Ramiz; Rosman, Michael S; Skaat, Alon; Tello, Celso; Liebmann, Jeffrey M; Ritch, Robert; Park, Sung Chul

    2017-03-01

    To compare the glaucoma diagnostic capability of global/regional macular layer parameters in different-sized grids. Serial horizontal spectral-domain optical coherence tomography scans of the macula were obtained. Automated macular grids with diameters of 3, 3.45, and 6 mm were used. For each grid, 10 parameters (total volume; average thicknesses in 9 regions) were obtained for 5 layers: macular retinal nerve fiber layer (mRNFL), ganglion cell layer (GCL), inner plexiform layer (IPL), ganglion cell-inner plexiform layer (GCIPL; GCL+IPL), and ganglion cell complex (GCC; mRNFL+GCL+IPL). Sixty-nine normal eyes (69 subjects) and 87 glaucomatous eyes (87 patients) were included. For the total volume parameter, the areas under the receiver operating characteristic curve (AUCs) in the 6-mm grid were larger than the AUCs in the 3- and 3.45-mm grids for GCL, GCC, GCIPL, and mRNFL (all P<0.020). For the average thickness parameters, the best AUC in the 6-mm grid (T2 region for GCL, IPL, and GCIPL; I2 region for mRNFL and GCC) was greater than the best AUC in the 3-mm grid for GCL, GCC, and mRNFL (P<0.045). The AUC of GCL volume (0.920) was similar to those of GCC (0.920) and GCIPL (0.909) volume. The AUC of GCL T2 region thickness (0.942) was similar to those of GCC I2 region (0.942) and GCIPL T2 region (0.934) thickness. Isolated macular GCL appears to be as good as GCC and GCIPL in glaucoma diagnosis, while IPL does not. Larger macular grids may be better at detecting glaucoma. Each layer has a characteristic region with the best glaucoma diagnostic capability.
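
    The diagnostic-capability metric used throughout, the area under the ROC curve, can be computed as in the short sketch below. The thickness values are simulated stand-ins generated only to show the calculation, not study data.

```python
# Hedged sketch: AUC of a layer-thickness parameter for separating glaucomatous
# from normal eyes. The simulated thickness distributions are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
normal_um   = rng.normal(38.0, 3.0, 69)      # e.g. regional GCL thickness, normal eyes
glaucoma_um = rng.normal(31.0, 4.0, 87)      # thinner on average in glaucomatous eyes

thickness = np.concatenate([normal_um, glaucoma_um])
is_glaucoma = np.concatenate([np.zeros(69), np.ones(87)])
# Thinner layers indicate disease, so score by the negative thickness.
print(f"AUC = {roc_auc_score(is_glaucoma, -thickness):.3f}")
```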

  2. Supernova neutrinos and antineutrinos: ternary luminosity diagram and spectral split patterns

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fogli, Gianluigi; Marrone, Antonio; Tamborra, Irene

    2009-10-01

    In core-collapse supernovae, the ν{sub e} and ν-bar {sub e} species may experience collective flavor swaps to non-electron species ν{sub x}, within energy intervals limited by relatively sharp boundaries (''splits''). These phenomena appear to depend sensitively upon the initial energy spectra and luminosities. We investigate the effect of generic variations of the fractional luminosities (l{sub e}, l{sub ē}, l{sub x}) with respect to the usual ''energy equipartition'' case (1/6, 1/6, 1/6), within an early-time supernova scenario with fixed thermal spectra and total luminosity. We represent the constraint l{sub e}+l{sub ē}+4l{sub x} = 1 in a ternary diagram, which is explored via numerical experiments (in single-angle approximation) over an evenly-spaced grid of points. In inverted hierarchy, single splits arise in most cases, but an abrupt transition to double splits is observed for a few points surrounding the equipartition one. In normal hierarchy, collective effects turn out to be unobservable at all grid points but one, where single splits occur. Admissible deviations from equipartition may thus induce dramatic changes in the shape of supernova (anti)neutrino spectra. The observed patterns are interpreted in terms of initial flavor polarization vectors (defining boundaries for the single/double split transitions), lepton number conservation, and minimization of potential energy.
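
    A minimal sketch of how an evenly spaced grid of fractional luminosities obeying l{sub e}+l{sub ē}+4l{sub x} = 1 can be enumerated is given below; the step size is an arbitrary choice, and the sketch does not attempt the flavor-evolution calculation itself.

```python
import numpy as np

def ternary_grid(step=0.05):
    """Enumerate fractional luminosities (l_e, l_ebar, l_x) satisfying
    l_e + l_ebar + 4*l_x = 1 on an evenly spaced grid (all values >= 0)."""
    points = []
    for l_e in np.arange(0.0, 1.0 + 1e-9, step):
        for l_ebar in np.arange(0.0, 1.0 - l_e + 1e-9, step):
            l_x = (1.0 - l_e - l_ebar) / 4.0
            points.append((l_e, l_ebar, l_x))
    return np.array(points)

grid = ternary_grid()
# the 'equipartition' point (1/6, 1/6, 1/6) satisfies the constraint exactly
closest = grid[np.argmin(np.abs(grid - [1 / 6, 1 / 6, 1 / 6]).sum(axis=1))]
print(len(grid), "grid points; point closest to equipartition:", np.round(closest, 3))
```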

  3. Grid Connected Functionality

    DOE Data Explorer

    Baker, Kyri; Jin, Xin; Vaidynathan, Deepthi; Jones, Wesley; Christensen, Dane; Sparn, Bethany; Woods, Jason; Sorensen, Harry; Lunacek, Monte

    2016-08-04

    Dataset demonstrating the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations.

  4. Performance Summary of the 2006 Community Multiscale Air Quality (CMAQ) Simulation for the AQMEII Project: North American Application

    EPA Science Inventory

    The CMAQ modeling system has been used to simulate the CONUS using 12-km by 12-km horizontal grid spacing for the entire year of 2006 as part of the Air Quality Model Evaluation International initiative (AQMEII). The operational model performance for O3 and PM2.5...

  5. A 50 kilowatt distributed grid-connected photovoltaic generation system for the University of Wyoming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, B.H.; Muknahallipatna, S.; Cupal, J.J.

    The University of Wyoming (UW) campus is serving as the site for a 50 kilowatt solar photovoltaic (PV) system. Three sub-systems were sited and built on the UW campus in 1996. The first sub-system, a 10 kW roof-integrated system of PV roof tiles, is located on the roof of the Engineering building. The second sub-system, a 5 kW rack-mounted, ballasted PV system, is on a walkway roof of the Engineering building. The third sub-system is a 35 kW shade structure system located adjacent to the parking lot of the university's football stadium. The three sub-systems differ in their design strategy since each is being used for research and education at the university. Each sub-system, being located at some distance from the others, supplies a different part of the campus grid. Efforts continue at setting up a central monitoring system which will receive data remotely from all locations. A part of this monitoring system is complete. While the initial monitoring data show satisfactory performance, a number of reliability problems with PV modules and inverters have delayed full functionality of the system.

  6. Renewable Electricity Futures. Operational Analysis of the Western Interconnection at Very High Renewable Penetrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brinkman, Gregory

    2015-09-01

    The Renewable Electricity Futures Study (RE Futures)--an analysis of the costs and grid impacts of integrating large amounts of renewable electricity generation into the U.S. power system--examined renewable energy resources, technical issues regarding the integration of these resources into the grid, and the costs associated with high renewable penetration scenarios. These scenarios included up to 90% of annual generation from renewable sources, although most of the analysis was focused on 80% penetration scenarios. Hourly production cost modeling was performed to understand the operational impacts of high penetrations. One of the conclusions of RE Futures was that further work was necessary to understand whether the operation of the system was possible at sub-hourly time scales and during transient events. This study aimed to address part of this by modeling the operation of the power system at sub-hourly time scales using newer methodologies and updated data sets for transmission and generation infrastructure. The goal of this work was to perform a detailed, sub-hourly analysis of very high penetration scenarios for a single interconnection (the Western Interconnection). It focused on operational impacts, and it helps verify that the operational results from the capacity expansion models are useful. The primary conclusion of this study is that sub-hourly operation of the grid is possible with renewable generation levels between 80% and 90%.

  7. Delayed neutron spectral data for Hansen-Roach energy group structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, J.M.; Spriggs, G.D.

    A detailed knowledge of delayed neutron spectra is important in reactor physics. It not only allows for an accurate estimate of the effective delayed neutron fraction {beta}{sub eff} but also is essential for calculating important reactor kinetic parameters, such as effective group abundances and the ratio of {beta}{sub eff} to the prompt neutron generation time. Numerous measurements of delayed neutron spectra for various delayed neutron precursors have been performed and reported in the literature. However, for application in reactor physics calculations, these spectra are usually lumped into one of the traditional six groups of delayed neutrons in accordance with their half-lives. Subsequently, these six-group spectra are binned into energy intervals corresponding to the energy intervals of a chosen nuclear cross-section set. In this work, the authors present a set of delayed neutron spectra that were formulated specifically to match Keepin's six-group parameters and the 16-energy-group Hansen-Roach cross sections.
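
    The sketch below illustrates the binning step only: integrating a continuous group spectrum over coarse energy intervals and normalizing the result. The spectrum shape and group boundaries are placeholders, not Keepin's group spectra or the actual Hansen-Roach boundaries.

```python
import numpy as np

def bin_spectrum(chi, group_edges, n_sub=200):
    """Integrate a continuous delayed-neutron spectrum chi(E) over coarse energy
    groups (trapezoidal rule) and return normalized group fractions."""
    fractions = []
    for e_lo, e_hi in zip(group_edges[:-1], group_edges[1:]):
        e = np.linspace(e_lo, e_hi, n_sub)
        f = chi(e)
        fractions.append(np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(e)))
    fractions = np.array(fractions)
    return fractions / fractions.sum()

# placeholder Maxwellian-like spectrum and placeholder group edges in MeV;
# neither represents the actual Keepin group spectra or Hansen-Roach boundaries
chi = lambda e: e * np.exp(-e / 0.3)
edges = np.array([1e-3, 0.01, 0.1, 0.4, 1.0, 3.0])
print(np.round(bin_spectrum(chi, edges), 4))
```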

  8. Seasonal Variations of the Earth's Gravitational Field: An Analysis of Atmospheric Pressure, Ocean Tidal, and Surface Water Excitation

    NASA Technical Reports Server (NTRS)

    Dong, D.; Gross, R.S.; Dickey, J.

    1996-01-01

    Monthly mean gravitational field parameters (denoted here as C(sub even)) that represent linear combinations of the primarily even degree zonal spherical harmonic coefficients of the Earth's gravitational field have been recovered using LAGEOS I data and are compared with those derived from gridded global surface pressure data of the National Meteorological Center (NMC) spanning 1983-1992. The effects of equilibrium ocean tides and surface water variations are also considered. Atmospheric pressure and surface water fluctuations are shown to be the dominant cause of observed annual C(sub even) variations. Closure with observations is seen at the 1-sigma level when atmospheric pressure, ocean tide, and surface water effects are included. Equilibrium ocean tides are shown to be the main source of excitation at the semiannual period, with closure at the 1-sigma level seen when both atmospheric pressure and ocean tide effects are included. The inverted barometer (IB) case is shown to give the best agreement with the observation series. The potential of the observed C(sub even) variations for monitoring mass variations in the polar regions of the Earth and the effect of the land-ocean mask in the IB calculation are discussed.

  9. Examples of data assimilation in mesoscale models

    NASA Technical Reports Server (NTRS)

    Carr, Fred; Zack, John; Schmidt, Jerry; Snook, John; Benjamin, Stan; Stauffer, David

    1993-01-01

    The keynote address concerned the problem of physical initialization of mesoscale models. The classic purpose of physical or diabatic initialization is to reduce or eliminate the spin-up error caused by the lack, at the initial time, of the fully developed vertical circulations required to support regions of large rainfall rates. However, even if a model has no spin-up problem, imposition of observed moisture and heating rate information during assimilation can improve quantitative precipitation forecasts, especially early in the forecast. The two key issues in physical initialization are the choice of assimilating technique and the sources of hydrologic/hydrometeor data. Another example of data assimilation in mesoscale models was presented in a series of meso-beta scale model experiments with an 11 km version of the MASS model designed to investigate the sensitivity of convective initiation forced by thermally direct circulations resulting from differential surface heating to four-dimensional assimilation of surface and radar data. The results of these simulations underscore the need to accurately initialize and simulate grid and sub-grid scale clouds in meso-beta scale models. The status of the application of the CSU-RAMS mesoscale model by the NOAA Forecast Systems Lab for producing real-time forecasts with 10-60 km mesh resolutions over (4000 km)(exp 2) domains for use by the aviation community was reported. Either MAPS or LAPS model data are used to initialize the RAMS model on a 12-h cycle. The use of the MAPS (Mesoscale Analysis and Prediction System) model was discussed. Also discussed was the meso-beta-scale data assimilation using a triply-nested nonhydrostatic version of the MM5 model.

  10. ZASPE: A Code to Measure Stellar Atmospheric Parameters and their Covariance from Spectra

    NASA Astrophysics Data System (ADS)

    Brahm, Rafael; Jordán, Andrés; Hartman, Joel; Bakos, Gáspár

    2017-05-01

    We describe the Zonal Atmospheric Stellar Parameters Estimator (zaspe), a new algorithm, and its associated code, for determining precise stellar atmospheric parameters and their uncertainties from high-resolution echelle spectra of FGK-type stars. zaspe estimates stellar atmospheric parameters by comparing the observed spectrum against a grid of synthetic spectra only in the spectral zones most sensitive to changes in the atmospheric parameters. Realistic uncertainties in the parameters are computed from the data themselves, by taking into account the systematic mismatches between the observed spectrum and the best-fitting synthetic one. The covariances between the parameters are also estimated in the process. zaspe can in principle use any pre-calculated grid of synthetic spectra, but unbiased grids are required to obtain accurate parameters. We tested the performance of two existing libraries and concluded that neither is suitable for computing precise atmospheric parameters. We describe a process to synthesize a new library of synthetic spectra that was found to generate consistent results when compared with parameters obtained with different methods (interferometry, asteroseismology, equivalent widths).
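
    A minimal sketch of the underlying idea, comparing an observed spectrum with grid models only inside a mask of sensitive wavelength zones, is shown below; the mask, noise model and toy spectra are placeholders, and the sketch is not the zaspe implementation.

```python
import numpy as np

def masked_chi2(obs_flux, model_flux, sigma, mask):
    """Chi-square between observed and model spectra, restricted to the
    wavelength zones flagged as sensitive to the parameters."""
    r = (obs_flux[mask] - model_flux[mask]) / sigma[mask]
    return float(np.sum(r ** 2))

# toy setup: wavelength grid, one 'observed' spectrum, two grid models and a
# boolean mask marking the sensitive zones (all of these are placeholders)
wave = np.linspace(5000.0, 6000.0, 2000)
obs = 1.0 - 0.3 * np.exp(-0.5 * ((wave - 5500.0) / 0.8) ** 2)
models = {
    "model_A": obs + 0.001,
    "model_B": 1.0 - 0.3 * np.exp(-0.5 * ((wave - 5700.0) / 0.8) ** 2),
}
sigma = np.full_like(wave, 0.01)
mask = np.abs(wave - 5500.0) < 5.0
best = min(models, key=lambda name: masked_chi2(obs, models[name], sigma, mask))
print("best-matching grid model:", best)
```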

  11. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization, allowing the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
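
    The sketch below illustrates only the binning idea, grouping model-error realisations by the previous model state and summarizing them per bin; the synthetic data and binning choices are placeholders rather than the paper's constrained-minimization procedure.

```python
import numpy as np

def binned_error_stats(prev_state, model_error, n_bins=10):
    """Bin model-error realisations by the previous resolved model state and
    return per-bin mean and standard deviation of the error."""
    edges = np.linspace(prev_state.min(), prev_state.max(), n_bins + 1)
    which = np.clip(np.digitize(prev_state, edges) - 1, 0, n_bins - 1)
    means, stds = np.zeros(n_bins), np.zeros(n_bins)
    for b in range(n_bins):
        sel = which == b
        if np.any(sel):
            means[b], stds[b] = model_error[sel].mean(), model_error[sel].std()
    return edges, means, stds

# synthetic demonstration: the error depends nonlinearly on the previous state
rng = np.random.default_rng(1)
x_prev = rng.uniform(-5.0, 5.0, 5000)
err = 0.2 * np.sin(x_prev) + 0.05 * rng.standard_normal(5000)
edges, mu, sd = binned_error_stats(x_prev, err)
print(np.round(mu, 3))   # recovers the sin-shaped state dependence, bin by bin
```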

  12. Renormalization group estimates of transport coefficients in the advection of a passive scalar by incompressible turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Vahala, George

    1993-01-01

    The advection of a passive scalar by incompressible turbulence is considered using recursive renormalization group procedures in the differential sub-grid shell thickness limit. It is shown explicitly that the higher order nonlinearities induced by the recursive renormalization group procedure preserve Galilean invariance. Differential equations, valid for the entire resolvable wave number k range, are determined for the eddy viscosity and eddy diffusivity coefficients, and it is shown that higher order nonlinearities do not contribute as k goes to 0, but have an essential role as k goes to k(sub c), the cutoff wave number separating the resolvable scales from the sub-grid scales. The recursive renormalization transport coefficients and the associated eddy Prandtl number are in good agreement with the k-dependent transport coefficients derived from closure theories and experiments.

  13. Large-watershed flood simulation and forecasting based on different-resolution distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Li, J.

    2017-12-01

    Large-watershed flood simulation and forecasting is very important for the application of distributed hydrological models. Challenges include the effect of the model's spatial resolution and the model's performance and accuracy. To address the resolution effect, the distributed hydrological model (the Liuxihe model) was built at resolutions of 1000 m × 1000 m, 600 m × 600 m, 500 m × 500 m, 400 m × 400 m, and 200 m × 200 m, with the purpose of finding the best resolution for large-watershed flood simulation and forecasting. This study sets up a physically based distributed hydrological model for flood forecasting of the Liujiang River basin in south China. Terrain data (digital elevation model, DEM), soil type, and land use type were downloaded from freely available websites. The model parameters are optimized by using an improved Particle Swarm Optimization (PSO) algorithm; parameter optimization reduces the uncertainty that exists when model parameters are derived physically. Model resolutions from 200 m × 200 m to 1000 m × 1000 m are used for modeling floods in the Liujiang River basin with the Liuxihe model. The best spatial resolution for flood simulation and forecasting is 200 m × 200 m, and as the spatial resolution is coarsened, the model's performance and accuracy degrade. At a resolution of 1000 m × 1000 m the flood simulation and forecasting results are the worst, and the river channel network delineated at this resolution differs from the actual one. To keep the model at an acceptable performance, a minimum spatial resolution is needed. The suggested threshold spatial resolution for modeling floods in the Liujiang River basin is a 500 m × 500 m grid cell, but a 200 m × 200 m grid cell is recommended in this study to keep the model at its best performance.
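
    For illustration, a generic particle swarm optimizer (not the improved variant used in the study) is sketched below against a placeholder objective standing in for the flood-simulation error of a candidate parameter set.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over box-constrained parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))          # positions
    v = np.zeros_like(x)                                     # velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, float(pbest_f.min())

# placeholder objective standing in for the simulation error of a parameter set
objective = lambda p: float(np.sum((p - np.array([0.3, 1.2, 0.05])) ** 2))
bounds = np.array([[0.0, 1.0], [0.0, 2.0], [0.0, 0.2]])
best_params, best_error = pso(objective, bounds)
print(np.round(best_params, 3), best_error)
```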

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  15. Optical properties of metals: Infrared emissivity in the anomalous skin effect spectral region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Echániz, T.; Pérez-Sáez, R. B., E-mail: raul.perez@ehu.es; Tello, M. J.

    When the penetration depth of an electromagnetic wave in a metal is similar to the mean free path of the conduction electrons, the Drude classical theory is no longer satisfied and the skin effect becomes anomalous. Physical parameters of this theory for twelve metals were calculated and analyzed. The theory predicts an emissivity peak ε{sub peak} at room temperature in the mid-infrared for smooth surface metals that moves towards larger wavelengths as temperature decreases. Furthermore, the theory states that ε{sub peak} increases with the emission angle but its position, λ{sub peak}, is constant. Copper directional emissivity measurements as well as emissivity obtained using optical constants data confirm the predictions of the theory. Considering the relationship between the specularity parameter p and the sample roughness, it is concluded that p is not the simple parameter it is usually assumed to be. Quantitative comparison between experimental data and theoretical predictions shows that the specularity parameter can be equal to one for roughness values larger than those predicted. An exhaustive analysis of the experimental optical parameters shows signs of a reflectance broad peak in Cu, Al, Au, and Mo around the wavelength predicted by the theory for p = 1.

  16. High-resolution daily gridded data sets of air temperature and wind speed for Europe

    NASA Astrophysics Data System (ADS)

    Brinckmann, Sven; Krähenmann, Stefan; Bissolli, Peter

    2016-10-01

    New high-resolution data sets for near-surface daily air temperature (minimum, maximum and mean) and daily mean wind speed for Europe (the CORDEX domain) are provided for the period 2001-2010 for the purpose of regional model validation in the framework of DecReg, a sub-project of the German MiKlip project, which aims to develop decadal climate predictions. The main input data sources are SYNOP observations, partly supplemented by station data from the ECA&D data set (http://www.ecad.eu). These data are quality tested to eliminate erroneous data. By spatial interpolation of these station observations, grid data at a resolution of 0.044° (≈ 5 km) on a rotated grid with virtual North Pole at 39.25° N, 162° W are derived. For temperature interpolation a modified version of a regression kriging method developed by Krähenmann et al. (2011) is used. At first, predictor fields of altitude, continentality and zonal mean temperature are used for a regression applied to monthly station data. The residuals of the monthly regression and the deviations of the daily data from the monthly averages are interpolated using simple kriging in a second and third step. For wind speed a new method based on the concept used for temperature was developed, involving predictor fields of exposure, roughness length, coastal distance and ERA-Interim reanalysis wind speed at 850 hPa. Interpolation uncertainty is estimated by means of the kriging variance and regression uncertainties. Furthermore, to assess the quality of the final daily grid data, cross validation is performed. Variance explained by the regression ranges from 70 to 90 % for monthly temperature and from 50 to 60 % for monthly wind speed. The resulting RMSE for the final daily grid data amounts to 1-2 K for the daily temperature parameters and 1-1.5 m s-1 for daily mean wind speed (depending on season and parameter). The data sets presented in this article are published at doi:10.5676/DWD_CDC/DECREG0110v2.
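
    A much-simplified sketch of the two-step regression-kriging idea, a linear regression on predictor fields followed by simple kriging of the residuals with an assumed exponential covariance, is given below; the covariance parameters, station data and predictors are placeholders rather than the values used for the published data set.

```python
import numpy as np

def regression_kriging(coords, values, predictors, target_xy, target_pred,
                       sill=1.0, corr_len=50.0, nugget=1e-6):
    """Two-step interpolation: linear regression on predictor fields, then
    simple kriging of the residuals with an assumed exponential covariance."""
    # step 1: ordinary least-squares regression with an intercept
    X = np.column_stack([np.ones(len(values)), predictors])
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    resid = values - X @ beta
    # step 2: simple kriging of the regression residuals
    cov = lambda d: sill * np.exp(-d / corr_len)
    d_ss = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d_ts = np.linalg.norm(target_xy[:, None, :] - coords[None, :, :], axis=-1)
    K = cov(d_ss) + nugget * np.eye(len(coords))
    weights = np.linalg.solve(K, cov(d_ts).T)           # (n_stations, n_targets)
    trend = np.column_stack([np.ones(len(target_xy)), target_pred]) @ beta
    return trend + weights.T @ resid

# toy example: five 'stations' with one predictor (altitude), two target cells
coords = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
alt = np.array([[100.0], [200.0], [150.0], [300.0], [180.0]])
temp = 15.0 - 0.0065 * alt[:, 0] + np.array([0.1, -0.2, 0.0, 0.15, -0.05])
print(regression_kriging(coords, temp, alt,
                         np.array([[2.0, 3.0], [8.0, 7.0]]),
                         np.array([[120.0], [250.0]])))
```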

  17. A Theoretical Secure Enterprise Architecture for Multi Revenue Generating Smart Grid Sub Electric Infrastructure

    ERIC Educational Resources Information Center

    Chaudhry, Hina

    2013-01-01

    This study is a part of the smart grid initiative providing electric vehicle charging infrastructure. It is a refueling structure, an energy-generating photovoltaic system, and a charge point electric vehicle charging station. The system will utilize advanced design and technology allowing electricity to flow from the site's normal electric service…

  18. Arithmetic Data Cube as a Data Intensive Benchmark

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael A.; Shabano, Leonid

    2003-01-01

    Data movement across computational grids and across the memory hierarchy of individual grid machines is known to be a limiting factor for applications involving large data sets. In this paper we introduce the Data Cube Operator on an Arithmetic Data Set, which we call the Arithmetic Data Cube (ADC). We propose to use the ADC to benchmark grid capabilities to handle large distributed data sets. The ADC stresses all levels of grid memory by producing 2^d views of an Arithmetic Data Set of d-tuples described by a small number of parameters. We control the data intensity of the ADC by controlling the sizes of the views through the choice of the tuple parameters.
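
    The sketch below illustrates only the view-generation aspect, aggregating a measure over all 2^d attribute subsets of small d-tuples; the ADC's arithmetic data generator and parameter scheme are not reproduced here.

```python
import itertools
from collections import defaultdict

def data_cube_views(rows, d):
    """Aggregate (sum) the last element of each row over every subset of the
    first d attributes, producing the 2^d group-by views of a data cube."""
    views = {}
    for r in range(d + 1):
        for dims in itertools.combinations(range(d), r):
            agg = defaultdict(int)
            for *attrs, measure in rows:
                agg[tuple(attrs[i] for i in dims)] += measure
            views[dims] = dict(agg)
    return views

# toy data set of 3-attribute tuples with an integer measure in the last position
rows = [(1, 2, 3, 10), (1, 2, 4, 5), (2, 2, 3, 7)]
cube = data_cube_views(rows, d=3)
print(len(cube), "views; group-by on attributes (0, 1):", cube[(0, 1)])
```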

  19. Modeling DNP3 Traffic Characteristics of Field Devices in SCADA Systems of the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Huan; Cheng, Liang; Chuah, Mooi Choo

    In the generation, transmission, and distribution sectors of the smart grid, intelligence of field devices is realized by programmable logic controllers (PLCs). Many smart-grid subsystems are essentially cyber-physical energy systems (CPES): For instance, the power system process (i.e., the physical part) within a substation is monitored and controlled by a SCADA network with hosts running miscellaneous applications (i.e., the cyber part). To study the interactions between the cyber and physical components of a CPES, several co-simulation platforms have been proposed. However, the network simulators/emulators of these platforms do not include a detailed traffic model that takes into account the impacts of the execution model of PLCs on traffic characteristics. As a result, network traces generated by co-simulation only reveal the impacts of the physical process on the contents of the traffic generated by SCADA hosts, whereas the distinction between PLCs and computing nodes (e.g., a hardened computer running a process visualization application) has been overlooked. To generate realistic network traces using co-simulation for the design and evaluation of applications relying on accurate traffic profiles, it is necessary to establish a traffic model for PLCs. In this work, we propose a parameterized model for PLCs that can be incorporated into existing co-simulation platforms. We focus on the DNP3 subsystem of slave PLCs, which automates the processing of packets from the DNP3 master. To validate our approach, we extract model parameters from both the configuration and network traces of real PLCs. Simulated network traces are generated and compared against those from PLCs. Our evaluation shows that our proposed model captures the essential traffic characteristics of DNP3 slave PLCs, which can be used to extend existing co-simulation platforms and gain further insights into the behaviors of CPES.

  20. A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).

    NASA Astrophysics Data System (ADS)

    Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.

    2015-04-01

    Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc.). Grids of models are computed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids is published, and the results are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol, and containing many useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.

  1. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    NASA Astrophysics Data System (ADS)

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-01

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10^10 choices carved out of a functional space of almost 10^40 possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  2. Uniformity on the grid via a configuration framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Igor V Terekhov et al.

    2003-03-11

    As Grid permeates modern computing, Grid solutions continue to emerge and take shape. The actual Grid development projects continue to provide higher-level services that evolve in functionality and operate with application-level concepts which are often specific to the virtual organizations that use them. Physically, however, grids are comprised of sites whose resources are diverse and seldom project readily onto a grid's set of concepts. In practice, this also creates problems for site administrators who actually instantiate grid services. In this paper, we present a flexible, uniform framework to configure a grid site and its facilities, and otherwise describe the resources and services it offers. We start from a site configuration and instantiate services for resource advertisement, monitoring and data handling; we also apply our framework to hosting environment creation. We use our ideas in the Information Management part of the SAM-Grid project, a grid system which will deliver petabyte-scale data to hundreds of users. Our users are High Energy Physics experimenters who are scattered worldwide across dozens of institutions and always use facilities that are shared with other experiments as well as other grids. Our implementation represents information in the XML format and includes tools written in XQuery and XSLT.

  3. An algebraic homotopy method for generating quasi-three-dimensional grids for high-speed configurations

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A fast and versatile procedure for algebraically generating boundary-conforming computational grids for use with finite-volume Euler flow solvers is presented. A semi-analytic homotopic procedure is used to generate the grids. Grids generated in two-dimensional planes are stacked to produce quasi-three-dimensional grid systems. The body surface and outer boundary are described in terms of surface parameters. An interpolation scheme is used to blend between the body surface and the outer boundary in order to determine the field points. The method, albeit developed for analytically generated body geometries, is equally applicable to other classes of geometries. The method can be used for both internal and external flow configurations, the only constraint being that the body geometries be specified in two-dimensional cross-sections stationed along the longitudinal axis of the configuration. Techniques for controlling various grid parameters, e.g., clustering and orthogonality, are described. Techniques for treating problems arising in algebraic grid generation for geometries with sharp corners are addressed. A set of representative grid systems generated by this method is included. Results of flow computations using these grids are presented for validation of the effectiveness of the method.
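
    As a toy illustration of algebraic blending between a body contour and an outer boundary, the sketch below uses a simple one-parameter linear homotopy with a clustering exponent; the paper's semi-analytic homotopic procedure and its control techniques are more elaborate than this.

```python
import numpy as np

def blend_grid(inner, outer, n_radial=20, stretch=1.5):
    """Algebraically blend between a body contour and an outer boundary.
    'inner' and 'outer' are (n_points, 2) arrays sampled at matching
    circumferential parameter values; 'stretch' clusters points near the body."""
    s = np.linspace(0.0, 1.0, n_radial) ** stretch        # blend parameter with clustering
    # linear homotopy: each radial line interpolates between the two boundaries
    return (1.0 - s)[None, :, None] * inner[:, None, :] + s[None, :, None] * outer[:, None, :]

# toy cross-section: a unit-circle body inside a four-times-larger outer boundary
theta = np.linspace(0.0, 2.0 * np.pi, 73)
body = np.column_stack([np.cos(theta), np.sin(theta)])
grid = blend_grid(body, 4.0 * body)
print(grid.shape)   # (73 circumferential points, 20 radial points, 2 coordinates)
```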

  4. Methods, software and datasets to verify DVH calculations against analytical values: Twenty years late(r)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelms, Benjamin; Stambaugh, Cassandra; Hunt, Dylan

    2015-08-15

    Purpose: The authors designed data, methods, and metrics that can serve as a standard, independent of any software package, to evaluate dose-volume histogram (DVH) calculation accuracy and detect limitations. The authors use simple geometrical objects at different orientations combined with dose grids of varying spatial resolution with linear 1D dose gradients; when combined, ground truth DVH curves can be calculated analytically in closed form to serve as the absolute standards. Methods: DICOM RT structure sets containing a small sphere, cylinder, and cone were created programmatically with axial plane spacing varying from 0.2 to 3 mm. Cylinders and cones were modeled in two different orientations with respect to the IEC 1217 Y axis. The contours were designed to stringently but methodically test voxelation methods required for DVH. Synthetic RT dose files were generated with 1D linear dose gradient and with grid resolution varying from 0.4 to 3 mm. Two commercial DVH algorithms—PINNACLE (Philips Radiation Oncology Systems) and PlanIQ (Sun Nuclear Corp.)—were tested against analytical values using custom, noncommercial analysis software. In Test 1, axial contour spacing was constant at 0.2 mm while dose grid resolution varied. In Tests 2 and 3, the dose grid resolution was matched to varying subsampled axial contours with spacing of 1, 2, and 3 mm, and difference analysis and metrics were employed: (1) histograms of the accuracy of various DVH parameters (total volume, D{sub max}, D{sub min}, and doses to % volume: D99, D95, D5, D1, D0.03 cm{sup 3}) and (2) volume errors extracted along the DVH curves were generated and summarized in tabular and graphical forms. Results: In Test 1, PINNACLE produced 52 deviations (15%) while PlanIQ produced 5 (1.5%). In Test 2, PINNACLE and PlanIQ differed from analytical by >3% in 93 (36%) and 18 (7%) times, respectively. Excluding D{sub min} and D{sub max} as least clinically relevant would result in 32 (15%) vs 5 (2%) scored deviations for PINNACLE vs PlanIQ in Test 1, while Test 2 would yield 53 (25%) vs 17 (8%). In Test 3, statistical analyses of volume errors extracted continuously along the curves show PINNACLE to have more errors and higher variability (relative to PlanIQ), primarily due to PINNACLE’s lack of sufficient 3D grid supersampling. Another major driver for PINNACLE errors is an inconsistency in implementation of the “end-capping”; the additional volume resulting from expanding superior and inferior contours halfway to the next slice is included in the total volume calculation, but dose voxels in this expanded volume are excluded from the DVH. PlanIQ had fewer deviations, and most were associated with a rotated cylinder modeled by rectangular axial contours; for coarser axial spacing, the limited number of cross-sectional rectangles hinders the ability to render the true structure volume. Conclusions: The method is applicable to any DVH-calculating software capable of importing DICOM RT structure set and dose objects (the authors’ examples are available for download). It includes a collection of tests that probe the design of the DVH algorithm, measure its accuracy, and identify failure modes. Merits and applicability of each test are discussed.
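
    For the sphere case, a ground-truth cumulative DVH under a linear 1D dose gradient follows from the spherical-cap volume formula; the sketch below evaluates that closed form and checks it against a brute-force voxel estimate. The radius, dose offset and gradient are placeholder values, and this is not the authors' analysis software.

```python
import numpy as np

def analytic_dvh_sphere(R, D0, g, doses):
    """Fraction of a sphere's volume receiving at least each dose level, for a
    linear 1D dose field D(z) = D0 + g*z (g > 0) and a sphere centred at z = 0."""
    z_t = (np.asarray(doses, dtype=float) - D0) / g   # plane above which dose >= D
    h = np.clip(R - z_t, 0.0, 2.0 * R)                # spherical-cap height
    cap_volume = np.pi * h ** 2 * (3.0 * R - h) / 3.0
    return cap_volume / (4.0 / 3.0 * np.pi * R ** 3)

def voxel_dvh_sphere(R, D0, g, doses, dx=0.05):
    """Brute-force voxel estimate of the same cumulative DVH, for comparison."""
    ax = np.arange(-R, R + dx, dx)
    X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
    dose = D0 + g * Z[X ** 2 + Y ** 2 + Z ** 2 <= R ** 2]
    return np.array([(dose >= d).mean() for d in doses])

doses = np.linspace(0.0, 10.0, 6)
print(np.round(analytic_dvh_sphere(1.0, 5.0, 5.0, doses), 3))
print(np.round(voxel_dvh_sphere(1.0, 5.0, 5.0, doses), 3))
```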

  5. Determination of the Michel parameters and the {tau} neutrino helicity in {tau} decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CLEO Collaboration

    1997-11-01

    Using the CLEO II detector at the Cornell Electron Storage Ring operated at √s = 10.6 GeV, we have determined the Michel parameters ρ, ξ, and δ in τ{sup ∓} → l{sup ∓}ν ν-bar decay as well as the τ neutrino helicity parameter h{sub ν{sub τ}} in τ{sup ∓} → π{sup ∓}π{sup 0}ν decay. From a data sample of 3.02×10{sup 6} produced τ pairs we analyzed events of the topologies e{sup +}e{sup −} → τ{sup +}τ{sup −} → (l{sup ±}ν ν-bar)(π{sup ∓}π{sup 0}ν) and e{sup +}e{sup −} → τ{sup +}τ{sup −} → (π{sup ±}π{sup 0} ν-bar)(π{sup ∓}π{sup 0}ν). We obtain ρ = 0.747 ± 0.010 ± 0.006, ξ = 1.007 ± 0.040 ± 0.015, ξδ = 0.745 ± 0.026 ± 0.009, and h{sub ν{sub τ}} = −0.995 ± 0.010 ± 0.003, where we have used the previously determined sign of h{sub ν{sub τ}} [ARGUS Collaboration, H. Albrecht et al., Z. Phys. C 58, 61 (1993); Phys. Lett. B 349, 576 (1995)]. We also present the Michel parameters as determined from the electron and muon samples separately. All results are in agreement with the standard model V−A interaction. © 1997 The American Physical Society

  6. Flexible Energy Scheduling Tool for Integrating Variable Generation

    Science.gov Websites

    The Flexible Energy Scheduling Tool for Integrating Variable generation (FESTIV) couples commitment, security-constrained economic dispatch, and automatic generation control sub-models. Different sub-model resolutions and operating strategies can be explored. FESTIV produces not only economic metrics but also ...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metzler, Dominik; Oehrlein, Gottlieb S., E-mail: oehrlein@umd.edu; Li, Chen

    The need for atomic layer etching (ALE) is steadily increasing as smaller critical dimensions and pitches are required in device patterning. A flux-control based cyclic Ar/C{sub 4}F{sub 8} ALE based on steady-state Ar plasma in conjunction with periodic, precise C{sub 4}F{sub 8} injection and synchronized plasma-based low energy Ar{sup +} ion bombardment has been established for SiO{sub 2} [Metzler et al., J. Vac. Sci. Technol. A 32, 020603 (2014)]. In this work, the cyclic process is further characterized and extended to ALE of silicon under similar process conditions. The use of CHF{sub 3} as a precursor is examined and compared to C{sub 4}F{sub 8}. CHF{sub 3} is shown to enable selective SiO{sub 2}/Si etching using a fluorocarbon (FC) film build-up. Other critical process parameters investigated are the FC film thickness deposited per cycle, the ion energy, and the etch step length. Etching behavior and mechanisms are studied using in situ real-time ellipsometry and x-ray photoelectron spectroscopy. Silicon ALE shows less self-limitation than silicon oxide due to higher physical sputtering rates for the maximum ion energies used in this work, which ranged from 20 to 30 eV. The surface chemistry is found to contain fluorinated silicon oxide during the etching of silicon. Plasma parameters during ALE are studied using a Langmuir probe and establish the impact of precursor addition on plasma properties.

  8. Synchronization in complex oscillator networks and smart grids.

    PubMed

    Dörfler, Florian; Chertkov, Michael; Bullo, Francesco

    2013-02-05

    The emergence of synchronization in a network of coupled oscillators is a fascinating topic in various scientific disciplines. A widely adopted model of a coupled oscillator network is characterized by a population of heterogeneous phase oscillators, a graph describing the interaction among them, and diffusive and sinusoidal coupling. It is known that a strongly coupled and sufficiently homogeneous network synchronizes, but the exact threshold from incoherence to synchrony is unknown. Here, we present a unique, concise, and closed-form condition for synchronization of the fully nonlinear, nonequilibrium, and dynamic network. Our synchronization condition can be stated elegantly in terms of the network topology and parameters or equivalently in terms of an intuitive, linear, and static auxiliary system. Our results significantly improve upon the existing conditions advocated thus far, they are provably exact for various interesting network topologies and parameters; they are statistically correct for almost all networks; and they can be applied equally to synchronization phenomena arising in physics and biology as well as in engineered oscillator networks, such as electrical power networks. We illustrate the validity, the accuracy, and the practical applicability of our results in complex network scenarios and in smart grid applications.
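
    As a rough sketch of how such a topology-based test can be evaluated (the precise statement and proof are in the paper), the code below builds a weighted Laplacian, solves the auxiliary linear system theta = pinv(L) @ omega, and compares the largest phase difference across an edge with sin(gamma). The toy network and natural frequencies are arbitrary placeholders.

```python
import numpy as np

def sync_margin(edges, weights, omega):
    """Build the weighted Laplacian, solve theta = pinv(L) @ omega, and return
    the largest phase difference across any edge of the network."""
    n = len(omega)
    L = np.zeros((n, n))
    for (i, j), a in zip(edges, weights):
        L[i, i] += a
        L[j, j] += a
        L[i, j] -= a
        L[j, i] -= a
    theta = np.linalg.pinv(L) @ omega
    return max(abs(theta[i] - theta[j]) for i, j in edges)

# toy 4-node ring with heterogeneous natural frequencies summing to zero
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
weights = [1.0, 1.0, 1.0, 1.0]
omega = np.array([0.3, -0.1, 0.2, -0.4])
gamma = np.pi / 3
print("edge-wise margin:", sync_margin(edges, weights, omega),
      "vs sin(gamma):", np.sin(gamma))
```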

  9. Generation of unstructured grids and Euler solutions for complex geometries

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald; Parikh, Paresh; Salas, Manuel D.

    1989-01-01

    Algorithms are described for the generation and adaptation of unstructured grids in two and three dimensions, as well as Euler solvers for unstructured grids. The main purpose is to demonstrate how unstructured grids may be employed advantageously for the economic simulation of both geometrically as well as physically complex flow fields.

  10. Schnek: A C++ library for the development of parallel simulation codes on regular grids

    NASA Astrophysics Data System (ADS)

    Schmitz, Holger

    2018-05-01

    A large number of algorithms across the field of computational physics are formulated on grids with a regular topology. We present Schnek, a library that enables fast development of parallel simulations on regular grids. Schnek contains a number of easy-to-use modules that greatly reduce the amount of administrative code for large-scale simulation codes. The library provides an interface for reading simulation setup files with a hierarchical structure. The structure of the setup file is translated into a hierarchy of simulation modules that the developer can specify. The reader parses and evaluates mathematical expressions and initialises variables or grid data. This enables developers to write modular and flexible simulation codes with minimal effort. Regular grids of arbitrary dimension are defined as well as mechanisms for defining physical domain sizes, grid staggering, and ghost cells on these grids. Ghost cells can be exchanged between neighbouring processes using MPI with a simple interface. The grid data can easily be written into HDF5 files using serial or parallel I/O.

  11. ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities

    NASA Astrophysics Data System (ADS)

    Neggers, R.

    2014-12-01

    Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, are creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi-plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that thus resolving the underlying size-densities creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme. The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.

  12. Four-Nozzle Benchmark Wind Tunnel Model USA Code Solutions for Simulation of Multiple Rocket Base Flow Recirculation at 145,000 Feet Altitude

    NASA Technical Reports Server (NTRS)

    Dougherty, N. S.; Johnson, S. L.

    1993-01-01

    Multiple rocket exhaust plume interactions at high altitudes can produce base flow recirculation with attendant alteration of the base pressure coefficient and increased base heating. A search for a good wind tunnel benchmark problem to check grid clustering technique and turbulence modeling turned up the experiment done at AEDC in 1961 by Goethert and Matz on a 4.25-in. diameter domed missile base model with four rocket nozzles. This wind tunnel model with varied external bleed air flow for the base flow wake produced measured p/p(sub ref) at the center of the base as high as 3.3 due to plume flow recirculation back onto the base. At that time in 1961, relatively inexpensive experimentation with air at gamma = 1.4 and nozzle A(sub e)/A of 10.6 and theta(sub n) = 7.55 deg with P(sub c) = 155 psia simulated a LO2/LH2 rocket exhaust plume with gamma = 1.20, A(sub e)/A of 78 and P(sub c) about 1,000 psia. An array of base pressure taps on the aft dome gave a clear measurement of the plume recirculation effects at p(infinity) = 4.76 psfa corresponding to 145,000 ft altitude. Our CFD computations of the flow field with direct comparison of computed-versus-measured base pressure distribution (across the dome) provide detailed information on velocities and particle traces as well eddy viscosity in the base and nozzle region. The solution was obtained using a six-zone mesh with 284,000 grid points for one quadrant taking advantage of symmetry. Results are compared using a zero-equation algebraic and a one-equation pointwise R(sub t) turbulence model (work in progress). Good agreement with the experimental pressure data was obtained with both; and this benchmark showed the importance of: (1) proper grid clustering and (2) proper choice of turbulence modeling for rocket plume problems/recirculation at high altitude.

  13. FermiGrid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yocum, D.R.; Berman, E.; Canal, P.

    2007-05-01

    As one of the founding members of the Open Science Grid Consortium (OSG), Fermilab enables coherent access to its production resources through the Grid infrastructure system called FermiGrid. This system successfully provides for centrally managed grid services, opportunistic resource access, development of OSG Interfaces for Fermilab, and an interface to the Fermilab dCache system. FermiGrid supports virtual organizations (VOs) including high energy physics experiments (USCMS, MINOS, D0, CDF, ILC), astrophysics experiments (SDSS, Auger, DES), biology experiments (GADU, Nanohub) and educational activities.

  14. The NASTRAN user's manual (level 17.0)

    NASA Technical Reports Server (NTRS)

    1979-01-01

    NASTRAN embodies a lumped element approach, wherein the distributed physical properties of a structure are represented by a model consisting of a finite number of idealized substructures or elements that are interconnected at a finite number of grid points, to which loads are applied. All input and output data pertain to the idealized structural model. The general procedures for defining structural models are described and instructions are given for each of the bulk data cards and case control cards. Additional information on the case control cards and the use of parameters is included for each rigid format.

  15. Atmospheric and Fundamental Parameters of Stars in Hubble's Next Generation Spectral Library

    NASA Technical Reports Server (NTRS)

    Heap, Sally

    2010-01-01

    Hubble's Next Generation Spectral Library (NGSL) consists of R approximately 1000 spectra of 374 stars of assorted temperature, gravity, and metallicity. We are presently working to determine the atmospheric and fundamental parameters of the stars from the NGSL spectra themselves via full-spectrum fitting of model spectra to the observed (extinction-corrected) spectrum over the full wavelength range, 0.2-1.0 micron. We use two grids of model spectra for this purpose: the very low-resolution spectral grid from Castelli-Kurucz (2004) and the grid from MARCS (2008). Both the observed spectrum and the MARCS spectra are first degraded in resolution to match the very low resolution of the Castelli-Kurucz models, so that our fitting technique is the same for both model grids. We will present our preliminary results along with a comparison to those from the Sloan/Segue Stellar Parameter Pipeline, ELODIE, MILES, etc.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rugheimer, S.; Kaltenegger, L.; Segura, A.

    We model the atmospheres and spectra of Earth-like planets orbiting the entire grid of M dwarfs for active and inactive stellar models with T{sub eff} = 2300 K to T{sub eff} = 3800 K and for six observed MUSCLES M dwarfs with UV radiation data. We set the Earth-like planets at the 1 AU equivalent distance and show spectra from the visible to IR (0.4–20 μm) to compare detectability of features in different wavelength ranges with the James Webb Space Telescope and other future ground- and space-based missions to characterize exo-Earths. We focus on the effect of UV activity levels on detectable atmospheric features that indicate habitability on Earth, namely, H{sub 2}O, O{sub 3}, CH{sub 4}, N{sub 2}O, and CH{sub 3}Cl. To observe signatures of life—O{sub 2}/O{sub 3} in combination with reducing species like CH{sub 4}—we find that early and active M dwarfs are the best targets of the M star grid for future telescopes. The O{sub 2} spectral feature at 0.76 μm is increasingly difficult to detect in reflected light of later M dwarfs owing to low stellar flux in that wavelength region. N{sub 2}O, another biosignature detectable in the IR, builds up to observable concentrations in our planetary models around M dwarfs with low UV flux. CH{sub 3}Cl could become detectable, depending on the depth of the overlapping N{sub 2}O feature. We present a spectral database of Earth-like planets around cool stars for directly imaged planets as a framework for interpreting future light curves, direct imaging, and secondary eclipse measurements of the atmospheres of terrestrial planets in the habitable zone to design and assess future telescope capabilities.

  17. Method of Implementing Digital Phase-Locked Loops

    NASA Technical Reports Server (NTRS)

    Stephens, Scott A. (Inventor); Thomas, J. Brooks (Inventor)

    1997-01-01

    In a new formulation for digital phase-locked loops, loop-filter constants are determined from loop roots that can each be selectively placed in the s-plane on the basis of a new set of parameters, each with simple and direct physical meaning in terms of loop noise bandwidth, root-specific decay rate, and root-specific damping. Loops of first to fourth order are treated in the continuous-update approximation (B(sub L)T approaches 0) and in a discrete-update formulation with arbitrary B(sub L)T. Deficiencies of the continuous-update approximation in large-B(sub L)T applications are avoided in the new discrete-update formulation.

  18. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Hoversten, G.M.

    2011-09-15

    Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physical-based models, such as Gassmann’s equations and Archie’s law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is carried by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO{sub 2} monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  19. Ab initio studies of the structural and electronic properties of solid cubane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richardson, S.L.; Martins, J.L.

    1998-12-01

    In this paper, we report ab initio calculations of the structural and electronic properties of solid cubane (s-C{sub 8}H{sub 8}) in the local-density approximation. By using an ab initio constant-pressure extended molecular dynamics method with variable cell shape proposed by Wentzcovitch, Martins, and Price, we compute a lattice parameter a and a bond angle α for the rhombohedral Bravais lattice and compare them with experimental x-ray data. We obtain bond lengths for the mononuclear C{sub 8}H{sub 8} unit of basis atoms, as well as a density of states and heat of formation. © 1998 The American Physical Society

  20. TRMM Gridded Text Products

    NASA Technical Reports Server (NTRS)

    Stocker, Erich Franz

    2007-01-01

    NASA's Tropical Rainfall Measuring Mission (TRMM) has many products that contain instantaneous or gridded rain rates, often among many other parameters. However, because of their completeness, these products can often seem intimidating to users desiring only surface rain rates. For example, one of the gridded monthly products contains well over 200 parameters. It is clear that if only rain rates are desired, this many parameters might prove intimidating. In addition, for many good reasons these products are archived and currently distributed in HDF format. This also can be an inhibiting factor in using TRMM rain rates. To provide a simple format and isolate just the rain rates from the many other parameters, the TRMM project created a series of gridded products in ASCII text format. This paper describes the various text rain rate products produced. It provides detailed information about the parameters and how they are calculated. It also gives detailed format information. These products are used in a number of applications with the TRMM processing system. The products are produced from the swath instantaneous rain rates and contain information from the three major TRMM instruments: radar, radiometer, and combined. They are simple to use, human readable, and small for downloading.

  1. A Nonlinear Regression Model Estimating Single Source Concentrations of Primary and Secondarily Formed PM2.5

    EPA Science Inventory

    Various approaches and tools exist to estimate local and regional PM2.5 impacts from a single emissions source, ranging from simple screening techniques to Gaussian based dispersion models and complex grid-based Eulerian photochemical transport models. These approache...

  2. Magnetohydrodynamic Simulations of Black Hole Accretion Flows Using PATCHWORK, a Multi-Patch, multi-code approach

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; Noble, Scott; Shiokawa, Hotaka; Cheng, Roseanne; Campanelli, Manuela; Krolik, Julian H.

    2017-08-01

    A multi-patch approach to numerical simulations of black hole accretion flows allows one to robustly match numerical grid shape and equations solved to the natural structure of the physical system. For instance, a Cartesian gridded patch can be used to cover coordinate singularities on a spherical-polar grid, increasing computational efficiency and better capturing the physical system through natural symmetries. We will present early tests, initial applications, and first results from the new MHD implementation of the PATCHWORK framework.

  3. WRF nested large-eddy simulations of deep convection during SEAC4RS

    NASA Astrophysics Data System (ADS)

    Heath, Nicholas Kyle

    Deep convection is an important component of atmospheric circulations that affects many aspects of weather and climate. Therefore, improved understanding and realistic simulations of deep convection are critical to both operational and climate forecasts. Large-eddy simulations (LESs) often are used with observations to enhance understanding of convective processes. This study develops and evaluates a nested-LES method using the Weather Research and Forecasting (WRF) model. Our goal is to evaluate the extent to which the WRF nested-LES approach is useful for studying deep convection during a real-world case. The method was applied on 2 September 2013, a day of continental convection having a robust set of ground and airborne data available for evaluation. A three domain mesoscale WRF simulation is run first. Then, the finest mesoscale output (1.35 km grid length) is used to separately drive nested-LES domains with grid lengths of 450 and 150 m. Results reveal that the nested-LES approach reasonably simulates a broad spectrum of observations, from reflectivity distributions to vertical velocity profiles, during the study period. However, reducing the grid spacing does not necessarily improve results for our case, with the 450 m simulation outperforming the 150 m version. We find that simulated updrafts in the 150 m simulation are too narrow to overcome the negative effects of entrainment, thereby generating convection that is weaker than observed. Increasing the sub-grid mixing length in the 150 m simulation leads to deeper, more realistic convection, but comes at the expense of delaying the onset of the convection. Overall, results show that both the 450 m and 150 m simulations are influenced considerably by the choice of sub-grid mixing length used in the LES turbulence closure. Finally, the simulations and observations are used to study the processes forcing strong midlevel cloud-edge downdrafts that were observed on 2 September. Results suggest that these downdrafts are forced by evaporative cooling due to mixing near cloud edge and by vertical perturbation pressure gradient forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested-LES approach provides an effective method for studying deep convection for our real-world case. The method can be used to provide insight into physical processes that are important to understanding observations. The WRF nested-LES approach could be adapted for other case studies in which high-resolution observations are available for validation.

  4. Filtered sub-grid constitutive models for fluidized gas-particle flows constructed from 3-D simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Milioli, Fernando E.; Ozarkar, Shailesh

    2016-10-01

    The accuracy of fluidized-bed CFD predictions using the two-fluid model can be improved significantly, even when using coarse grids, by replacing the microscopic kinetic-theory-based closures with coarse-grained constitutive models. These coarse-grained constitutive relationships, called filtered models, account for the unresolved gas-particle structures (clusters and bubbles) via sub-grid corrections. Following the previous 2-D approaches of Igci et al. [AIChE J., 54(6), 1431-1448, 2008] and Milioli et al. [AIChE J., 59(9), 3265-3275, 2013], new filtered models are constructed from highly-resolved 3-D simulations of gas-particle flows. Although qualitatively similar to the older 2-D models, the new 3-D relationships exhibit noticeable quantitative and functional differences. In particular, the filtered stresses are strongly dependent on the gas-particle slip velocity. Closures for the filtered inter-phase drag, gas- and solids-phase pressures and viscosities are reported. A new model for solids stress anisotropy is also presented. These new filtered 3-D constitutive relationships are better suited to practical coarse-grid 3-D simulations of large, commercial-scale devices.

  5. Spectra of High-Ionization Seyfert 1 Galaxies: Implications for the Narrow-Line Region

    NASA Technical Reports Server (NTRS)

    Moore, David; Cohen, Ross D.; Marcy, Geoffrey W.

    1996-01-01

    We present line profiles and profile parameters for the Narrow-Line Regions (NLRs) of six Seyfert 1 galaxies with high-ionization lines: MCG 8-11-11, Mrk 79, Mrk 704, Mrk 841, NGC 4151, and NGC 5548. The sample was chosen primarily with the goal of obtaining high-quality [Fe VII] lambda6087 and, when possible, [Fe X] lambda6374 profiles to determine if these lines are more likely formed in a physically distinct 'coronal line region' or are formed throughout the NLR along with lines of lower critical density (n(sub cr)) and/or Ionization Potential (IP). We discuss correlations of velocity shift and width with n(sub cr) and IP. In some objects, lines of high IP and/or n(sub cr) are systematically broader than those of low IP/n(sub cr). Of particular interest, however, are objects that show no correlations of line width with either IP or n(sub cr). In these objects, lines of high and low IP/n(sub cr), are remarkably similar, which is difficult to reconcile with the classical picture of the NLR, in which lines of high and low IP/n(sub cr) are formed in physically distinct regions. We argue for similar spatial extents for the flux in lines with similar profiles. Here, as well as in a modeling-oriented companion paper, we develop further an idea suggested by Moore & Cohen that objects that do and do not show line width correlations with IP/n(sub cr) can both be explained in terms of a single NLR model with only a small difference in the cloud column density distinguishing the two types of object. Overall, our objects do not show correlations between the Full Width at Half-Maximum (FWHM) and IP and/or n(sub cr). The width must be defined by a parameter that is sensitive to extended profile wings in order for the correlations to result. We present models in which FWHM correlations with IP and/or n(sub cr) result only after simulating the lower spectral resolution used in previous observational studies. The models that simulate the higher spectral resolution of our observational study produce line width correlations only if the width is defined by a parameter that is more sensitive to extended profile wings than is the FWHM. Our sample of six objects is in effect augmented by incorporating the larger sample (16 objects) of Veilleux into some of our discussion. This paper focuses on new interpretations of NLR emission-line spectra and line profiles that stem directly from the observations. Paper 2 focuses on modeling and complements this paper by illustrating explicitly the effects that spatial variations in electron density, ionization parameter, and column density have on model profiles. By comparing model profiles with the observed profiles presented here, as well as with those presented by Veilleux, Paper 2 yields insight into how the electron density, ionization parameter, and column density likely vary throughout the NLR.

  6. Estimating Cosmic-Ray Spectral Parameters from Simulated Detector Responses with Detector Design Implications

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2001-01-01

    A simple power law model consisting of a single spectral index (alpha-1) is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at knee energy (E(sub k)) to a steeper spectral index alpha-2 > alpha-1 above E(sub k). The maximum likelihood procedure is developed for estimating these three spectral parameters of the broken power law energy spectrum from simulated detector responses. These estimates and their surrounding statistical uncertainty are being used to derive the requirements in energy resolution, calorimeter size, and energy response of a proposed sampling calorimeter for the Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS). This study thereby permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
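
    The broken power-law model and its maximum-likelihood fit can be illustrated with a short sketch. The snippet below is not the ACCESS analysis code; the energy range, parameter values, and sampling scheme are assumptions chosen only to show how the three spectral parameters (alpha-1, alpha-2, E(sub k)) might be recovered from simulated event energies.

    ```python
    # Hypothetical sketch: maximum-likelihood fit of a broken power-law spectrum,
    # N(E) ~ E^-a1 for E < Ek and ~ E^-a2 for E >= Ek, to simulated event energies.
    # Parameter values and the energy range are illustrative assumptions.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    E_MIN, E_MAX = 1e11, 1e15          # eV, assumed detector energy range

    def _pl_sample(n, a, lo, hi):
        """Inverse-CDF sampling from a single power law E^-a on [lo, hi]."""
        u = rng.random(n)
        p = 1.0 - a
        return (lo**p + u * (hi**p - lo**p)) ** (1.0 / p)

    def sample_broken_pl(n, a1, a2, ek):
        """Draw n energies from a continuous broken power law."""
        p1, p2 = 1.0 - a1, 1.0 - a2
        w1 = (ek**p1 - E_MIN**p1) / p1                    # integral of E^-a1 below Ek
        w2 = ek**(a2 - a1) * (E_MAX**p2 - ek**p2) / p2    # integral above Ek (continuity factor)
        n1 = rng.binomial(n, w1 / (w1 + w2))
        return np.concatenate([_pl_sample(n1, a1, E_MIN, ek),
                               _pl_sample(n - n1, a2, ek, E_MAX)])

    def nll(theta, energies):
        """Negative log-likelihood of the broken power-law density."""
        a1, a2, log_ek = theta
        ek = 10.0**log_ek
        p1, p2 = 1.0 - a1, 1.0 - a2
        norm = (ek**p1 - E_MIN**p1) / p1 + ek**(a2 - a1) * (E_MAX**p2 - ek**p2) / p2
        logf = np.where(energies < ek,
                        -a1 * np.log(energies),
                        (a2 - a1) * np.log(ek) - a2 * np.log(energies))
        return -(logf.sum() - energies.size * np.log(norm))

    true = (2.7, 3.1, 1e13)                               # alpha-1, alpha-2, knee energy
    events = sample_broken_pl(50_000, *true)
    fit = minimize(nll, x0=[2.5, 3.3, 12.5], args=(events,), method="Nelder-Mead")
    print("alpha-1, alpha-2, E_k:", fit.x[0], fit.x[1], 10**fit.x[2])
    ```

    Repeating such a fit over many simulated exposures gives the spread of the parameter estimates, which is the kind of information used to trade off energy resolution and calorimeter size.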

  7. Self-organization of pulsing and bursting in a CO{sub 2} laser with opto-electronic feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freire, Joana G.; Instituto de Altos Estudos da Paraíba, Rua Infante Dom Henrique 100-1801, 58039-150 João Pessoa; CELC, Departamento de Matemática, Universidade de Lisboa, 1649-003 Lisboa

    We report a detailed investigation of the stability of a CO{sub 2} laser with feedback as described by a six-dimensional rate-equations model which provides satisfactory agreement between numerical and experimental results. We focus on experimentally accessible parameters, like bias voltage, feedback gain, and the bandwidth of the feedback loop. The impact of decay rates and parameters controlling cavity losses are also investigated, as well as control planes which imply changes of the laser physical medium. For several parameter combinations, we report stability diagrams detailing how laser spiking and bursting is organized over extended intervals. Laser pulsations are shown to emerge organized in several hitherto unseen regular and irregular phases and to exhibit a much richer and complex range of behaviors than described thus far. A significant observation is that qualitatively similar organization of laser spiking and bursting can be obtained by tuning rather distinct control parameters, suggesting the existence of unexpected symmetries in the laser control space.

  8. Electric arc discharge damage to ion thruster grids

    NASA Technical Reports Server (NTRS)

    Beebe, D. D.; Nakanishi, S.; Finke, R. C.

    1974-01-01

    Arcs representative of those occurring between the grids of a mercury ion thruster were simulated. Parameters affecting an arc and the resulting damage were studied. The parameters investigated were arc energy, arc duration, and grid geometry. Arc attenuation techniques were also investigated. Potentially serious damage occurred at all energy levels representative of actual thruster operating conditions. Of the grids tested, the lowest open-area configuration sustained the least damage for given conditions. At a fixed energy level a long duration discharge caused greater damage than a short discharge. Attenuation of arc current using various impedances proved to be effective in reducing arc damage. Faults were also deliberately caused using chips of sputtered materials formed during the operation of an actual thruster. These faults were cleared with no serious grid damage resulting using the principles and methods developed in this study.

  9. Characterization of scatter in digital mammography from use of Monte Carlo simulations and comparison to physical measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.

    2014-11-01

    Purpose: Monte Carlo simulations were performed with the goal of verifying previously published physical measurements characterizing scatter as a function of apparent thickness. A secondary goal was to provide a way of determining what effect tissue glandularity might have on the scatter characteristics of breast tissue. The overall reason for characterizing mammography scatter in this research is the application of these data to an image processing-based scatter-correction program. Methods: MCNPX was used to simulate scatter from an infinitesimal pencil beam using typical mammography geometries and techniques. The spreading of the pencil beam was characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and the presence or absence of a grid. The SF was determined by separating scatter and primary by the angle of incidence on the detector, then finding the ratio of the measured scatter to the total number of detected events. The accuracy of the MRE was determined by placing ring-shaped tallies around the impulse and fitting those data to the point-spread function (PSF) equation using the value for MRE derived from the physical measurements. The goodness-of-fit was determined for each data set as a means of assessing the accuracy of the physical MRE data. The effect of breast glandularity on the SF, MRE, and apparent tissue thickness was also considered for a limited number of techniques. Results: The agreement between the physical measurements and the results of the Monte Carlo simulations was assessed. With a grid, the SFs ranged from 0.065 to 0.089, with absolute differences between the measured and simulated SFs averaging 0.02. Without a grid, the range was 0.28–0.51, with absolute differences averaging −0.01. The goodness-of-fit values comparing the Monte Carlo data to the PSF from the physical measurements ranged from 0.96 to 1.00 with a grid and 0.65 to 0.86 without a grid. Analysis of the data suggested that the nongrid data could be better described by a biexponential function than the single exponential used here. The simulations assessing the effect of breast composition on SF and MRE showed only a slight impact on these quantities. When compared to a mix of 50% glandular/50% adipose tissue, the impact of substituting adipose or glandular breast compositions on the apparent thickness of the tissue was about 5%. Conclusions: The findings show agreement between the physical measurements published previously and the Monte Carlo simulations presented here; the resulting data can therefore be used more confidently for an application such as image processing-based scatter correction. The findings also suggest that breast composition does not have a major impact on the scatter characteristics of breast tissue. Application of the scatter data to the development of a scatter-correction software program can be simplified by ignoring the variations in density among breast tissues.
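
    As a rough illustration of the two descriptors used above, the sketch below computes a scatter fraction from synthetic pencil-beam tallies and fits a radially symmetric single-exponential spread to the ring data. The geometry, count levels, and the exponential kernel are assumptions for demonstration only (the record itself notes that a biexponential may describe non-grid data better), and the fitted decay length is a stand-in for, not a definition of, the MRE.

    ```python
    # Illustrative sketch (not the authors' code): estimate a scatter fraction and fit a
    # single-exponential radial spread to ring tallies from a pencil-beam simulation.
    # The synthetic tallies and the kernel form below are assumptions for demonstration.
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic "Monte Carlo" output: primary counts at r = 0 plus scatter tallied in rings.
    r_mid = np.linspace(0.5, 49.5, 50)          # ring mid-radii [mm], hypothetical geometry
    ring_area = 2.0 * np.pi * r_mid * 1.0       # areas of 1 mm wide rings
    true_decay = 12.0                           # mm, assumed radial decay length
    primary_counts = 9.2e5
    scatter_counts = 6.5e4 * np.exp(-r_mid / true_decay) * ring_area / ring_area.sum()

    # Scatter fraction: detected scatter over total detected events.
    sf = scatter_counts.sum() / (scatter_counts.sum() + primary_counts)

    # Fit a radially symmetric exponential spread model to the per-area scatter signal.
    def radial_psf(r, amplitude, decay):
        return amplitude * np.exp(-r / decay)

    signal_per_area = scatter_counts / ring_area
    (amp_fit, decay_fit), _ = curve_fit(radial_psf, r_mid, signal_per_area,
                                        p0=(signal_per_area[0], 10.0))

    print(f"scatter fraction = {sf:.3f}, fitted radial decay length = {decay_fit:.1f} mm")
    ```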

  10. Effect of wave function on the proton induced L XRP cross sections for {sub 62}Sm and {sub 74}W

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shehla,; Kaur, Rajnish; Kumar, Anil

    The L{sub k} (k = l, α, β, γ) X-ray production cross sections have been calculated for {sub 74}W and {sub 62}Sm at different incident proton energies ranging from 1 to 5 MeV using theoretical data sets of different physical parameters, namely, the L{sub i} (i = 1-3) sub-shell X-ray emission rates based on the Dirac-Fock (DF) model, the fluorescence and Coster-Kronig yields based on the Dirac-Hartree-Slater (DHS) model, and two sets of proton ionization cross sections based on the DHS model and the ECPSSR, in order to assess the influence of the wave function on the XRP cross sections. The calculated cross sections have been compared with the measured cross sections reported in the recent compilation to check the reliability of the calculated values.

  11. Efficiency and Accuracy of Time-Accurate Turbulent Navier-Stokes Computations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Sanetrik, Mark D.; Biedron, Robert T.; Melson, N. Duane; Parlette, Edward B.

    1995-01-01

    The accuracy and efficiency of two types of subiterations in both explicit and implicit Navier-Stokes codes are explored for unsteady laminar circular-cylinder flow and unsteady turbulent flow over an 18-percent-thick circular-arc (biconvex) airfoil. Grid and time-step studies are used to assess the numerical accuracy of the methods. Nonsubiterative time-stepping schemes and schemes with physical time subiterations are subject to time-step limitations in practice that are removed by pseudo time sub-iterations. Computations for the circular-arc airfoil indicate that a one-equation turbulence model predicts the unsteady separated flow better than an algebraic turbulence model; also, the hysteresis with Mach number of the self-excited unsteadiness due to shock and boundary-layer separation is well predicted.

  12. Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014

    NASA Astrophysics Data System (ADS)

    Junod, R.; Christy, J. R.

    2016-12-01

    Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively unused parameter in understanding the current state of climate, but is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (i.e. HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early-twentieth century through to the present era. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT, (20S-70N). This time series was homogenized to correct for the many biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using the kriging interpolation technique. This study will present results which quantify the variability and trends and compare to current trends of other related datasets that include HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).

  13. Entorhinal cortex receptive fields are modulated by spatial attention, even without movement

    PubMed Central

    König, Peter; König, Seth; Buffalo, Elizabeth A

    2018-01-01

    Grid cells in the entorhinal cortex allow for the precise decoding of position in space. Along with potentially playing an important role in navigation, grid cells have recently been hypothesized to make a general contribution to mental operations. A prerequisite for this hypothesis is that grid cell activity does not critically depend on physical movement. Here, we show that movement of covert attention, without any physical movement, also elicits spatial receptive fields with a triangular tiling of space. In monkeys trained to maintain central fixation while covertly attending to a stimulus moving in the periphery, we identified a significant population (20/141, 14% of neurons at an FDR <5%) of entorhinal cells with spatially structured receptive fields. This contrasts with recordings obtained in the hippocampus, where grid-like representations were not observed. Our results provide evidence that spatial representations in macaque entorhinal cortex do not rely on physical movement. PMID:29537964

  14. Physical nature of longevity of light actinides in dynamic failure phenomenon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uchaev, A. Ya., E-mail: uchaev@expd.vniief.ru; Punin, V. T.; Selchenkova, N. I.

    It is shown in this work that the physical nature of the longevity of light actinides under extreme conditions in a range of nonequilibrium states of t ∼ 10{sup –6}–10{sup –10} s is determined by the time needed for the formation of a critical concentration of a cascade of failure centers, which changes the connectivity of the body. These centers form a percolation cluster. The longevity is composed of the waiting time t{sub w} for the appearance of failure centers and the clusterization time t{sub c} of the cascade of failure centers, when connectivity in the system of failure centers and the percolation cluster arise. A unique mechanism of the dynamic failure process, a unique order parameter, and an equal dimensionality of the space in which the process occurs determine the physical nature of the longevity of metals, including fissionable materials.

  15. Investigations of possible states for coexistence of superconductivity and ferromagnetism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ham, T.E.

    1984-01-01

    Ginzburg-Landau theory is used to investigate states in which both superconductivity and ferromagnetism exist simultaneously in certain rare-earth ternary compounds. The spontaneous vortex state of Kuper, Revzen and Ron is reexamined and extended to include magnetic oscillations within each vortex cell and the existence of antiferromagnetically aligned vortices. The linearly polarized state of Greenside, Blount and Varma is reinvestigated in what appears to be a more physically acceptable range of parameters that are used in the Ginzburg-Landau free energy functional. The square antiferromagnetic vortex lattice state proposed by Hu and Ham is investigated here for the first time, energetically compared to the states proposed by Kuper, et al. and Greenside, et al., and used to model the coexistence state observed in ErRh/sub 4/B/sub 4/. The results show that this square antiferromagnetic vortex lattice state is energetically favored over the linearly polarized state in a large parameter and temperature range. Such a lattice also appears to be a good model to explain many of the experimental observations made on ErRh/sub 4/B/sub 4/. Thus, it is felt that this vortex lattice is the best model yet examined to explain the coexistence state in ErRh/sub 4/B/sub 4/.

  16. Geographic patterns of carbon dioxide emissions from fossil-fuel burning, hydraulic cement production, and gas flaring on a one degree by one degree grid cell basis: 1950 to 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brenkert, A.L.; Andres, R.J.; Marland, G.

    1997-03-01

    Data sets of one degree latitude by one degree longitude carbon dioxide (CO{sub 2}) emissions in units of thousand metric tons of carbon (C) per year from anthropogenic sources have been produced for 1950, 1960, 1970, 1980 and 1990. Detailed geographic information on CO{sub 2} emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional and national annual estimates for 1950 through 1992 were published previously. Those national, annual CO{sub 2} emission estimates were based on statistics on fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption and trade data, using the methods of Marland and Rotty. The national annual estimates were combined with gridded one-degree data on political units and 1984 human populations to create the new gridded CO{sub 2} emission data sets. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mix is uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in emissions over time are apparent for most areas.
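
    The population-proxy downscaling described above reduces to spreading each national total over that country's grid cells in proportion to cell population. A minimal sketch, with toy arrays in place of the actual country masks and population grid:

    ```python
    # Minimal sketch of population-proxy downscaling: national CO2 totals are spread over
    # a country's one-degree cells in proportion to cell population. Arrays and values are
    # toy placeholders, not the actual inputs used for the published data sets.
    import numpy as np

    # Toy 1-degree grids: country code per cell and population per cell.
    country_id = np.array([[1, 1, 2],
                           [1, 2, 2]])
    population = np.array([[10.0, 30.0, 5.0],
                           [60.0,  0.0, 45.0]])

    # National emission totals (thousand metric tons C per year), keyed by country code.
    national_totals = {1: 500.0, 2: 200.0}

    emissions = np.zeros_like(population)
    for code, total in national_totals.items():
        mask = country_id == code
        pop_in_country = population[mask].sum()
        # Per-capita emissions assumed uniform within a country (the first-order assumption
        # noted in the record); cells with zero population receive zero emissions.
        emissions[mask] = total * population[mask] / pop_in_country

    print(emissions)   # gridded emissions; the sum over each country's cells equals its total
    ```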

  17. The importance of band tail recombination on current collection and open-circuit voltage in CZTSSe solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, James E.; Purdue University, West Lafayette, Indiana 47907; Hages, Charles J.

    2016-07-11

    Cu{sub 2}ZnSn(S,Se){sub 4} (CZTSSe) solar cells typically exhibit high short-circuit current density (J{sub sc}), but have reduced cell efficiencies relative to other thin film technologies due to a deficit in the open-circuit voltage (V{sub oc}), which prevents these devices from becoming commercially competitive. Recent research has attributed the low V{sub oc} in CZTSSe devices to small scale disorder that creates band tail states within the absorber band gap, but the physical processes responsible for this V{sub oc} reduction have not been elucidated. In this paper, we show that carrier recombination through non-mobile band tail states has a strong voltage dependence and is a significant performance-limiting factor, and including these effects in simulation allows us to simultaneously explain the V{sub oc} deficit, reduced fill factor, and voltage-dependent quantum efficiency with a self-consistent set of material parameters. Comparisons of numerical simulations to measured data show that reasonable values for the band tail parameters (characteristic energy, capture rate) can account for the observed low V{sub oc}, high J{sub sc}, and voltage-dependent collection efficiency. These results provide additional evidence that the presence of band tail states accounts for the low efficiencies of CZTSSe solar cells and further demonstrates that recombination through non-mobile band tail states is the dominant efficiency limiting mechanism.

  18. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega(sub b) = 1, and sigma(sub 8) = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L(exp 3), where L = 64 h(exp -1) Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N(exp 3) = (32(exp 3), 64(exp 3), 128(exp 3), and 256(exp 3)) cells, the SPH codes at N(exp 3) = 32(exp 3) and 64(exp 3) particles. Results were then rebinned to a 16(exp 3) grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as (T), (rho(exp 2))(exp 1/2) persist at the 3%-17% level. The codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho(exp 2)) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.

  19. Experimental Study of Vane Heat Transfer and Aerodynamics at Elevated Levels of Turbulence

    NASA Technical Reports Server (NTRS)

    Ames, Forrest E.

    1994-01-01

    A four vane subsonic cascade was used to investigate how free stream turbulence influences pressure surface heat transfer. A simulated combustor turbulence generator was built to generate high-level (13 percent), large-scale (Lu approximately 44 percent inlet span) turbulence. The mock combustor was also moved upstream to generate a moderate level (8.3 percent) of turbulence for comparison to smaller scale grid generated turbulence (7.8 percent). The high-level combustor turbulence caused an average pressure surface heat transfer augmentation of 56 percent above the low turbulence baseline. The smaller scale grid turbulence produced the next greatest effect on heat transfer and demonstrated the importance of scale on heat transfer augmentation. In general, the heat transfer scaling parameter U(sub infinity) TU(sub infinity) LU(sub infinity)(exp -1/3) was found to hold for the turbulence. Heat transfer augmentation was also found to scale approximately on Re(sub ex)(exp 1/3) at constant turbulence conditions. Some evidence of turbulence intensification, in terms of elevated dissipation rates, was found along the pressure surface outside the boundary layer. However, based on the level of dissipation and the resulting heat transfer augmentation, the amplification of turbulence has only a moderate effect on pressure surface heat transfer. The flow field turbulence does drive turbulent production within the boundary layer, which in turn causes the high levels of heat transfer augmentation. Unlike heat transfer, the flow field straining was found to have a significant effect on turbulence isotropy. On examination of the one-dimensional spectra for u' and v', the effect on isotropy was largely limited to the lower wavenumber spectra; the higher wavenumber spectra showed little or no change. The high-level, large-scale turbulence was found to have a strong influence on wake development. The free stream turbulence significantly enhanced mixing, resulting in broader and shallower wakes than the baseline case. High levels of flow field turbulence were found to correlate with a significant increase in total pressure loss in the core of the flow. Documenting the wake growth and characteristics provides boundary conditions for the downstream rotor.

  20. Variational estimation of process parameters in a simplified atmospheric general circulation model

    NASA Astrophysics Data System (ADS)

    Lv, Guokun; Koehl, Armin; Stammer, Detlef

    2016-04-01

    Parameterizations are used to simulate the effects of unresolved sub-grid-scale processes in current state-of-the-art climate models. The values of the process parameters, which determine the model's climatology, are usually adjusted manually to reduce the difference between the model mean state and the observed climatology. This process requires detailed knowledge of the model and its parameterizations. In this work, a variational method was used to estimate process parameters in the Planet Simulator (PlaSim). The adjoint code was generated using automatic differentiation of the source code. Some hydrological processes were switched off to remove the influence of zero-order discontinuities. In addition, the nonlinearity of the model limits the feasible assimilation window to about 1 day, which is too short to tune the model's climatology. To extend the feasible assimilation window, nudging terms for all state variables were added to the model's equations, which essentially suppress all unstable directions. In identical twin experiments, we found that the feasible assimilation window could be extended to over one year and accurate parameters could be retrieved. Although the nudging terms transform to a damping of the adjoint variables and therefore tend to erase the information of the data over time, assimilating climatological information is shown to provide sufficient information on the parameters. Moreover, the mechanism of this regularization is discussed.
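
    The role of the nudging terms can be illustrated on a toy chaotic system: relaxing the model state toward observations keeps a long-window parameter misfit well behaved, so an optimizer can recover the process parameter. This is a conceptual sketch only; the Lorenz-63 equations, the nudging coefficient, and the scalar cost below are assumptions with no direct counterpart in PlaSim.

    ```python
    # Conceptual sketch (not the PlaSim setup): nudging a chaotic toy model toward
    # observations so that a long-window parameter estimation stays well behaved.
    import numpy as np
    from scipy.optimize import minimize_scalar

    DT, NSTEPS = 0.01, 5000                      # ~50 time units, a "long" window here

    def lorenz_step(state, rho, nudge_to=None, nudge_coef=0.0):
        x, y, z = state
        tend = np.array([10.0 * (y - x),
                         x * (rho - z) - y,
                         x * y - (8.0 / 3.0) * z])
        if nudge_to is not None:
            tend += nudge_coef * (nudge_to - state)   # relaxation toward the observed state
        return state + DT * tend

    def run(rho, obs=None, nudge_coef=0.0):
        state = np.array([1.0, 1.0, 20.0])
        traj = np.empty((NSTEPS, 3))
        for k in range(NSTEPS):
            target = obs[k] if obs is not None else None
            state = lorenz_step(state, rho, target, nudge_coef)
            traj[k] = state
        return traj

    obs = run(rho=28.0)                          # synthetic observations, true parameter 28

    def cost(rho):
        """Model-minus-observation misfit over the full window, with nudging switched on."""
        return np.mean((run(rho, obs=obs, nudge_coef=2.0) - obs) ** 2)

    best = minimize_scalar(cost, bounds=(24.0, 32.0), method="bounded")
    print("estimated rho:", best.x)              # recovers ~28 despite the chaotic dynamics
    ```

    Without the nudging term the same cost becomes noisy in the parameter once the window exceeds the model's predictability time, which is the behavior the record describes for windows longer than about a day.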

  1. Large eddy simulation of premixed and non-premixed combustion in a Stagnation Point Reverse Flow combustor

    NASA Astrophysics Data System (ADS)

    Undapalli, Satish

    A new combustor referred to as Stagnation Point Reverse Flow (SPRF) combustor has been developed at Georgia Tech to meet the increasingly stringent emission regulations. The combustor incorporates a novel design to meet the conflicting requirements of low pollution and high stability in both premixed and non-premixed modes. The objective of this thesis work is to perform Large Eddy Simulations (LES) on this lab-scale combustor and elucidate the underlying physics that has resulted in its excellent performance. To achieve this, numerical simulations have been performed in both the premixed and non-premixed combustion modes, and velocity field, species field, entrainment characteristics, flame structure, emissions, and mixing characteristics have been analyzed. Simulations have been carried out first for a non-reactive case to resolve relevant fluid mechanics without heat release by the computational grid. The computed mean and RMS quantities in the non-reacting case compared well with the experimental data. Next, the simulations were extended for the premixed reactive case by employing different sub-grid scale combustion chemistry closures: Eddy Break Up (EBU), Artificially Thickened Flame (TF) and Linear Eddy Mixing (LEM) models. Results from the EBU and TF models exhibit reasonable agreement with the experimental velocity field. However, the computed thermal and species fields have noticeable discrepancies. Only LEM with LES (LEMLES), which is an advanced scalar approach, has been able to accurately predict both the velocity and species fields. Scalar mixing plays an important role in combustion, and this is solved directly at the sub-grid scales in LEM. As a result, LEM accurately predicts the scalar fields. Due to the two way coupling between the super-grid and sub-grid quantities, the velocity predictions also compare very well with the experiments. In other approaches, the sub-grid effects have been either modeled using conventional approaches (EBU) or need some ad hoc adjustments to account these effects accurately (TF). The results from LEMLES, using a reduced chemical mechanism, have been analyzed in the premixed mode. The results show that mass entrainment occurs along the shear layer in the combustor. The entrained mass carries products into the reactant stream and provides reactant preheating. Thus, product entrainment enhances the reaction rates and help stabilize the flame even at very lean conditions. These products have been shown to enter into the flame through local extinction zones present on the flame surface. The flame structure has been further analyzed, and the combustion mode was found to be primarily in thin reaction zones. Closer to the injector, there are isolated regions, where the combustion mode is in broken reaction zones, while the downstream flame structure is closer to a flamelet regime. The emissions in the combustor have been studied using simple global mechanisms for NO x. Computations have shown extremely low NOx values, comparable to the measured emissions. These low emissions have been shown to be primarily due to the low temperatures in the combustor. LEMLES computations have also been performed with a detailed chemistry to capture more accurate flame structure. The flame in the detailed chemistry case shows more extinction zones close to the injector than that in the reduced chemical mechanism. The LEMLES approach has also been used to resolve the combustion mode in the non-premixed case. 
The studies have indicated that the mixing of the fuel and air close to the injector controls the combustion process. The predictions in the near field have been shown to be very sensitive to the inflow conditions. Analysis has shown that the fuel and air mixing occurs to lean proportions in the combustor before any burning takes place. The flame structure in the non-premixed mode was very similar to the premixed mode. Along with the fuel air mixing, the products also mixed with the reactants and provided the preheating effects to stabilize the flame in the downstream region of the combustor.

  2. Understanding generalized inversions of nuclear magnetic resonance transverse relaxation time in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, J., E-mail: JMitchell16@slb.com; Chandrasekera, T. C.

    2014-12-14

    The nuclear magnetic resonance transverse relaxation time T{sub 2}, measured using the Carr-Purcell-Meiboom-Gill (CPMG) experiment, is a powerful method for obtaining unique information on liquids confined in porous media. Furthermore, T{sub 2} provides structural information on the porous material itself and has many applications in petrophysics, biophysics, and chemical engineering. Robust interpretation of T{sub 2} distributions demands appropriate processing of the measured data since T{sub 2} is influenced by diffusion through magnetic field inhomogeneities occurring at the pore scale, caused by the liquid/solid susceptibility contrast. Previously, we introduced a generic model for the diffusion exponent of the form −ant{sub e}{sup k} (where n is the number and t{sub e} the temporal separation of spin echoes, and a is a composite diffusion parameter) in order to distinguish the influence of relaxation and diffusion in CPMG data. Here, we improve the analysis by introducing an automatic search for the optimum power k that best describes the diffusion behavior. This automated method is more efficient than the manual trial-and-error grid search adopted previously, and avoids variability through subjective judgments of experimentalists. Although our method does not avoid the inherent assumption that the diffusion exponent depends on a single k value, we show through simulation and experiment that it is robust in measurements of heterogeneous systems that violate this assumption. In this way, we obtain quantitative T{sub 2} distributions from complicated porous structures and demonstrate the analysis with examples of ceramics used for filtration and catalysis, and limestone of relevance to the construction and petroleum industries.
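
    The automatic search described above can be sketched as a one-dimensional optimization over the power k with a linear least-squares fit of the remaining parameters nested inside it. The synthetic decays, echo spacings, and parameter values below are assumptions for illustration and do not reproduce the authors' inversion.

    ```python
    # Illustrative sketch: choose the diffusion-exponent power k in -a*n*t_e^k automatically
    # by nesting a linear fit of (ln M0, 1/T2, a) inside a 1-D optimisation over k.
    # Synthetic CPMG decays with assumed parameters; not the authors' processing chain.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(1)
    n_echoes = np.arange(1, 201)                       # echo index n
    t_e_list = np.array([0.5e-3, 1.0e-3, 2.0e-3])      # echo spacings [s], hypothetical
    T2_TRUE, A_TRUE, K_TRUE = 0.15, 2.0e6, 3.0         # k = 3 resembles unrestricted diffusion

    # Simulated log-magnitude CPMG decays for each echo spacing, with a little noise.
    data = []
    for t_e in t_e_list:
        ln_m = -n_echoes * t_e / T2_TRUE - A_TRUE * n_echoes * t_e**K_TRUE
        data.append((t_e, ln_m + 0.01 * rng.standard_normal(ln_m.size)))

    def residual(k):
        """Joint best-fit residual over all decays when the diffusion exponent is -a*n*t_e^k."""
        rows, targets = [], []
        for t_e, ln_m in data:
            # ln M = ln M0 - n*t_e/T2 - a*n*t_e^k is linear in (ln M0, 1/T2, a) for fixed k,
            # so these three parameters are shared across decays and fitted by least squares.
            rows.append(np.column_stack([np.ones_like(n_echoes, dtype=float),
                                         -n_echoes * t_e,
                                         -n_echoes * t_e**k]))
            targets.append(ln_m)
        design, target = np.vstack(rows), np.concatenate(targets)
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.sum((design @ coef - target) ** 2)

    best = minimize_scalar(residual, bounds=(1.5, 4.0), method="bounded")
    print("optimum power k ~", round(best.x, 2))       # close to 3 for these synthetic data
    ```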

  3. EVOLUTION OF CATACLYSMIC VARIABLES AND RELATED BINARIES CONTAINING A WHITE DWARF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalomeni, B.; Rappaport, S.; Molnar, M.

    We present a binary evolution study of cataclysmic variables (CVs) and related systems with white dwarf (WD) accretors, including, for example, AM CVn systems, classical novae, supersoft X-ray sources (SXSs), and systems with giant donor stars. Our approach intentionally avoids the complications associated with population synthesis algorithms, thereby allowing us to present the first truly comprehensive exploration of all of the subsequent binary evolution pathways that zero-age CVs might follow (assuming fully non-conservative Roche-lobe overflow onto an accreting WD) using the sophisticated binary stellar evolution code MESA. The grid consists of 56,000 initial models, including 14 WD accretor masses, 43 donor-star masses (0.1–4.7 M{sub ⊙}), and 100 orbital periods. We explore evolution tracks in the orbital period and donor-mass (P{sub orb}–M{sub don}) plane in terms of evolution dwell times, masses of the WD accretor, accretion rate, and chemical composition of the center and surface of the donor star. We report on the differences among the standard CV tracks, those with giant donor stars, and ultrashort period systems. We show where in parameter space one can expect to find SXSs, present a diagnostic to distinguish among different evolutionary paths to forming AM CVn binaries, quantify how the minimum orbital period in CVs depends on the chemical composition of the donor star, and update the P{sub orb}(M{sub wd}) relation for binaries containing WDs whose progenitors lost their envelopes via stable Roche-lobe overflow. Finally, we indicate where in the P{sub orb}–M{sub don} plane the accretion disks will tend to be stable against the thermal-viscous instability, and where gravitational radiation signatures may be found with LISA.

  4. Towards a General Turbulence Model for Planetary Boundary Layers Based on Direct Statistical Simulation

    NASA Astrophysics Data System (ADS)

    Skitka, J.; Marston, B.; Fox-Kemper, B.

    2016-02-01

    Sub-grid turbulence models for planetary boundary layers are typically constructed additively, starting with local flow properties and including non-local (KPP) or higher order (Mellor-Yamada) parameters until a desired level of predictive capacity is achieved or a manageable threshold of complexity is surpassed. Such approaches are necessarily limited in general circumstances, like global circulation models, by their being optimized for particular flow phenomena. By building a model reductively, starting with the infinite hierarchy of turbulence statistics, truncating at a given order, and stripping degrees of freedom from the flow, we offer the prospect of a turbulence model and investigative tool that is equally applicable to all flow types and able to take full advantage of the wealth of nonlocal information in any flow. Direct statistical simulation (DSS) that is based upon expansion in equal-time cumulants can be used to compute flow statistics of arbitrary order. We investigate the feasibility of a second-order closure (CE2) by performing simulations of the ocean boundary layer in a quasi-linear approximation for which CE2 is exact. As oceanographic examples, wind-driven Langmuir turbulence and thermal convection are studied by comparison of the quasi-linear and fully nonlinear statistics. We also characterize the computational advantages and physical uncertainties of CE2 defined on a reduced basis determined via proper orthogonal decomposition (POD) of the flow fields.

  5. A Storm Surge and Inundation Model of the Back River Watershed at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Loftis, Jon Derek; Wang, Harry V.; DeYoung, Russell J.

    2013-01-01

    This report on a Virginia Institute of Marine Science project demonstrates that the sub-grid modeling technology (now part of the Chesapeake Bay Inundation Prediction System, CIPS) can incorporate high-resolution Lidar measurements provided by NASA Langley Research Center into the sub-grid model framework to resolve detailed topographic features for use as a hydrological transport model for run-off simulations within NASA Langley and Langley Air Force Base. Rainfall over land accumulates in the ditches/channels resolved via the model sub-grid, and this capability was tested by simulating the run-off induced by heavy precipitation. Possessing both storm surge and run-off capabilities, the CIPS model was then applied to simulate real storm events, starting with Hurricane Isabel in 2003. It is shown that the model can generate highly accurate on-land inundation maps, as demonstrated by excellent comparison of the Langley tidal gauge time series data (CAPABLE.larc.nasa.gov) and spatial patterns of real storm wrack line measurements with the model results simulated during Hurricanes Isabel (2003), Irene (2011), and a 2009 Nor'easter. With confidence built upon the model's performance, sea level rise scenarios from the ICCP (International Climate Change Partnership) were also included in the model scenario runs to simulate future inundation cases.

  6. Recent Advances in High-Resolution Regional Climate Modeling at the U.S. Environmental Protection Agency

    NASA Astrophysics Data System (ADS)

    Alapaty, Kiran; Bullock, O. Russell; Herwehe, Jerold; Spero, Tanya; Nolte, Christopher; Mallard, Megan

    2014-05-01

    The Regional Climate Modeling Team at the U.S. Environmental Protection Agency has been improving the quality of regional climate fields generated by the Weather Research and Forecasting (WRF) model. Active areas of research include improving core physics within the WRF model and adapting the physics for regional climate applications, improving the representation of inland lakes that are unresolved by the driving fields, evaluating nudging strategies, and devising techniques to demonstrate value added by dynamical downscaling. These research efforts have been conducted using reanalysis data as driving fields, and then their results have been applied to downscale data from global climate models. The goals of this work are to equip environmental managers and policy/decision makers in the U.S. with science, tools, and data to inform decisions related to adapting to and mitigating the potential impacts of climate change on air quality, ecosystems, and human health. Our presentation will focus mainly on one area of the Team's research: Development and testing of a seamless convection parameterization scheme. For the continental U.S., one of the impediments to high-resolution (~3 to 15 km) climate modeling is related to the lack of a seamless convection parameterization that works across many scales. Since many convection schemes are not developed to work at those "gray scales", they often lead to excessive precipitation during warm periods (e.g., summer). The Kain-Fritsch (KF) convection parameterization in the WRF model has been updated such that it can be used seamlessly across spatial scales down to ~1 km grid spacing. First, we introduced subgrid-scale cloud and radiation interactions that had not been previously considered in the KF scheme. Then, a scaling parameter was developed to introduce scale-dependency in the KF scheme for use with various processes. In addition, we developed new formulations for: (1) convective adjustment timescale; (2) entrainment of environmental air; (3) impacts of convective updraft on grid-scale vertical velocity; (4) convective cloud microphysics; (5) stabilizing capacity; (6) elimination of double counting of precipitation; and (7) estimation of updraft mass flux at the lifting condensation level. Some of these scale-dependent formulations make the KF scheme operable at all scales up to about sub-kilometer grid resolution. In this presentation, regional climate simulations using the WRF model will be presented to demonstrate the effects of these changes to the KF scheme. Additionally, we briefly present results obtained from the improved representation of inland lakes, various nudging strategies, and added value of dynamical downscaling of regional climate. Requesting for a plenary talk for the session: "Regional climate modeling, including CORDEX" (session number CL6.4) at the EGU 2014 General Assembly, to be held 27 April - 2 May 2014 in Vienna, Austria.

  7. Low energy theorems and the unitarity bounds in the extra U(1) superstring inspired E{sub 6} models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, N.K.; Saxena, Pranav; Nagawat, Ashok K.

    2005-11-01

    The conventional method using low energy theorems derived by Chanowitz et al. [Phys. Rev. Lett. 57, 2344 (1986)] does not seem to lead to an explicit unitarity limit in the scattering processes of longitudinally polarized gauge bosons for the high energy case in the extra U(1) superstring inspired models, commonly known as the {eta} model, emanating from the E{sub 6} group of superstring theory. We have made use of an alternative procedure given by Durand and Lopez [Phys. Lett. B 217, 463 (1989)], which is applicable to supersymmetric grand unified theories. Explicit unitarity bounds on the superpotential couplings (identified as Yukawa couplings) are obtained both from the unitarity constraints and from a renormalization group equations (RGE) analysis at one-loop level utilizing critical couplings concepts implying divergence of the scalar coupling at M{sub G}. These are found to be consistent with finiteness over the entire range M{sub Z}{<=}{radical}(s){<=}M{sub G}, i.e., from the grand unification scale to the weak scale. For completeness, a similar approach has been applied to the other models, i.e., the {chi}, {psi}, and {nu} models emanating from E{sub 6}, and it has been noticed that at the weak scale the unitarity bounds on Yukawa couplings do not differ significantly among the E{sub 6} extra U(1) models except for the case of the {chi} model in 16 representations. For the case of the E{sub 6}-{eta} model ({beta}{sub E} congruent with 9.64), the analysis using the unitarity constraints leads to the following bounds on various parameters: {lambda}{sub t(max.)}(M{sub Z})=1.294, {lambda}{sub b(max.)}(M{sub Z})=1.278, {lambda}{sub H(max.)}(M{sub Z})=0.955, {lambda}{sub D(max.)}(M{sub Z})=1.312. The analytical analysis of the RGE at the one-loop level provides the following critical bounds on superpotential couplings: {lambda}{sub t,c}(M{sub Z}) congruent with 1.295, {lambda}{sub b,c}(M{sub Z}) congruent with 1.279, {lambda}{sub H,c}(M{sub Z}) congruent with 0.968, {lambda}{sub D,c}(M{sub Z}) congruent with 1.315. Thus the superpotential coupling values obtained by both approaches are in good agreement. Theoretically, we have obtained bounds on physical mass parameters using the unitarity constrained superpotential couplings. The bounds are as follows: (i) the absolute upper bound on the top quark mass is m{sub t}{<=}225 GeV; (ii) the upper bound on the lightest neutral Higgs boson mass at the tree level is m{sub H{sub 2}{sup 0}}{sup tree}{<=}169 GeV, and after the inclusion of the one-loop radiative correction it is m{sub H{sub 2}{sup 0}}{<=}229 GeV when {lambda}{sub t}{ne}{lambda}{sub b} at the grand unified theory scale. On the other hand, these are m{sub H{sub 2}{sup 0}}{sup tree}{<=}159 GeV and m{sub H{sub 2}{sup 0}}{<=}222 GeV, respectively, when {lambda}{sub t}={lambda}{sub b} at the grand unified theory scale. A plausible range for the D-quark mass as a function of the mass scale M{sub Z{sub 2}} is m{sub D}{approx_equal}O(3 TeV) for M{sub Z{sub 2}}{approx_equal}O(1 TeV) for the favored values of tan{beta}{<=}1. The bounds on the aforesaid physical parameters in the case of the {chi}, {psi}, and {nu} models in the 27 representation are almost identical with those of the {eta} model and are consistent with present day experimental precision measurements.

  8. Preparing CAM-SE for Multi-Tracer Applications: CAM-SE-Cslam

    NASA Astrophysics Data System (ADS)

    Lauritzen, P. H.; Taylor, M.; Goldhaber, S.

    2014-12-01

    The NCAR-DOE spectral element (SE) dynamical core comes from the HOMME (High-Order Modeling Environment; Dennis et al., 2012) and it is available in CAM. The CAM-SE dynamical core is designed with intrinsic mimetic properties guaranteeing total energy conservation (to time-truncation errors) and mass-conservation, and has demonstrated excellent scalability on massively parallel compute platforms (Taylor, 2011). For applications involving many tracers such as chemistry and biochemistry modeling, CAM-SE has been found to be significantly more computationally costly than the current "workhorse" model CAM-FV (Finite-Volume; Lin 2004). Hence a multi-tracer efficient scheme, called the CSLAM (Conservative Semi-Lagrangian Multi-tracer; Lauritzen et al., 2011) scheme, has been implemented in the HOMME (Erath et al., 2012). The CSLAM scheme has recently been cast in flux-form in HOMME so that it can be coupled to the SE dynamical core through conventional flux-coupling methods where the SE dynamical core provides background air mass fluxes to CSLAM. Since the CSLAM scheme makes use of a finite-volume gnomonic cubed-sphere grid and hence does not operate on the SE quadrature grid, the capability of running tracer advection, the physical parameterization suite and dynamics on separate grids has been implemented in CAM-SE. The default CAM-SE-CSLAM setup is to run physics on the quasi-equal area CSLAM grid. The capability of running physics on a different grid than the SE dynamical core may provide a more consistent coupling since the physics grid option operates with quasi-equal-area cell average values rather than non-equi-distant grid-point (SE quadrature point) values. Preliminary results on the performance of CAM-SE-CSLAM will be presented.

  9. An integral conservative gridding-algorithm using Hermitian curve interpolation.

    PubMed

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
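
    The core idea, interpolating the integrated data rather than the data themselves so that re-binning conserves the integral and avoids negative samples, can be sketched in a few lines. Here SciPy's shape-preserving PCHIP interpolant stands in for the parametrized Hermitian curve of the record; it is an assumption for illustration, not the authors' algorithm.

    ```python
    # Sketch of integral-conservative re-binning: interpolate the cumulative integral of the
    # source histogram with a shape-preserving cubic Hermite curve and difference it at the
    # target bin edges. PCHIP is a stand-in for the parametrized Hermitian curve described above.
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    def conservative_rebin(src_edges, src_values, dst_edges):
        """Re-bin histogrammed data so the overall integral is preserved."""
        cumulative = np.concatenate([[0.0], np.cumsum(src_values)])   # integral up to each edge
        curve = PchipInterpolator(src_edges, cumulative)              # monotone Hermite cubic
        return np.diff(curve(np.clip(dst_edges, src_edges[0], src_edges[-1])))

    # Example: a positive-definite quantity on a coarse grid re-sampled to a finer grid.
    src_edges = np.linspace(0.0, 10.0, 11)
    src_values = np.array([0.0, 1.0, 5.0, 9.0, 4.0, 2.0, 2.0, 1.0, 0.5, 0.0])
    dst_edges = np.linspace(0.0, 10.0, 41)

    fine = conservative_rebin(src_edges, src_values, dst_edges)
    print(fine.sum(), src_values.sum())    # both 24.5: the integral is conserved
    print(fine.min() >= 0.0)               # monotone interpolation avoids negative samples
    ```

    A monotone interpolant fixes the overshoot behavior at one extreme; the single user parameter in the published algorithm lets the amount of overshoot be tuned rather than eliminated outright.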

  10. Parallelization Issues and Particle-In-Cell Codes.

    NASA Astrophysics Data System (ADS)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. A consideration of the input data's effect on the simulation may lead to further improvements. For example, in the case of mean particle drift, it is often advantageous to partition the grid primarily along the direction of the drift. The particle-in-cell codes for this study were tested using physical parameters, which lead to predictable phenomena including plasma oscillations and two-stream instabilities. An overview of the most central references related to parallel particle codes is also given.

  11. Isochrones for old (>5 Gyr) stars and stellar populations. I. Models for –2.4 ≤ [Fe/H] ≤ +0.6, 0.25 ≤ Y ≤ 0.33, and –0.4 ≤ [α/Fe] ≤ +0.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VandenBerg, Don A.; Bergbusch, Peter A.; Ferguson, Jason W.

    2014-10-10

    Canonical grids of stellar evolutionary sequences have been computed for the helium mass-fraction abundances Y = 0.25, 0.29, and 0.33, and for iron abundances that vary from –2.4 to +0.4 (in 0.2 dex increments) when [α/Fe] = +0.4, or for the ranges –2.0 ≤ [Fe/H] ≤ +0.6 and –1.8 ≤ [Fe/H] ≤ +0.6 when [α/Fe] = 0.0 and –0.4, respectively. The grids, which consist of tracks for masses from 0.12 M{sub ⊙} to 1.1-1.5 M{sub ⊙} (depending on the metallicity), are based on up-to-date physics, including the gravitational settling of helium (but not metals diffusion). Interpolation software is provided to generate isochrones for arbitrary ages between ≈5 and 15 Gyr and any values of Y, [α/Fe], and [Fe/H] within the aforementioned ranges. Comparisons of isochrones with published color-magnitude diagrams (CMDs) for the open clusters M67 ([Fe/H] ≈0.0) and NGC 6791 ([Fe/H] ≈0.3) and for four of the metal-poor globular clusters (47 Tuc, M3, M5, and M92) indicate that the models for the observed metallicities do a reasonably good job of reproducing the locations and slopes of the cluster main sequences and giant branches. The same conclusion is reached from a consideration of plots of nearby subdwarfs that have accurate Hipparcos parallaxes and metallicities in the range –2.0 ≲ [Fe/H] ≲ –1.0 on various CMDs and on the (log T{sub eff}, M{sub V}) diagram. A relatively hot temperature scale similar to that derived in recent calibrations of the infrared flux method is favored by both the isochrones and the adopted color transformations, which are based on the latest MARCS model atmospheres.

  12. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid.

    PubMed

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-06-17

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely "archipelago micro-grid (MG)", which integrates the power grid and sensor networks to make the grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emission and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and the random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicates that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs in UCS and LCS.
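
    A schematic sketch of the scenario-sampling idea behind such a two-stage formulation: Monte Carlo draws of wind, solar and load stand in for the RES uncertainties, and an expected operation-plus-emission cost is evaluated for each candidate EV penetration level. The distributions, cost coefficients and single-variable decision below are invented for illustration and do not reproduce the SOP model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenarios = 1000

# First-stage decision candidates: fraction of the vehicle fleet that is electric.
penetration_levels = np.linspace(0.0, 1.0, 11)

# Monte Carlo scenarios for the uncertain renewables and load (illustrative units).
wind = rng.weibull(2.0, n_scenarios) * 1.5                 # MW
solar = np.clip(rng.normal(2.0, 0.8, n_scenarios), 0.0, None)
load = rng.normal(8.0, 1.0, n_scenarios)                   # MW

def expected_cost(p):
    """Second stage: buy whatever the renewables cannot cover from conventional
    generation, and charge an emission cost for the remaining conventional
    vehicles (all coefficients are made up for the sketch)."""
    ev_charging = 2.0 * p                                  # extra load from EVs
    shortfall = np.maximum(load + ev_charging - wind - solar, 0.0)
    fuel_cost = 50.0 * shortfall                           # $/MWh equivalent
    emission_cost = 30.0 * (1.0 - p)                       # CO2 cost of remaining CVs
    return np.mean(fuel_cost + emission_cost)

best = min(penetration_levels, key=expected_cost)
print(f"EV penetration minimising expected cost: {best:.1f}")
```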

  13. Towards Stochastic Optimization-Based Electric Vehicle Penetration in a Novel Archipelago Microgrid

    PubMed Central

    Yang, Qingyu; An, Dou; Yu, Wei; Tan, Zhengan; Yang, Xinyu

    2016-01-01

    Due to the advantage of avoiding upstream disturbance and voltage fluctuation from a power transmission system, Islanded Micro-Grids (IMG) have attracted much attention. In this paper, we first propose a novel self-sufficient Cyber-Physical System (CPS) supported by Internet of Things (IoT) techniques, namely “archipelago micro-grid (MG)”, which integrates the power grid and sensor networks to make the grid operation effective and is comprised of multiple MGs while disconnected from the utility grid. Electric Vehicles (EVs) are used to replace a portion of Conventional Vehicles (CVs) to reduce CO2 emission and operation cost. Nonetheless, the intermittent nature and uncertainty of Renewable Energy Sources (RESs) remain a challenging issue in managing energy resources in the system. To address these issues, we formalize the optimal EV penetration problem as a two-stage Stochastic Optimal Penetration (SOP) model, which aims to minimize the emission and operation cost in the system. Uncertainties coming from RESs (e.g., wind, solar, and load demand) are considered in the stochastic model, and the random parameters representing those uncertainties are captured by a Monte Carlo-based method. To enable the reasonable deployment of EVs in each MG, we develop two scheduling schemes, namely the Unlimited Coordinated Scheme (UCS) and the Limited Coordinated Scheme (LCS). An extensive simulation study based on a modified 9-bus system with three MGs has been carried out to show the effectiveness of our proposed schemes. The evaluation data indicates that our proposed strategy can reduce both the environmental pollution created by CO2 emissions and operation costs in UCS and LCS. PMID:27322281

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bright, Edward A.; Rose, Amy N.; Urban, Marie L.

    The LandScan data set is a worldwide population database compiled on a 30" x 30" latitude/longitude grid. Census counts (at sub-national level) were apportioned to each grid cell based on likelihood coefficients, which are based on land cover, slope, road proximity, high-resolution imagery, and other data sets. The LandScan data set was developed as part of the Oak Ridge National Laboratory (ORNL) Global Population Project for estimating ambient populations at risk.
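
    A toy sketch of likelihood-coefficient apportionment in the spirit of the LandScan description: a sub-national census count is distributed over the grid cells of its administrative unit in proportion to pre-computed likelihood weights; the weights, which LandScan derives from land cover, slope, road proximity and imagery, are replaced by random numbers here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative inputs: one census unit covering a 5x5 block of grid cells.
census_count = 12_500                    # people in the administrative unit
likelihood = rng.random((5, 5))          # stand-in for LandScan likelihood coefficients
likelihood[likelihood < 0.2] = 0.0       # e.g. uninhabitable cells get zero weight

# Apportion the census count proportionally to the likelihood coefficients.
weights = likelihood / likelihood.sum()
population_grid = census_count * weights

# The gridded population still sums to the census total.
assert np.isclose(population_grid.sum(), census_count)
```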

  15. How changing physical constants and violation of local position invariance may occur?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flambaum, V. V.; Shuryak, E. V.

    2008-04-04

    Light scalar fields very naturally appear in modern cosmological models, affecting such parameters of the Standard Model as the electromagnetic fine structure constant {alpha} and the dimensionless ratios of the electron or quark mass to the QCD scale, m{sub e,q}/{Lambda}{sub QCD}. Cosmological variations of these scalar fields should occur because of drastic changes of the matter composition in the Universe: the latest such event is rather recent (redshift z{approx}0.5), from matter to dark energy domination. In a two-brane model (which we use as a pedagogical example) these modifications are due to the changing distance to 'the second brane', a massive companion of 'our brane'. Back from extra dimensions, massive bodies (stars or galaxies) can also affect physical constants. They have a large scalar charge Q{sub d}, proportional to the number of particles, which produces a Coulomb-like scalar field {phi} = Q{sub d}/r. This leads to a variation of the fundamental constants proportional to the gravitational potential, e.g. {delta}{alpha}/{alpha} = k{sub {alpha}}{delta}(GM/rc{sup 2}). We compare different manifestations of this effect, which is usually called violation of local position invariance. The strongest limits, k{sub {alpha}}+0.17k{sub e} = (-3.5{+-}6)*10{sup -7}, are obtained from measurements of the dependence of atomic frequencies on the distance from the Sun (the distance varies due to the ellipticity of the Earth's orbit).

  16. Models for the modern power grid

    NASA Astrophysics Data System (ADS)

    Nardelli, Pedro H. J.; Rubido, Nicolas; Wang, Chengwei; Baptista, Murilo S.; Pomalaza-Raez, Carlos; Cardieri, Paulo; Latva-aho, Matti

    2014-10-01

    This article reviews different kinds of models for the electric power grid that can be used to understand the modern power system, the smart grid. From the physical network to abstract energy markets, we identify in the literature different aspects that co-determine the spatio-temporal multilayer dynamics of the power system. We start our review by showing how the generation, transmission and distribution characteristics of the traditional power grids are already subject to complex behaviour appearing as a result of the interplay between dynamics of the nodes and topology, namely synchronisation and cascade effects. When dealing with smart grids, the system complexity increases even more: on top of the physical network of power lines and controllable sources of electricity, the modernisation brings information networks, renewable intermittent generation, market liberalisation, prosumers, among other aspects. In this case, we forecast a dynamical co-evolution of the smart grid and other kinds of networked systems that cannot be understood in isolation. This review compiles recent results that model electric power grids as complex systems, going beyond purely technological aspects. From this perspective, we then indicate possible ways to incorporate the diverse co-evolving systems into the smart grid model using, for example, network theory and multi-agent simulation.

  17. Analysis of a grid ionospheric vertical delay and its bounding errors over West African sub-Saharan region

    NASA Astrophysics Data System (ADS)

    Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.

    2017-02-01

    Investigating the effects of the Equatorial Ionization Anomaly (EIA) ionosphere and space weather on Global Navigation Satellite Systems (GNSS) is crucial, and a key to the successful implementation of a GNSS augmentation system (SBAS) over the equatorial and low-latitude regions. A possible ionospheric vertical delay (GIVD, Grid Ionospheric Vertical Delay) broadcast at an Ionospheric Grid Point (IGP) and its confidence bound error (GIVE, Grid Ionospheric Vertical Error) are analyzed and compared with the ionospheric vertical delay estimated at a nearby user location over the West African sub-Saharan region. Since the African sub-Saharan ionosphere falls within the EIA region, which is characterized by post-sunset disturbances in the form of irregularities that, unlike at middle latitudes, are often stronger during geomagnetically quiet conditions, a reliable ionospheric threat model that caters for the nighttime ionospheric plasma irregularities is essential for future SBAS users. The study was carried out during the quietest and most disturbed geomagnetic conditions of October 2013. A specific low-latitude EGNOS-like algorithm, based on a single thin layer model, was employed to simulate the SBAS messages in the study. Our preliminary results indicate that the estimated GIVE detects and protects a potential SBAS user against sampled ionospheric plasma irregularities over the region, with a steep increase in GIVE up to the 'not monitored' level from after local sunset to post-midnight. This corresponds to the onset of the usual ionospheric plasma irregularities in the region. The results further confirm that the effects of geomagnetic storms on the ionosphere are not consistent in affecting GNSS applications over the region. Finally, this paper suggests further work to improve the ionospheric threat model and integrity activity, and thereby enhance the availability of the future SBAS over the African sub-Saharan region.
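
    For readers unfamiliar with how an SBAS user applies broadcast grid delays, the sketch below shows a standard bilinear interpolation of the four surrounding grid ionospheric vertical delays to a user's ionospheric pierce point; the grid spacing, coordinates and delay values are invented, and the GIVE bounding logic analyzed in the paper is not reproduced.

```python
import numpy as np

def interpolate_givd(lat, lon, grid_lats, grid_lons, givd):
    """Bilinearly interpolate gridded vertical delays (GIVD, in metres) to the
    ionospheric pierce point at (lat, lon). grid_lats/grid_lons are the 1-D
    coordinates of the IGP grid and givd is the corresponding 2-D delay array."""
    i = np.searchsorted(grid_lats, lat) - 1
    j = np.searchsorted(grid_lons, lon) - 1
    x = (lon - grid_lons[j]) / (grid_lons[j + 1] - grid_lons[j])
    y = (lat - grid_lats[i]) / (grid_lats[i + 1] - grid_lats[i])
    weights = np.array([(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y])
    corners = np.array([givd[i, j], givd[i, j + 1], givd[i + 1, j], givd[i + 1, j + 1]])
    return np.dot(weights, corners)

# Example: a 5x5-degree IGP grid over part of West Africa (delay values invented).
grid_lats = np.array([0.0, 5.0, 10.0, 15.0])
grid_lons = np.array([-10.0, -5.0, 0.0, 5.0])
givd = np.array([[3.1, 3.4, 3.8, 4.0],
                 [3.3, 3.6, 4.1, 4.4],
                 [3.0, 3.2, 3.7, 3.9],
                 [2.8, 3.0, 3.3, 3.5]])
print(interpolate_givd(6.5, -1.2, grid_lats, grid_lons, givd))
```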

  18. Weighted-density functionals for cavity formation and dispersion energies in continuum solvation models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundararaman, Ravishankar; Gunceler, Deniz; Arias, T. A.

    2014-10-07

    Continuum solvation models enable efficient first principles calculations of chemical reactions in solution, but require extensive parametrization and fitting for each solvent and class of solute systems. Here, we examine the assumptions of continuum solvation models in detail and replace empirical terms with physical models in order to construct a minimally-empirical solvation model. Specifically, we derive solvent radii from the nonlocal dielectric response of the solvent from ab initio calculations, construct a closed-form and parameter-free weighted-density approximation for the free energy of the cavity formation, and employ a pair-potential approximation for the dispersion energy. We show that the resulting model, with a single solvent-independent parameter, the electron density threshold (n{sub c}), and a single solvent-dependent parameter, the dispersion scale factor (s{sub 6}), reproduces solvation energies of organic molecules in water, chloroform, and carbon tetrachloride with RMS errors of 1.1, 0.6 and 0.5 kcal/mol, respectively. We additionally show that fitting the solvent-dependent s{sub 6} parameter to the solvation energy of a single non-polar molecule does not substantially increase these errors. Parametrization of this model for other solvents, therefore, requires minimal effort and is possible without extensive databases of experimental solvation free energies.

  19. Measurement of the Weak Mixing Angle in Moller Scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klejda, B.

    2005-01-28

    The weak mixing parameter, sin{sup 2} {theta}{sub w}, is one of the fundamental parameters of the Standard Model. Its tree-level value has been measured with high precision at energies near the Z{sup 0} pole; however, due to radiative corrections at the one-loop level, the value of sin{sup 2} {theta}{sub w} is expected to change with the interaction energy. As a result, a measurement of sin{sup 2} {theta}{sub w} at low energy (Q{sup 2} << m{sub Z}, where Q{sup 2} is the momentum transfer and m{sub Z} is the Z boson mass), provides a test of the Standard Model at the one-loop level, and a probe for new physics beyond the Standard Model. One way of obtaining sin{sup 2} {theta}{sub w} at low energy is from measuring the left-right, parity-violating asymmetry in electron-electron (Moeller) scattering: A{sub PV} = ({sigma}{sub R} - {sigma}{sub L})/({sigma}{sub R} + {sigma}{sub L}), where {sigma}{sub R} and {sigma}{sub L} are the cross sections for right- and left-handed incident electrons, respectively. The parity violating asymmetry is proportional to the pseudo-scalar weak neutral current coupling in Moeller scattering, g{sub ee}. At tree level g{sub ee} = (1/4 - sin{sup 2} {theta}{sub w}). A precision measurement of the parity-violating asymmetry in Moeller scattering was performed by Experiment E158 at the Stanford Linear Accelerator Center (SLAC). During the experiment, {approx}50 GeV longitudinally polarized electrons scattered off unpolarized atomic electrons in a liquid hydrogen target, corresponding to an average momentum transfer Q{sup 2} {approx} 0.03 (GeV/c){sup 2}. The tree-level prediction for A{sub PV} at such energy is {approx}300 ppb. However one-loop radiative corrections reduce its value by {approx}40%. This document reports the E158 results from the 2002 data collection period. The parity-violating asymmetry was found to be A{sub PV} = -160 {+-} 21 (stat.) {+-} 17 (syst.) ppb, which represents the first observation of a parity-violating asymmetry in Moeller scattering. This value corresponds to a weak mixing angle at Q{sup 2} = 0.026 (GeV/c){sup 2} of sin{sup 2} {theta}{sub w{ovr MS}} = 0.2379 {+-} 0.0016 (stat.) {+-} 0.0013 (syst.), which is -0.3 standard deviations away from the Standard Model prediction: sin{sup 2} {theta}{sub w{ovr MS}}{sup predicted} = 0.2385 {+-} 0.0006 (theory). The E158 measurement of sin{sup 2} {theta}{sub w} at a precision of {delta}(sin{sup 2} {theta}{sub w}) = 0.0020 provides new physics sensitivity at the TeV scale.

  20. A wave model test bed study for wave energy resource characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Neary, Vincent S.; Wang, Taiping

    This paper presents a test bed study conducted to evaluate best practices in wave modeling to characterize energy resources. The model test bed off the central Oregon Coast was selected because of the high wave energy and available measured data at the site. Two third-generation spectral wave models, SWAN and WWIII, were evaluated. A four-level nested-grid approach—from global to test bed scale—was employed. Model skills were assessed using a set of model performance metrics based on comparing six simulated wave resource parameters to observations from a wave buoy inside the test bed. Both WWIII and SWAN performed well at the test bed site and exhibited similar modeling skills. The ST4 package with WWIII, which represents better physics for wave growth and dissipation, outperformed ST2 physics and improved wave power density and significant wave height predictions. However, ST4 physics tended to overpredict the wave energy period. The newly developed ST6 physics did not improve the overall model skill for predicting the six wave resource parameters. Sensitivity analysis using different wave frequency and direction resolutions indicated the model results were not sensitive to spectral resolutions at the test bed site, likely due to the absence of complex bathymetric and geometric features.
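
    A brief sketch of the kind of model performance metrics used to score simulated against observed wave resource parameters (bias, root-mean-square error, scatter index and linear correlation); the arrays are placeholders for buoy observations and co-located model output, and these particular formulas are common choices rather than the exact metric set of the study.

```python
import numpy as np

def skill_metrics(model, obs):
    """Return bias, root-mean-square error, scatter index and correlation
    between co-located model and observed series (e.g. significant wave height)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    scatter_index = rmse / np.mean(obs)
    corr = np.corrcoef(model, obs)[0, 1]
    return {"bias": bias, "rmse": rmse, "si": scatter_index, "r": corr}

# Illustrative hourly significant wave height (m) from a buoy and a model run.
obs = np.array([2.1, 2.4, 2.8, 3.0, 2.7, 2.3])
mod = np.array([2.0, 2.5, 2.9, 3.2, 2.6, 2.2])
print(skill_metrics(mod, obs))
```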

  1. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
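
    To make the stimulus construction concrete, here is a hedged sketch of rendering an image through a pixelizing grid of the kind described: the image is sampled onto an n-by-n array of dots, gray levels are quantized, and a fraction of dots is randomly dropped. Dot shape, gap rendering and the experiment's exact parameter values are not reproduced.

```python
import numpy as np

def pixelize(image, n_dots=16, gray_levels=8, dropout=0.3, seed=0):
    """Simulate crude prosthetic vision: sample an image onto an n_dots x n_dots
    grid of phosphenes with quantized gray levels and random dot dropout."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    ys = (np.arange(n_dots) + 0.5) * h / n_dots
    xs = (np.arange(n_dots) + 0.5) * w / n_dots
    dots = img[ys.astype(int)[:, None], xs.astype(int)[None, :]]
    # Quantize to the requested number of gray levels.
    dots = np.round(dots / 255.0 * (gray_levels - 1)) / (gray_levels - 1) * 255.0
    # Randomly drop a fraction of the dots (set them to black).
    rng = np.random.default_rng(seed)
    dots[rng.random(dots.shape) < dropout] = 0.0
    return dots

# Example with a synthetic 256x256 gradient standing in for a face image.
image = np.tile(np.linspace(0, 255, 256), (256, 1))
print(pixelize(image).shape)   # (16, 16) phosphene grid
```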

  2. BayeSED: A General Approach to Fitting the Spectral Energy Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Han, Yunkun; Han, Zhanwen

    2014-11-01

    We present a newly developed version of BayeSED, a general Bayesian approach to the spectral energy distribution (SED) fitting of galaxies. The new BayeSED code has been systematically tested on a mock sample of galaxies. The comparison between the estimated and input values of the parameters shows that BayeSED can recover the physical parameters of galaxies reasonably well. We then applied BayeSED to interpret the SEDs of a large Ks-selected sample of galaxies in the COSMOS/UltraVISTA field with stellar population synthesis models. Using the new BayeSED code, a Bayesian model comparison of stellar population synthesis models has been performed for the first time. We found that the 2003 model by Bruzual & Charlot, statistically speaking, has greater Bayesian evidence than the 2005 model by Maraston for the Ks-selected sample. In addition, while setting the stellar metallicity as a free parameter obviously increases the Bayesian evidence of both models, varying the initial mass function has a notable effect only on the Maraston model. Meanwhile, the physical parameters estimated with BayeSED are found to be generally consistent with those obtained using the popular grid-based FAST code, while the former parameters exhibit more natural distributions. Based on the estimated physical parameters of the galaxies in the sample, we qualitatively classified the galaxies in the sample into five populations that may represent galaxies at different evolution stages or in different environments. We conclude that BayeSED could be a reliable and powerful tool for investigating the formation and evolution of galaxies from the rich multi-wavelength observations currently available. A binary version of the BayeSED code parallelized with Message Passing Interface is publicly available at https://bitbucket.org/hanyk/bayesed.

  3. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-04-01

    Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of the convective core overshooting. Our main aim is to point out the biases in the results caused by not accounting for some sources of uncertainty. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations with uncertainties of a few thousandths of a solar mass are required to obtain reliable determinations of stellar parameters, as mass errors larger than approximately 1% lead to estimates that are not only less precise but also biased. Moreover, we show that a fit obtained with a grid of models computed at a fixed ΔY/ΔZ - thus neglecting the current uncertainty in the initial helium content of the system - can provide severely biased age and overshooting estimates. The possibility of independent overshooting efficiencies for the two stars of the system is also explored. Conclusions: The present analysis confirms that constraining the core overshooting parameter by means of binary systems is a very difficult task that requires an observational precision still rarely achieved and a robust statistical treatment of the error sources.

  4. Crystal structure and physical properties of new Ca{sub 2}TGe{sub 3} (T = Pd and Pt) germanides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimczuk, T., E-mail: tomasz.klimczuk@pg.gda.pl; Xie, Weiwei; Winiarski, M.J.

    The crystallographic, electronic transport and thermal properties of Ca{sub 2}PdGe{sub 3} and Ca{sub 2}PtGe{sub 3} are reported. The compounds crystallize in an ordered variant of the AlB{sub 2} crystal structure, in space group P6/mmm, with the lattice parameters a = 8.4876(4) Å/8.4503(5) Å and c = 4.1911(3) Å/4.2302(3) Å for Ca{sub 2}PdGe{sub 3} and Ca{sub 2}PtGe{sub 3}, respectively. The resistivity data exhibit metallic behavior with residual-resistivity-ratios (RRR) of 13 for Ca{sub 2}PdGe{sub 3} and 6.5 for Ca{sub 2}PtGe{sub 3}. No superconducting transition is observed down to 0.4 K. Specific heat studies reveal similar values of the Debye temperatures and Sommerfeld coefficients: Θ{sub D} = 298 K, γ = 4.1 mJ mol{sup −1} K{sup −2} and Θ{sub D} = 305 K, γ = 3.2 mJ mol{sup −1} K{sup −2} for Ca{sub 2}PdGe{sub 3} and Ca{sub 2}PtGe{sub 3}, respectively. The low value of γ is in agreement with the electronic structure calculations.

  5. Climatic and landscape controls on travel time distributions across Europe

    NASA Astrophysics Data System (ADS)

    Kumar, Rohini; Rao, Suresh; Hesse, Falk; Borchardt, Dietrich; Fleckenstein, Jan; Jawitz, James; Musolff, Andreas; Rakovec, Oldrich; Samaniego, Luis; Yang, Soohyun; Zink, Matthias; Attinger, Sabine

    2017-04-01

    Travel time distributions (TTDs) are fundamental descriptors to characterize the functioning of storage, mixing and release of water and solutes in a river basin. Identifying the relative importance (and controls) of climate and landscape attributes on TTDs is fundamental to improve our understanding of the underlying mechanisms controlling the spatial heterogeneity of TTDs and their moments (e.g., mean TT). Studies aimed at elucidating such controls have focused on either theoretical developments to gain (physical) insights using mostly synthetic datasets or empirical relationships using limited datasets from experimental sites. A study painting a general picture of emerging controls at a continental scale is still lacking. In this study, we make use of spatially resolved hydrologic fluxes and states generated through an observationally driven, mesoscale Hydrologic Model (mHM; www.ufz.de/mhm) to comprehensively characterize the dominant controls of climate and landscape attributes on TTDs in the vadose zone across the entire European region. mHM uses a novel Multiscale Parameter Regionalization (MPR; Samaniego et al., 2010 and Kumar et al., 2013) scheme that encapsulates fine-scale landscape attributes (e.g., topography, soil, and vegetation characteristics) to account for the sub-grid variability in model parameterization. The model was established at 25 km spatial resolution to simulate the daily gridded fluxes and states over Europe for the period 1955-2015. We utilized recent developments in TTD theory (e.g., Botter et al., 2010, Harman et al., 2011) to characterize the stationary and non-stationary behavior of water particles transported through the vadose zone at every grid cell. Our results suggest a complex set of interactions between climate and landscape properties controlling the spatial heterogeneity of the mean travel time (TT). The spatial variability in the mean TT across the Pan-EU generally follows the climatic gradient, with lower values in humid regions and higher values in semi-arid or drier regions. The results signify that a landscape attribute such as plant-available soil-water-storage capacity, when expressed as a dimensionless number that also includes climate attributes such as average rain depth and aridity index, forms a potentially useful predictor for explaining the spatial heterogeneity of mean TTs. Finally, the study also highlights the time-varying behavior of TTDs and discusses the seasonal variation in mean TTs across Europe.

  6. Automated airplane surface generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, R.E.; Cordero, Y.; Jones, W.

    1996-12-31

    An efficient methodology and software are presented for defining a class of airplane configurations. A small set of engineering design parameters and grid control parameters govern the process. The general airplane configuration has wing, fuselage, vertical tail, horizontal tail, and canard components. Wing, canard, and tail surface grids are manifested by solving a fourth-order partial differential equation subject to Dirichlet and Neumann boundary conditions. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage is described by an algebraic function with four design parameters. The computed surface grids are suitable for a wide range of Computational Fluid Dynamics simulation and configuration optimizations. Both batch and interactive software are discussed for applying the methodology.

  7. Design and implementation of a 3D-MR/CT geometric image distortion phantom/analysis system for stereotactic radiosurgery.

    PubMed

    Damyanovich, A Z; Rieker, M; Zhang, B; Bissonnette, J-P; Jaffray, D A

    2018-03-27

    The design, construction and application of a multimodality, 3D magnetic resonance/computed tomography (MR/CT) image distortion phantom and analysis system for stereotactic radiosurgery (SRS) is presented. The phantom is characterized by (1) a 1 × 1 × 1 (cm)3 MRI/CT-visible 3D-Cartesian grid; (2) 2002 grid vertices that are 3D-intersections of MR-/CT-visible 'lines' in all three orthogonal planes; (3) a 3D-grid that is MR-signal positive/CT-signal negative; (4) a vertex distribution sufficiently 'dense' to characterize geometrical parameters properly, and (5) a grid/vertex resolution consistent with SRS localization accuracy. When positioned correctly, successive 3D-vertex planes along any orthogonal axis of the phantom appear as 1 × 1 (cm)2-2D grids, whereas between vertex planes, images are defined by 1 × 1 (cm)2-2D arrays of signal points. Image distortion is evaluated using a centroid algorithm that automatically identifies the center of each 3D-intersection and then calculates the deviations dx, dy, dz and dr for each vertex point; the results are presented as a color-coded 2D or 3D distribution of deviations. The phantom components and 3D-grid are machined to sub-millimeter accuracy, making the device uniquely suited to SRS applications; as such, we present it here in a form adapted for use with a Leksell stereotactic frame. Imaging reproducibility was assessed via repeated phantom imaging across ten back-to-back scans; 80%-90% of the differences in vertex deviations dx, dy, dz and dr between successive 3 T MRI scans were found to be ⩽0.05 mm for both axial and coronal acquisitions, and over >95% of the differences were observed to be ⩽0.05 mm for repeated CT scans, clearly demonstrating excellent reproducibility. Applications of the 3D-phantom/analysis system are presented, using a 32-month time-course assessment of image distortion/gradient stability and statistical control chart for 1.5 T and 3 T GE TwinSpeed MRI systems.
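
    A simplified sketch of the centroid-based distortion analysis described above, applied to a single 2-D slice: bright grid-intersection markers are thresholded and labelled, their centroids are computed, and each centroid is compared with the nearest vertex of the nominal 1 cm lattice. The threshold, pixel size and synthetic test image are illustrative; the actual system works on the full 3-D vertex set.

```python
import numpy as np
from scipy import ndimage

def vertex_deviations(slice_2d, pixel_mm=1.0, grid_mm=10.0, threshold=0.5):
    """Find bright grid-intersection markers in a 2-D image slice and return
    their deviations (dx, dy and dr, in mm) from the nominal regular lattice."""
    mask = slice_2d > threshold * slice_2d.max()
    labels, n = ndimage.label(mask)
    centroids = np.array(ndimage.center_of_mass(slice_2d, labels, range(1, n + 1)))
    measured_mm = centroids * pixel_mm
    nominal_mm = np.round(measured_mm / grid_mm) * grid_mm   # nearest lattice vertex
    d = measured_mm - nominal_mm
    dr = np.hypot(d[:, 0], d[:, 1])
    return d, dr

# Synthetic slice: bright 3x3-pixel markers every 10 pixels (1 mm pixels).
img = np.zeros((101, 101))
for cy in range(10, 100, 10):
    for cx in range(10, 100, 10):
        img[cy - 1:cy + 2, cx - 1:cx + 2] = 1.0
deviations, dr = vertex_deviations(img)
print(dr.max())   # ~0 mm for the undistorted synthetic phantom
```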

  8. Design and implementation of a 3D-MR/CT geometric image distortion phantom/analysis system for stereotactic radiosurgery

    NASA Astrophysics Data System (ADS)

    Damyanovich, A. Z.; Rieker, M.; Zhang, B.; Bissonnette, J.-P.; Jaffray, D. A.

    2018-04-01

    The design, construction and application of a multimodality, 3D magnetic resonance/computed tomography (MR/CT) image distortion phantom and analysis system for stereotactic radiosurgery (SRS) is presented. The phantom is characterized by (1) a 1 × 1 × 1 (cm)3 MRI/CT-visible 3D-Cartesian grid; (2) 2002 grid vertices that are 3D-intersections of MR-/CT-visible ‘lines’ in all three orthogonal planes; (3) a 3D-grid that is MR-signal positive/CT-signal negative; (4) a vertex distribution sufficiently ‘dense’ to characterize geometrical parameters properly, and (5) a grid/vertex resolution consistent with SRS localization accuracy. When positioned correctly, successive 3D-vertex planes along any orthogonal axis of the phantom appear as 1 × 1 (cm)2-2D grids, whereas between vertex planes, images are defined by 1 × 1 (cm)2-2D arrays of signal points. Image distortion is evaluated using a centroid algorithm that automatically identifies the center of each 3D-intersection and then calculates the deviations dx, dy, dz and dr for each vertex point; the results are presented as a color-coded 2D or 3D distribution of deviations. The phantom components and 3D-grid are machined to sub-millimeter accuracy, making the device uniquely suited to SRS applications; as such, we present it here in a form adapted for use with a Leksell stereotactic frame. Imaging reproducibility was assessed via repeated phantom imaging across ten back-to-back scans; 80%–90% of the differences in vertex deviations dx, dy, dz and dr between successive 3 T MRI scans were found to be  ⩽0.05 mm for both axial and coronal acquisitions, and over  >95% of the differences were observed to be  ⩽0.05 mm for repeated CT scans, clearly demonstrating excellent reproducibility. Applications of the 3D-phantom/analysis system are presented, using a 32-month time-course assessment of image distortion/gradient stability and statistical control chart for 1.5 T and 3 T GE TwinSpeed MRI systems.

  9. Transverse and Quantum Effects in Light Control by Light; (A) Parallel Beams: Pump Dynamics for Three Level Superfluorescence; and (B) Counterflow Beams: An Algorithm for Transverse, Full Transient Effects in Optical Bi-Stability in a Fabry-Perot Cavity.

    DTIC Science & Technology

    1983-01-01

    Grid resolution and computational economy are achieved simultaneously by redistributing the computational Eulerian grid points according to the physical requirements of the nonlinear problem; the redistribution is implemented using a two-dimensional time-dependent finite-difference scheme.

  10. Ring-like reliable PON planning with physical constraints for a smart grid

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Gu, Rentao; Ji, Yuefeng

    2016-01-01

    Due to the high reliability requirements in the communication networks of a smart grid, a ring-like reliable PON is an ideal choice to carry power distribution information. Economical network planning is also very important for the smart grid communication infrastructure. Although the ring-like reliable PON has been widely used in real applications, as far as we know, little research has been done on the network optimization subject of the ring-like reliable PON. Most PON planning research studies only consider a star-like topology or cascaded PON network, which barely guarantees the reliability requirements of the smart grid. In this paper, we mainly investigate the economical network planning problem for the ring-like reliable PON of the smart grid. To address this issue, we built a mathematical model for the planning problem of the ring-like reliable PON, and the objective was to minimize the total deployment costs under physical constraints. The model is simplified such that all of the nodes have the same properties, except the OLT, because each potential splitter site can be located in the same ONU position in power communication networks. The simplified model is used to construct an optimal main tree topology in the complete graph and a backup-protected tree topology in the residual graph. An efficient heuristic algorithm, called the Constraints and Minimal Weight Oriented Fast Searching Algorithm (CMW-FSA), is proposed. In CMW-FSA, a feasible solution can be obtained directly with oriented constraints and a few recursive search processes. From the simulation results, the proposed planning model and CMW-FSA are verified to be accurate (the error rates are less than 0.4%) and effective compared with the accurate solution (CAESA), especially in small and sparse scenarios. The CMW-FSA significantly reduces the computation time compared with the CAESA. The time complexity of the CMW-FSA is acceptable and is calculated as T(n) = O(n³). After evaluating the effects of the parameters of the two PON systems, the total planning costs of each scenario show a general declining trend and reach a threshold as the respective maximal transmission distances and maximal time delays increase.

  11. A 3-D chimera grid embedding technique

    NASA Technical Reports Server (NTRS)

    Benek, J. A.; Buning, P. G.; Steger, J. L.

    1985-01-01

    A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The purpose of the work on this contract is to study the suitability of Zn/sub 3/P/sub 2/ as a photovoltaic material for large scale terrestrial use. Zn/sub 3/P/sub 2/ was chosen for study because those of its physical parameters which could be gleaned from a rather sparse literature match fairly well the criteria for optimum terrestrial photovoltaic materials. The main emphasis in the quarter has been on material preparation. Materials synthesis has been successful, with a fair number of usable single crystals produced along with the bulk material. In addition, thin films have been produced in a preliminary way on various substrates. Initial electrical and optical studies have been carried out on both single crystals and films, but the results of these studies are of a preliminary nature only.

  13. Greedy Sampling and Incremental Surrogate Model-Based Tailoring of Aeroservoelastic Model Database for Flexible Aircraft

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Pant, Kapil; Brenner, Martin J.; Ouellette, Jeffrey A.

    2018-01-01

    This paper presents a data analysis and modeling framework to tailor and develop a linear parameter-varying (LPV) aeroservoelastic (ASE) model database for flexible aircraft in a broad 2D flight parameter space. The Kriging surrogate model is constructed using ASE models at a fraction of grid points within the original model database, and then the ASE model at any flight condition can be obtained simply through surrogate model interpolation. The greedy sampling algorithm is developed to select the next sample point that carries the worst relative error between the surrogate model prediction and the benchmark model in the frequency domain among all input-output channels. The process is iterated to incrementally improve surrogate model accuracy till a pre-determined tolerance or iteration budget is met. The methodology is applied to the ASE model database of a flexible aircraft currently being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies indicate that the proposed method can reduce the number of models in the original database by 67%. Even so, the ASE models obtained through Kriging interpolation match the models in the original database constructed directly from the physics-based tool with the worst relative error far below 1%. The interpolated ASE model exhibits continuously-varying gains along a set of prescribed flight conditions. More importantly, the selected grid points are distributed non-uniformly in the parameter space, a) capturing the distinctly different dynamic behavior and its dependence on flight parameters, and b) reiterating the need and utility for adaptive space sampling techniques for ASE model database compaction. The present framework is directly extendible to high-dimensional flight parameter space, and can be used to guide the ASE model development, model order reduction, robust control synthesis and novel vehicle design of flexible aircraft.
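
    A minimal, hedged sketch of the greedy surrogate-refinement loop described above, using scikit-learn's Gaussian-process (Kriging) regressor on a scalar toy response over a 2-D flight-parameter grid; the real framework interpolates full LPV state-space models and scores frequency-domain error over all input-output channels, which is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy "benchmark model": a scalar response over a (Mach, dynamic pressure) grid.
mach, qbar = np.meshgrid(np.linspace(0.3, 0.9, 13), np.linspace(10, 60, 11))
grid = np.column_stack([mach.ravel(), qbar.ravel()])
benchmark = np.sin(3 * grid[:, 0]) * np.log(grid[:, 1])

# Start from the four corners of the flight envelope, then greedily add points.
selected = [0, 12, grid.shape[0] - 13, grid.shape[0] - 1]
tolerance = 1e-2
for _ in range(50):
    gp = GaussianProcessRegressor(normalize_y=True).fit(grid[selected], benchmark[selected])
    prediction = gp.predict(grid)
    rel_error = np.abs(prediction - benchmark) / (np.abs(benchmark) + 1e-12)
    worst = int(np.argmax(rel_error))
    if rel_error[worst] < tolerance:      # surrogate is accurate enough everywhere
        break
    selected.append(worst)                # add the worst-error grid point

print(f"kept {len(selected)} of {grid.shape[0]} grid points")
```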

  14. Evaluating the performance of a 50 kilowatt grid-connected photovoltaic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, B.H.; Muknahallipatn, S.; Cupal, J.J.

    A 50-kilowatt solar photovoltaic (PV) system was built at the University of Wyoming (UW) in 1996. The system comprises three sub-systems. The first sub-system, a 10 kW roof-integrated system, is located on the roof of the Engineering Building. The second sub-system is a 5 kW rack-mounted, ballasted PV system located on another part of the roof. The third sub-system is a 35 kW shade structure and is located adjacent to the university's football stadium. The three sub-systems differ in their design strategy since each is being used for research and education at the university. Each sub-system, being located some distance away from the others, supplies a different part of the campus grid. Efforts are continuing to set up a central monitoring system, which will receive data remotely from all locations. A part of this monitoring system is complete. The system as configured provides a great deal of flexibility, which is in turn demanded by the variety of signal types measured at each installation. Each installation requires measurement of multiple dc and ac voltages and currents and one slowly varying voltage (proportional to solar insolation). The simultaneous sampling, fast sample rate, and lowpass signal conditioning allow for accurate measurement of power factor and total harmonic distortion of the inverter outputs. Panel and inverter efficiencies can be determined via simultaneous DC and AC measurements. These performance monitors provide the essential data for characterization of the PV effect at the grid input, and enable the use of intelligent power factor correction and harmonic filtering. Monitoring of the system shows that the total harmonic distortion present in the ac power output is at or below the acceptable limit as recommended by IEEE 519-1992. The harmonic distortion worsens when the ac power reaches more than 3.8 kW. A number of reliability problems with PV modules and inverters have delayed full functionality of the system.
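
    As background for the power-quality measurements described, here is a small sketch of computing total harmonic distortion and power factor from simultaneously sampled voltage and current waveforms using an FFT; the synthetic 60 Hz waveforms, sample rate and harmonic content are invented for illustration.

```python
import numpy as np

fs, f0, cycles = 7680.0, 60.0, 10            # sample rate (Hz), fundamental, duration
t = np.arange(int(fs / f0 * cycles)) / fs

# Synthetic inverter output: current with some 3rd/5th harmonic content and a phase lag.
voltage = 170.0 * np.sin(2 * np.pi * f0 * t)
current = (10.0 * np.sin(2 * np.pi * f0 * t - 0.2)
           + 0.4 * np.sin(2 * np.pi * 3 * f0 * t)
           + 0.3 * np.sin(2 * np.pi * 5 * f0 * t))

def thd(signal):
    """Total harmonic distortion: RMS of harmonics 2..N over the fundamental amplitude."""
    spectrum = np.abs(np.fft.rfft(signal))
    fundamental_bin = int(round(f0 * len(signal) / fs))
    harmonics = spectrum[2 * fundamental_bin::fundamental_bin]
    return np.sqrt(np.sum(harmonics ** 2)) / spectrum[fundamental_bin]

# True (total) power factor = average power / (Vrms * Irms).
power_factor = np.mean(voltage * current) / (
    np.sqrt(np.mean(voltage ** 2)) * np.sqrt(np.mean(current ** 2)))

print(f"current THD = {100 * thd(current):.1f} %, power factor = {power_factor:.3f}")
```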

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tellarini, Matteo; Ross, Ashley J.; Wands, David

    Measurements of the non-Gaussianity of the primordial density field have the power to considerably improve our understanding of the physics of inflation. Indeed, if we can increase the precision of current measurements by an order of magnitude, a null-detection would rule out many classes of scenarios for generating primordial fluctuations. Large-scale galaxy redshift surveys represent experiments that hold the promise to realise this goal. Thus, we model the galaxy bispectrum and forecast the accuracy with which it will probe the parameter f{sub NL}, which represents the degree of primordial local-type non-Gaussianity. Specifically, we address the problem of modelling redshift space distortions (RSD) in the tree-level galaxy bispectrum including f{sub NL}. We find novel contributions associated with RSD, with the characteristic large-scale amplification induced by local-type non-Gaussianity. These RSD effects must be properly accounted for in order to obtain un-biased measurements of f{sub NL} from the galaxy bispectrum. We propose an analytic template for the monopole which can be used to fit against data on large scales, extending models used in the recent measurements. Finally, we perform idealised forecasts on σ{sub f{sub NL}}, the accuracy of the determination of the local non-linear parameter f{sub NL}, from measurements of the galaxy bispectrum. Our findings suggest that current surveys can in principle provide f{sub NL} constraints competitive with Planck, and future surveys could improve them further.

  16. Application of high performance computing for studying cyclic variability in dilute internal combustion engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINNEY, Charles E A; Edwards, Kevin Dean; Stoyanov, Miroslav K

    2015-01-01

    Combustion instabilities in dilute internal combustion engines are manifest in cyclic variability (CV) in engine performance measures such as integrated heat release or shaft work. Understanding the factors leading to CV is important in model-based control, especially with high dilution, where experimental studies have demonstrated that deterministic effects can become more prominent. Observation of enough consecutive engine cycles for significant statistical analysis is standard in experimental studies but is largely wanting in numerical simulations because of the computational time required to compute hundreds or thousands of consecutive cycles. We have proposed and begun implementation of an alternative approach to allow rapid simulation of long series of engine dynamics based on a low-dimensional mapping of ensembles of single-cycle simulations which map input parameters to output engine performance. This paper details the use of Titan at the Oak Ridge Leadership Computing Facility to investigate CV in a gasoline direct-injected spark-ignited engine with a moderately high rate of dilution achieved through external exhaust gas recirculation. The CONVERGE CFD software was used to perform single-cycle simulations with imposed variations of operating parameters and boundary conditions selected according to a sparse grid sampling of the parameter space. Using an uncertainty quantification technique, the sampling scheme is chosen similar to a design of experiments grid but uses functions designed to minimize the number of samples required to achieve a desired degree of accuracy. The simulations map input parameters to output metrics of engine performance for a single cycle, and by mapping over a large parameter space, results can be interpolated from within that space. This interpolation scheme forms the basis for a low-dimensional metamodel which can be used to mimic the dynamical behavior of corresponding high-dimensional simulations. Simulations of high-EGR spark-ignition combustion cycles within a parametric sampling grid were performed and analyzed statistically, and sensitivities of the physical factors leading to high CV are presented. With these results, the prospect of producing low-dimensional metamodels to describe engine dynamics at any point in the parameter space will be discussed. Additionally, modifications to the methodology to account for nondeterministic effects in the numerical solution environment are proposed.
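
    A compact sketch of the overall workflow, sampling an engine operating-parameter space and fitting a low-dimensional metamodel to the resulting single-cycle outputs; Latin hypercube sampling and a radial-basis-function interpolant stand in for the sparse-grid and uncertainty-quantification machinery used in the study, and the toy response replaces the CONVERGE cycle simulations.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

# Three illustrative cycle inputs: EGR fraction, intake temperature (K), spark timing (deg).
lower = np.array([0.15, 300.0, -30.0])
upper = np.array([0.30, 340.0, -10.0])

sampler = qmc.LatinHypercube(d=3, seed=0)
samples = qmc.scale(sampler.random(n=120), lower, upper)

def cycle_simulation(x):
    """Stand-in for one single-cycle CFD run: returns an integrated heat release metric."""
    egr, tint, spark = x
    return (1.0 - 3.0 * (egr - 0.15)) * np.exp(-((spark + 20.0) / 15.0) ** 2) * tint / 320.0

heat_release = np.array([cycle_simulation(x) for x in samples])

# Low-dimensional metamodel mapping inputs to the engine performance metric.
metamodel = RBFInterpolator(samples, heat_release)
query = np.array([[0.22, 315.0, -18.0]])
print(metamodel(query))
```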

  17. RF Models for Plasma-Surface Interactions in VSim

    NASA Astrophysics Data System (ADS)

    Jenkins, Thomas G.; Smithe, D. N.; Pankin, A. Y.; Roark, C. M.; Zhou, C. D.; Stoltz, P. H.; Kruger, S. E.

    2014-10-01

    An overview of ongoing enhancements to the Plasma Discharge (PD) module of Tech-X's VSim software tool is presented. A sub-grid kinetic sheath model, developed for the accurate computation of sheath potentials near metal and dielectric-coated walls, enables the physical effects of DC and RF sheath physics to be included in macroscopic-scale plasma simulations that need not explicitly resolve sheath scale lengths. Sheath potential evolution, together with particle behavior near the sheath, can thus be simulated in complex geometries. Generalizations of the model to include sputtering, secondary electron emission, and effects from multiple ion species and background magnetic fields are summarized; related numerical results are also presented. In addition, improved tools for plasma chemistry and IEDF/EEDF visualization and modeling are discussed, as well as our initial efforts toward the development of hybrid fluid/kinetic transition capabilities within VSim. Ultimately, we aim to establish VSimPD as a robust, efficient computational tool for modeling industrial plasma processes. Supported by US DoE SBIR-I/II Award DE-SC0009501.

  18. Gasdynamic model of turbulent combustion in an explosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhl, A.L.; Ferguson, R.E.; Chien, K.Y.

    1994-08-31

    Proposed here is a gasdynamic model of turbulent combustion in explosions. It is used to investigate turbulent mixing aspects of afterburning found in TNT charges detonated in air. Evolution of the turbulent velocity field was calculated by a high-order Godunov solution of the gasdynamic equations. Adaptive Mesh Refinement (AMR) was used to follow convective-mixing processes on the computational grid. Combustion was then taken into account by a simplified sub-grid model, demonstrating that it was controlled by turbulent mixing. The rate of fuel consumption decayed inversely with time, and was shown to be insensitive to grid resolution.

  19. User's manual for the HYPGEN hyperbolic grid generator and the HGUI graphical user interface

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Chiu, Ing-Tsau; Buning, Pieter G.

    1993-01-01

    The HYPGEN program is used to generate a 3-D volume grid over a user-supplied single-block surface grid. This is accomplished by solving the 3-D hyperbolic grid generation equations consisting of two orthogonality relations and one cell volume constraint. In this user manual, the required input files and parameters and output files are described. Guidelines on how to select the input parameters are given. Illustrated examples are provided showing a variety of topologies and geometries that can be treated. HYPGEN can be used in stand-alone mode as a batch program or it can be called from within a graphical user interface HGUI that runs on Silicon Graphics workstations. This user manual provides a description of the menus, buttons, sliders, and typein fields in HGUI for users to enter the parameters needed to run HYPGEN. Instructions are given on how to configure the interface to allow HYPGEN to run either locally or on a faster remote machine through the use of shell scripts on UNIX operating systems. The volume grid generated is copied back to the local machine for visualization using a built-in hook to PLOT3D.

  20. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mardirossian, Narbe; Head-Gordon, Martin, E-mail: mhg@cchem.berkeley.edu; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-02-21

    A meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional is presented. The functional form is selected from more than 10{sup 10} choices carved out of a functional space of almost 10{sup 40} possibilities. Raw data come from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  1. Mapping the genome of meta-generalized gradient approximation density functionals: The search for B97M-V

    DOE PAGES

    Mardirossian, Narbe; Head-Gordon, Martin

    2015-02-20

    We present a meta-generalized gradient approximation density functional paired with the VV10 nonlocal correlation functional. The functional form is selected from more than 10{sup 10} choices carved out of a functional space of almost 10{sup 40} possibilities. This raw data comes from training a vast number of candidate functional forms on a comprehensive training set of 1095 data points and testing the resulting fits on a comprehensive primary test set of 1153 data points. Functional forms are ranked based on their ability to reproduce the data in both the training and primary test sets with minimum empiricism, and filtered based on a set of physical constraints and an often-overlooked condition of satisfactory numerical precision with medium-sized integration grids. The resulting optimal functional form has 4 linear exchange parameters, 4 linear same-spin correlation parameters, and 4 linear opposite-spin correlation parameters, for a total of 12 fitted parameters. The final density functional, B97M-V, is further assessed on a secondary test set of 212 data points, applied to several large systems including the coronene dimer and water clusters, tested for the accurate prediction of intramolecular and intermolecular geometries, verified to have a readily attainable basis set limit, and checked for grid sensitivity. Compared to existing density functionals, B97M-V is remarkably accurate for non-bonded interactions and very satisfactory for thermochemical quantities such as atomization energies, but inherits the demonstrable limitations of existing local density functionals for barrier heights.

  2. Measurement of the Michel parameter {rho} in normal muon decay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tu, X.; Amann, J.F.; Bolton, R.D.

    1995-07-10

    A new measurement of the Michel parameter {rho} in normal muon decay has been performed using the MEGA positron spectrometer. Over 500 million triggers were recorded and the data are currently being analyzed. The previous result has a precision on the value of {rho} of {plus_minus}0.0026. The present experiment expects to improve the precision to {plus_minus}0.0008 or better. The improved result will be a precise test of the standard model of electroweak interactions for a purely leptonic process. It also will provide a better constraint on the W{sub R}-W{sub L} mixing angle in the left-right symmetric models. Copyright 1995 American Institute of Physics.

  3. New Global Calculation of Nuclear Masses and Fission Barriers for Astrophysical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moeller, P.; Sierk, A. J.; Bengtsson, R.

    The FRDM(1992) mass model [1] has an accuracy of 0.669 MeV in the region where its parameters were determined. For the 529 masses that have been measured since, its accuracy is 0.46 MeV, which is encouraging for applications far from stability in astrophysics. We are developing an improved mass model, the FRDM(2008). The improvements in the calculations with respect to the FRDM(1992) are in two main areas. (1) The macroscopic model parameters are better optimized. By simulation (adjusting to a limited set of now known nuclei) we can show that this actually makes the results more reliable in new regions of nuclei. (2) The ground-state deformation parameters are more accurately calculated. We minimize the energy in a four-dimensional deformation space ({epsilon}{sub 2}, {epsilon}{sub 3}, {epsilon}{sub 4}, {epsilon}{sub 6}) using a grid interval of 0.01 in all 4 deformation variables. The (non-finalized) FRDM(2008-a) has an accuracy of 0.596 MeV with respect to the 2003 Audi mass evaluation before triaxial shape degrees of freedom are included (in progress). When triaxiality effects are incorporated, preliminary results indicate that the model accuracy will improve further, to about 0.586 MeV. We also discuss very large-scale fission-barrier calculations in the related FRLDM(2002) model, which has been shown to reproduce very satisfactorily known fission properties, for example barrier heights from {sup 70}Se to the heaviest elements, multiple fission modes in the Ra region, asymmetry of mass division in fission and the triple-humped structure found in light actinides. In the superheavy region we find barriers consistent with the observed half-lives. We have completed production calculations and obtain barrier heights for 5254 nuclei heavier than A = 170 for all nuclei between the proton and neutron drip lines. The energy is calculated for 5009325 different shapes for each nucleus and the optimum barrier between ground state and separated fragments is determined by use of an ''immersion'' technique.

  4. Algorithms for the automatic generation of 2-D structured multi-block grids

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.

    1995-01-01

    Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.
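
    Since each resulting block is meshed by transfinite interpolation, a minimal Coons-patch sketch of that step is given below (the elliptic smoothing pass is omitted, and the boundary-curve arrays are illustrative).

      import numpy as np

      def transfinite_interpolation(bottom, top, left, right):
          """Mesh one block from its four boundary curves (Coons patch).
          bottom/top: (ni, 2) arrays; left/right: (nj, 2) arrays; the four
          corner points must match where the curves meet."""
          ni, nj = len(bottom), len(left)
          xi = np.linspace(0.0, 1.0, ni)[:, None, None]
          eta = np.linspace(0.0, 1.0, nj)[None, :, None]
          B, T = bottom[:, None, :], top[:, None, :]
          L, R = left[None, :, :], right[None, :, :]
          # Blend opposite boundaries, then subtract the bilinear corner term.
          return ((1 - eta) * B + eta * T + (1 - xi) * L + xi * R
                  - ((1 - xi) * (1 - eta) * bottom[0] + xi * (1 - eta) * bottom[-1]
                     + (1 - xi) * eta * top[0] + xi * eta * top[-1]))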

  5. Energy levels and optical properties of neodymium-doped barium fluorapatite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stefanos, Sennay M.; Bonner, Carl E. Jr.; Meegoda, Chandana

    Energy levels of the 4f³ electronic configuration of Nd³⁺ in barium fluorapatite, Ba₅(PO₄)₃F (B-FAP), have been determined from polarized absorption and fluorescence spectra using crystals at 8 K. Experimental energy-level assignments were made initially by comparing the crystal spectra with those previously reported for Nd³⁺ in strontium fluorapatite and fluorapatite. The initial crystal-field parameters were calculated by using lattice summation techniques. The crystal-field parameters were varied to obtain a best fit between experimental and theoretical energies, and the final values give a root-mean-square deviation of 7.1 cm⁻¹. The odd-fold crystal-field components are used to calculate the emission intensities and lifetimes of the Nd³⁺ ions in B-FAP. These calculations yield results in good agreement with the experimental measurements of the absorption and emission cross sections and lifetimes. © 2000 American Institute of Physics.

  6. Constraints on B and Higgs physics in minimal low energy supersymmetric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carena, Marcela; /Fermilab; Menon, A.

    2006-03-01

    We study the implications of minimal flavor violating low energy supersymmetry scenarios for the search of new physics in the B and Higgs sectors at the Tevatron collider and the LHC. We show that the already stringent Tevatron bound on the decay rate B_s → μ⁺μ⁻ sets strong constraints on the possibility of generating large corrections to the mass difference ΔM_s of the B_s eigenstates. We also show that the B_s → μ⁺μ⁻ bound together with the constraint on the branching ratio of the rare decay b → sγ has strong implications for the search of light, non-standard Higgs bosons at hadron colliders. In doing this, we demonstrate that the former expressions derived for the analysis of the double penguin contributions in the Kaon sector need to be corrected by additional terms for a realistic analysis of these effects. We also study a specific non-minimal flavor violating scenario, where there are flavor changing gluino-squark-quark interactions, governed by the CKM matrix elements, and show that the B and Higgs physics constraints are similar to the ones in the minimal flavor violating case. Finally we show that, in scenarios like electroweak baryogenesis which have light stops and charginos, there may be enhanced effects on the B and K mixing parameters, without any significant effect on the rate of B_s → μ⁺μ⁻.

  7. Hydrodynamic simulations of accretion flows with time-varying viscosity

    NASA Astrophysics Data System (ADS)

    Roy, Abhishek; Chakrabarti, Sandip K.

    2017-12-01

    X-ray outbursts of stellar-mass black hole candidates are believed to be due to a sudden rise in viscosity, which transports angular momentum efficiently and increases the accretion rates, causing higher X-ray flux. After the viscosity is reduced, the outburst subsides and the object returns back to the pre-outburst quiescence stage. In the absence of a satisfactory understanding of the physical mechanism leading to such a sharp time dependence of viscous processes, we perform numerical simulations where we include the rise and fall of a viscosity parameter at an outer injection grid, assumed to be located at the accumulation radius where matter from the companion is piled up before being released by enhanced viscosity. We use a power-law radial dependence of the viscosity parameter (α ∼ rε), but the exponent (ε) is allowed to vary with time to mimic a fast rise and decay of the viscosity parameter. Since X-ray spectra of a black hole candidate can be explained by a Keplerian disc component in the presence of a post-shock region of an advective flow, our goal here is also to understand whether the flow configurations required to explain the spectral states of an outbursting source could be obtained by a time-varying viscosity. We present the results of our simulations to prove that low-angular-momentum (sub-Keplerian) advective flows do form a Keplerian disc in the pre-shock region when the viscosity is enhanced, which disappears on a much longer time-scale after the viscosity is withdrawn. From the variation of the Keplerian disc inside an advective halo, we believe that our result, for the first time, is able to simulate the two-component advective flow dynamics during an entire X-ray outburst and explain the observed hysteresis effects in the hardness-intensity diagram.
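
    The time-varying viscosity prescription described above (α ∼ r^ε with an exponent that rises and decays) can be written compactly; the functional form of ε(t) and every number below are placeholders, not values from the paper.

      import numpy as np

      def alpha(r, t, alpha0=0.01, eps_max=0.5, t_rise=50.0, t_decay=500.0):
          """Viscosity parameter alpha ~ r**eps(t) with an exponent that rises on
          a time-scale t_rise and decays on t_decay to mimic an outburst."""
          eps_t = eps_max * (1.0 - np.exp(-t / t_rise)) * np.exp(-t / t_decay)
          return alpha0 * r ** eps_t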

  8. Sensitivity Analysis of Repeat Track Estimation Techniques for Detection of Elevation Change in Polar Ice Sheets

    NASA Astrophysics Data System (ADS)

    Harpold, R. E.; Urban, T. J.; Schutz, B. E.

    2008-12-01

    Interest in elevation change detection in the polar regions has increased recently due to concern over the potential sea level rise from the melting of the polar ice caps. Repeat track analysis can be used to estimate elevation change rate by fitting elevation data to model parameters. Several aspects of this method have been tested to improve the recovery of the model parameters. Elevation data from ICESat over Antarctica and Greenland from 2003-2007 are used to test several grid sizes and types, such as grids based on latitude and longitude and grids centered on the ICESat reference groundtrack. Different sets of parameters are estimated, some of which include seasonal terms or alternate types of slopes (linear, quadratic, etc.). In addition, the effects of including crossovers and other solution constraints are evaluated. Simulated data are used to infer potential errors due to unmodeled parameters.
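
    A minimal sketch of the repeat-track fit described above: elevations in one grid cell are regressed on a bias, a linear change rate and annual seasonal terms. The design matrix is a simplification (a real solution would also estimate cross-track slope terms), and the variable names are illustrative.

      import numpy as np

      def fit_elevation_change(t, h, period=1.0):
          """Least-squares repeat-track fit; t in years, h in metres."""
          A = np.column_stack([
              np.ones_like(t),                 # bias
              t,                               # dh/dt, the elevation change rate
              np.sin(2 * np.pi * t / period),  # seasonal terms
              np.cos(2 * np.pi * t / period),
          ])
          params, *_ = np.linalg.lstsq(A, h, rcond=None)
          return params  # [bias, dh/dt, seasonal sine amplitude, seasonal cosine amplitude]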

  9. Feedback first: the surprisingly weak effects of magnetic fields, viscosity, conduction and metal diffusion on sub-L* galaxy formation

    NASA Astrophysics Data System (ADS)

    Su, Kung-Yi; Hopkins, Philip F.; Hayward, Christopher C.; Faucher-Giguère, Claude-André; Kereš, Dušan; Ma, Xiangcheng; Robles, Victor H.

    2017-10-01

    Using high-resolution simulations with explicit treatment of stellar feedback physics based on the FIRE (Feedback In Realistic Environments) project, we study how galaxy formation and the interstellar medium (ISM) are affected by magnetic fields, anisotropic Spitzer-Braginskii conduction and viscosity, and sub-grid metal diffusion from unresolved turbulence. We consider controlled simulations of isolated (non-cosmological) galaxies but also a limited set of cosmological 'zoom-in' simulations. Although simulations have shown significant effects from these physics with weak or absent stellar feedback, the effects are much weaker than those of stellar feedback when the latter is modelled explicitly. The additional physics have no systematic effect on galactic star formation rates (SFRs). In contrast, removing stellar feedback leads to SFRs being overpredicted by factors of ˜10-100. Without feedback, neither galactic winds nor volume-filling hot-phase gas exist, and discs tend to runaway collapse to ultra-thin scaleheights with unphysically dense clumps congregating at the galactic centre. With stellar feedback, a multi-phase, turbulent medium with galactic fountains and winds is established. At currently achievable resolutions and for the investigated halo mass range 1010-1013 M⊙, the additional physics investigated here (magnetohydrodynamic, conduction, viscosity, metal diffusion) have only weak (˜10 per cent-level) effects on regulating SFR and altering the balance of phases, outflows or the energy in ISM turbulence, consistent with simple equipartition arguments. We conclude that galactic star formation and the ISM are primarily governed by a combination of turbulence, gravitational instabilities and feedback. We add the caveat that active galactic nucleus feedback is not included in the present work.

  10. A photometric study of Enceladus

    NASA Technical Reports Server (NTRS)

    Verbiscer, Anne J.; Veverka, Joseph

    1994-01-01

    We have supplemented Voyager imaging data from Enceladus (limited to phase angles of 13 deg-43 deg) with recent Earth-based CCD observations to obtain an improved determination of the Bond albedo, to construct an albedo map of the satellite, and to constrain parameters in Hapke's (1986) photometric equation. A major result is evidence of regional variations in the physical properties of Enceladus' surface. The average global photometric properties are described by single scattering albedo omega(sub 0) average = 0.998 +/- 0.001, macroscopic roughness parameter theta average = 6 +/- 1 deg, and Henyey-Greenstein asymmetry parameter g = -0.399 +/- 0.005. The value of theta average is smaller than the 14 deg found by fitting whole-disk data, which include all terrains on Enceladus. The opposition surge amplitude B(sub 0) = 0.21 +/- 0.07 and regolith compaction parameter h = 0.014 +/- 0.02 are loosely constrained by the scarcity of and uncertainty in near-opposition observations. From the solar phase curve we determine the geometric albedo of Enceladus p(sub v) = 0.99 +/- 0.06 and phase integral q = 0.92 +/- 0.05, which corresponds to a spherical albedo A = p(sub v)q = 0.91 +/- 0.1. Since the spectrum of Enceladus is fairly flat, we can approximate the Bond albedo A(sub B) with the spherical albedo. Our photometric analysis is summarized in terms of an albedo map which generally reproduces the satellite's observed lightcurve and indicates that normal reflectances range from 0.9 on the leading hemisphere to 1.4 on the trailing one. The albedo map also reveals an albedo variation of 15% from longitudes 170 deg to 200 deg, corresponding to the boundary between the leading and trailing hemispheres.

  11. Self-organizing map network-based precipitation regionalization for the Tibetan Plateau and regional precipitation variability

    NASA Astrophysics Data System (ADS)

    Wang, Nini; Yin, Jianchuan

    2017-12-01

    A precipitation-based regionalization for the Tibetan Plateau (TP) was investigated for regional precipitation trend analysis and frequency analysis using data from 1113 grid points covering the period 1900-2014. The results obtained with a self-organizing map (SOM) network suggest that four coherent precipitation zones can be identified: the southwestern edge, the southern edge, the southeastern region, and the north central region. The regionalization satisfactorily represents the influences of atmospheric circulation systems such as the East Asian summer monsoon, the south Asian summer monsoon, and the mid-latitude westerlies, and it also clearly reflects the direct impacts of physical geographical features of the TP such as orography, topography, and land-sea distribution. Regional-scale annual precipitation trends, as well as regional differences in annual and seasonal total precipitation, were investigated using precipitation indices such as the precipitation concentration index (PCI) and the standardized anomaly index (SAI). Results demonstrate significant negative long-term linear trends in the southeastern TP and the north central part of the TP, indicating that arid and semi-arid regions in the TP are getting drier. The empirical mode decomposition (EMD) method shows dominant cycles of 4 and 12 months for the representative grids of all four sub-regions. The cross-wavelet analysis suggests that the predominant and effective period of the Indian Ocean Dipole (IOD) influence on monthly precipitation is around 12 months, except for the representative grid of the northwestern region.
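
    As a rough illustration of the clustering step (not the authors' configuration), the sketch below trains a small self-organizing map on hypothetical per-grid-point feature vectors and assigns each grid point to its best-matching unit; the map size, learning rate and neighbourhood schedule are placeholders.

      import numpy as np

      def train_som(data, nx=2, ny=2, n_iter=2000, lr0=0.5, sigma0=1.0, seed=0):
          """Minimal self-organizing map; data is (n_samples, n_features)."""
          rng = np.random.default_rng(seed)
          w = rng.normal(size=(nx, ny, data.shape[1]))          # codebook vectors
          gi, gj = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
          for t in range(n_iter):
              lr = lr0 * np.exp(-t / n_iter)                    # decaying learning rate
              sigma = sigma0 * np.exp(-t / n_iter)              # shrinking neighbourhood
              x = data[rng.integers(len(data))]
              d = np.linalg.norm(w - x, axis=2)
              bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
              h = np.exp(-((gi - bi) ** 2 + (gj - bj) ** 2) / (2 * sigma**2))
              w += lr * h[..., None] * (x - w)                  # pull neighbourhood toward x
          return w

      def assign_clusters(data, w):
          """Label each sample (grid point) with the index of its best-matching unit."""
          d = np.linalg.norm(w[None, ...] - data[:, None, None, :], axis=3)
          return np.array([np.unravel_index(np.argmin(di), di.shape) for di in d])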

  12. New datasets for quantifying snow-vegetation-atmosphere interactions in boreal birch and conifer forests

    NASA Astrophysics Data System (ADS)

    Reid, T. D.; Essery, R.; Rutter, N.; Huntley, B.; Baxter, R.; Holden, R.; King, M.; Hancock, S.; Carle, J.

    2012-12-01

    Boreal forests exert a strong influence on weather and climate by modifying the surface energy and radiation balance. However, global climate and numerical weather prediction models use forest parameter values from simple look-up tables or maps that are derived from limited satellite data, on large grid scales. In reality, Arctic landscapes are inherently heterogeneous, with highly variable land cover types and structures on a variety of spatial scales. There is value in collecting detailed field data for different areas of vegetation cover, to assess the accuracy of large-scale assumptions. To address these issues, a consortium of researchers funded by the UK's Natural Environment Research Council have collected extensive data on radiation, meteorology, snow cover and canopy structure at two contrasting Arctic forest sites. The chosen study sites were an area of boreal birch forest near Abisko, Sweden in March/April 2011 and mixed conifer forest at Sodankylä, Finland in March/April 2012. At both sites, arrays comprising ten shortwave pyranometers and four longwave pyrgeometers were deployed for periods of up to 50 days, under forest plots of varying canopy structures and densities. In addition, downwelling longwave irradiance and global and diffuse shortwave irradiances were recorded at nearby open sites representing the top-of-canopy conditions. Meteorological data were recorded at all sub-canopy and open sites using automatic weather stations. Over the same periods, tree skin temperatures were measured on selected trees using contact thermocouples, infrared thermocouples and thermal imagery. Canopy structure was accurately quantified through manual surveys, extensive hemispherical photography and terrestrial laser scans of every study plot. Sub-canopy snow depth and snow water equivalent were measured on fine-scale grids at each study plot. Regular site maintenance ensured a high quality dataset covering the important Arctic spring period. The data have several applications, for example in forest ecology, canopy radiative transfer models, snow hydrological modelling, and land surface schemes, for a variety of canopy types from sparse, leafless birch to dense pine and spruce. The work also allows the comparison of modern, highly detailed methods such as laser scanning and thermal imagery with older, well-established data collection methods. By combining these data with airborne and satellite remote sensing data, snow-vegetation-atmosphere interactions could be estimated over a wide area of the heterogeneous boreal landscape. This could improve estimates of crucial parameters such as land surface albedo on the grid scales required for global or regional weather and climate models.

  13. Laterally inherently thin amorphous-crystalline silicon heterojunction photovoltaic cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, Zahidur R., E-mail: zr.chowdhury@utoronto.ca; Kherani, Nazir P., E-mail: kherani@ecf.utoronto.ca

    2014-12-29

    This article reports on an amorphous-crystalline silicon heterojunction photovoltaic cell concept wherein the heterojunction regions are laterally narrow and distributed amidst a backdrop of well-passivated crystalline silicon surface. The localized amorphous-crystalline silicon heterojunctions consisting of the laterally thin emitter and back-surface field regions are precisely aligned under the metal grid-lines and bus-bars while the remaining crystalline silicon surface is passivated using the recently proposed facile grown native oxide–plasma enhanced chemical vapour deposited silicon nitride passivation scheme. The proposed cell concept mitigates parasitic optical absorption losses by relegating amorphous silicon to beneath the shadowed metallized regions and by using an optically transparent passivation layer. A photovoltaic conversion efficiency of 13.6% is obtained for an untextured proof-of-concept cell illuminated under the AM 1.5 global spectrum; the specific cell performance parameters are V_OC of 666 mV, J_SC of 29.5 mA cm⁻², and fill-factor of 69.3%. Reduced parasitic absorption, predominantly in the shorter wavelength range, is confirmed with external quantum efficiency measurement.
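
    As a consistency check on the reported performance parameters, the conversion efficiency follows from V_OC, J_SC and the fill factor under the usual 100 mW/cm² AM1.5G input-power assumption:

      def pv_efficiency(voc_v, jsc_ma_cm2, ff, pin_mw_cm2=100.0):
          """Photovoltaic conversion efficiency from one-sun cell parameters."""
          return voc_v * jsc_ma_cm2 * ff / pin_mw_cm2

      # Reported values: V_OC = 666 mV, J_SC = 29.5 mA/cm^2, fill factor = 69.3%
      print(pv_efficiency(0.666, 29.5, 0.693))  # ~0.136, i.e. the quoted 13.6%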

  14. Improvements, testing and development of the ADM-τ sub-grid surface tension model for two-phase LES

    NASA Astrophysics Data System (ADS)

    Aniszewski, Wojciech

    2016-12-01

    In this paper, a specific subgrid term occurring in Large Eddy Simulation (LES) of two-phase flows is investigated. This and other subgrid terms are presented; we subsequently elaborate on the existing models for them and reformulate the ADM-τ model for sub-grid surface tension previously published by these authors. This paper presents a substantial, conceptual simplification over the original model version, accompanied by a decrease in its computational cost. At the same time, it addresses the issues the original model version faced, e.g. it introduces non-isotropic applicability criteria based on the resolved interface's principal curvature radii. Additionally, this paper introduces more thorough testing of ADM-τ, in both simple and complex flows.

  15. An effective XML based name mapping mechanism within StoRM

    NASA Astrophysics Data System (ADS)

    Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.

    2008-07-01

    In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data by evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the request. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different quality of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide and the virtual organizations that want to use the storage area. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation.
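
    The mapping idea can be illustrated with a small, purely hypothetical XML document (this is not the actual StoRM namespace schema): a logical identifier is resolved to a physical path by matching the requested storage-area attributes, with no database lookup.

      import xml.etree.ElementTree as ET

      NAMESPACE_XML = """
      <namespace>
        <storage-area name="atlas-disk" root="/storage/gpfs/atlas"
                      quality="replica" vo="atlas"/>
        <storage-area name="atlas-tape" root="/storage/tape/atlas"
                      quality="custodial" vo="atlas"/>
      </namespace>
      """

      def resolve(logical_name, vo, quality):
          """Map a logical file name to a physical path from the static document."""
          root = ET.fromstring(NAMESPACE_XML)
          for sa in root.findall("storage-area"):
              if sa.get("vo") == vo and sa.get("quality") == quality:
                  return sa.get("root") + logical_name
          raise LookupError("no storage area matches the requested attributes")

      print(resolve("/data/run1/file.root", vo="atlas", quality="replica"))
      # -> /storage/gpfs/atlas/data/run1/file.root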

  16. The influence of the dose calculation resolution of VMAT plans on the calculated dose for eye lens and optic pathway.

    PubMed

    Park, Jong Min; Park, So-Yeon; Kim, Jung-In; Carlson, Joel; Kim, Jin Ho

    2017-03-01

    To investigate the effect of the dose calculation grid size on calculated dose-volumetric parameters for eye lenses and optic pathways, a total of 30 patients treated using the volumetric modulated arc therapy (VMAT) technique were retrospectively selected. For each patient, dose distributions were calculated with calculation grids ranging from 1 to 5 mm at 1 mm intervals. Identical structures were used for VMAT planning. The changes in dose-volumetric parameters according to the size of the calculation grid were investigated. Compared to dose calculation with a 1 mm grid, the maximum doses to the eye lens with calculation grids of 2, 3, 4 and 5 mm increased by 0.2 ± 0.2 Gy, 0.5 ± 0.5 Gy, 0.9 ± 0.8 Gy and 1.7 ± 1.5 Gy on average, respectively. The Spearman correlation coefficient between the dose gradients near structures and the differences between doses calculated with the 1 mm grid and those calculated with the 5 mm grid was 0.380 (p < 0.001). For the accurate calculation of dose distributions, as well as efficiency, using a grid size of 2 mm appears to be the most appropriate choice.
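
    The dose-volumetric parameters compared in this and similar studies can be extracted from a structure's voxel doses as in the sketch below; the D2% and D2cc conventions and the hypothetical grid comparison at the end are illustrative simplifications.

      import numpy as np

      def dvh_parameters(dose, voxel_cc):
          """Dose-volume metrics from one structure's voxel doses (Gy).
          voxel_cc is the voxel volume in cm^3."""
          d_sorted = np.sort(dose)[::-1]                       # descending
          n = d_sorted.size
          dmax = d_sorted[0]
          d2pct = d_sorted[max(int(round(0.02 * n)) - 1, 0)]   # dose to the hottest 2% of volume
          n_2cc = max(int(round(2.0 / voxel_cc)), 1)           # number of voxels in 2 cm^3
          d2cc = d_sorted[min(n_2cc, n) - 1]
          return dmax, d2pct, d2cc

      # Hypothetical comparison of the same lens structure on two calculation grids:
      # dmax_1mm, *_ = dvh_parameters(dose_on_1mm_grid, voxel_cc=0.001)
      # dmax_5mm, *_ = dvh_parameters(dose_on_5mm_grid, voxel_cc=0.125)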

  17. Grid-based Meteorological and Crisis Applications

    NASA Astrophysics Data System (ADS)

    Hluchy, Ladislav; Bartok, Juraj; Tran, Viet; Lucny, Andrej; Gazak, Martin

    2010-05-01

    We present several applications from the domains of meteorology and crisis management that we have developed and/or plan to develop. In particular, we present IMS Model Suite - a complex software system designed to address the needs of accurate forecasting of weather and hazardous weather phenomena, environmental pollution assessment, and prediction of the consequences of a nuclear accident and radiological emergency. We discuss the requirements on computational means and our experience of meeting them with grid computing. The process of pollution assessment and prediction of the consequences in the case of a radiological emergency results in complex data flows and workflows among databases, models and simulation tools (geographical databases, meteorological and dispersion models, etc.). A pollution assessment and prediction requires running a 3D meteorological model (4 nests with resolution from 50 km to 1.8 km centered on the nuclear power plant site, 38 vertical levels) as well as running the dispersion model performing the simulation of the release transport and deposition of the pollutant with respect to the numerical weather prediction data, released material description, topography, land use description and a user-defined simulation scenario. Several post-processing options can be selected according to the particular situation (e.g. dose calculation). Another example is the forecasting of fog, one of the meteorological phenomena hazardous to aviation as well as road traffic. It requires a complicated physical model and high resolution meteorological modeling due to its dependence on local conditions (precise topography, shorelines and land use classes). An installed fog modeling system requires a 4-times-nested parallelized 3D meteorological model with 1.8 km horizontal resolution and 42 vertical levels (approx. 1 million points in 3D space) to be run four times daily. The 3D model outputs and a multitude of local measurements are utilized by an SPMD-parallelized 1D fog model run every hour. The fog forecast model is subject to parameterization and parameter optimization before its real deployment. The parameter optimization requires tens of evaluations of the parameterized model accuracy, and each evaluation of the model parameters requires re-running hundreds of meteorological situations collected over the years and comparing the model output with the observed data. The architecture and inherent heterogeneity of both examples, their computational complexity and their interfaces to other systems and services make them well suited for decomposition into a set of web and grid services. Such decomposition has been performed within several projects we participated or participate in, in cooperation with the academic sphere, namely int.eu.grid (dispersion model deployed as a pilot application to an interactive grid), SEMCO-WS (semantic composition of web and grid services), DMM (development of a significant meteorological phenomena prediction system based on data mining), VEGA 2009-2011 and EGEE III. We present useful and practical applications of high performance computing technologies. The use of grid technology provides access to much higher computation power not only for modeling and simulation, but also for model parameterization and validation. This results in optimized model parameters and more accurate simulation outputs. Having taken into account that the simulations are used for aviation, road traffic and crisis management, even a small improvement in the accuracy of predictions may result in a significant improvement of safety as well as cost reduction. We found grid computing useful for our applications. We are satisfied with this technology, and our experience encourages us to extend its use. Within an ongoing project (DMM) we plan to include processing of satellite images, which extends our computational requirements very rapidly. We believe that thanks to grid computing we are able to handle the job almost in real time.

  18. Modular Spectral Inference Framework Applied to Young Stars and Brown Dwarfs

    NASA Technical Reports Server (NTRS)

    Gully-Santiago, Michael A.; Marley, Mark S.

    2017-01-01

    In practice, synthetic spectral models are imperfect, causing inaccurate estimates of stellar parameters. Using forward modeling and statistical inference, we derive accurate stellar parameters for a given observed spectrum by emulating a grid of precomputed spectra so that uncertainties are tracked. The framework is applied to brown dwarfs using the newest grid of synthetic spectral models (Marley et al. 1996 and 2014), which spans a massive multi-dimensional parameter space, and to IGRINS spectra, with the aim of improving atmospheric models for JWST. When the framework is applied to young (about 10 Myr) stars with large starspots, the spot properties can be measured spectroscopically, especially in the near-IR with IGRINS.

  19. Lumped versus distributed thermoregulatory control: results from a three-dimensional dynamic model.

    PubMed

    Werner, J; Buse, M; Foegen, A

    1989-01-01

    In this study we use a three-dimensional model of the human thermal system, the spatial grid of which is 0.5 ... 1.0 cm. The model is based on well-known physical heat-transfer equations, and all parameters of the passive system have definite physical values. According to the number of substantially different areas and organs, 54 spatially different values are attributed to each physical parameter. Compatibility of simulation and experiment was achieved solely on the basis of physical considerations and physiological basic data. The equations were solved using a modification of the alternating direction implicit method. On the basis of this complex description of the passive system close to reality, various lumped and distributed parameter control equations were tested for control of metabolic heat production, blood flow and sweat production. The simplest control equations delivering results on closed-loop control compatible with experimental evidence were determined. It was concluded that it is essential to take into account the spatial distribution of heat production, blood flow and sweat production, and that at least for control of shivering, distributed controller gains different from the pattern of distribution of muscle tissue are required. For sweat production this is not so obvious, so that for simulation of sweating control after homogeneous heat load a lumped parameter control may be justified. Based on these conclusions three-dimensional temperature profiles for cold and heat load and the dynamics for changes of the environmental conditions were computed. In view of the exact simulation of the passive system and the compatibility with experimentally attainable variables there is good evidence that those values extrapolated by the simulation are adequately determined. The model may be used both for further analysis of the real thermoregulatory mechanisms and for special applications in environmental and clinical health care.
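
    The abstract mentions a modified alternating direction implicit (ADI) method; below is a minimal, generic Peaceman-Rachford ADI step for a plain two-dimensional heat equation with constant properties, not the 54-compartment human model (blood flow, metabolic heat and inhomogeneous parameters are omitted).

      import numpy as np

      def adi_heat_step(u, alpha, dt, h):
          """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy) on the
          interior of a square grid with zero Dirichlet boundaries.
          u: (n, n) interior field; h: grid spacing. Minimal sketch, dense solves."""
          n = u.shape[0]
          r = alpha * dt / h**2
          T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # 1-D Laplacian stencil
          A = np.eye(n) - 0.5 * r * T                               # implicit operator
          # Sweep 1: implicit in x, explicit in y.
          rhs = u + 0.5 * r * (u @ T)
          u_half = np.linalg.solve(A, rhs)
          # Sweep 2: implicit in y, explicit in x.
          rhs = u_half + 0.5 * r * (T @ u_half)
          return np.linalg.solve(A, rhs.T).T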

  20. Daughters mimic sterile neutrinos (almost!) perfectly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasenkamp, Jasper, E-mail: Jasper.Hasenkamp@nyu.edu

    Only recently have cosmological observations become sensitive to hot dark matter (HDM) admixtures with sub-eV mass, m_hdm^eff < eV, that are not fully thermalised, ΔN_eff < 1. We argue that their almost automatic interpretation as a sterile neutrino species is not preferred, on either theoretical or practical parsimony grounds, over HDM formed by the decay products (daughters) of an out-of-equilibrium particle decay. While daughters mimic sterile neutrinos in N_eff and m_hdm^eff, there are opportunities to assess this possibility in likelihood analyses. Connecting cosmological parameters and moments of momentum distribution functions, we show that, also in the case of mass-degenerate daughters with indistinguishable main physical effects, the mimicry breaks down when the next moment, the skewness, is considered. Predicted differences of order one in the root-mean-squares of absolute momenta are too small for current sensitivities.

  1. The section TiInSe₂-TiSbSe₂ of the system Ti-In-Sb-Se

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guseinov, G.D.; Chapanova, L.M.; Mal'sagov, A.U.

    1985-09-01

    The ternary compounds A^I B^III C_2^VI (A^I is univalent Ti; B^III is Ga or In; and C^VI is S, Se or Te) form a class of semiconductors with a large number of different gap widths. The compounds crystallize in the chalcopyrite structure. Solid solutions based on these compounds, which permit varying smoothly the gap width and other physical parameters over wide limits, are of great interest. The authors synthesized the compounds TiInSe₂ and TiSbSe₂ from the starting materials Ti-000, In-000, Sb-000 and Se-OSCh-17-4 by direct fusion of the components, taken in a stoichiometric ratio, in quartz ampules evacuated to 1.3 × 10⁻³ Pa and sealed.

  2. Sub-Saharan Africa Report

    DTIC Science & Technology

    1987-03-10

    ...make contributions. So far, the Tanzanian people have contributed money, corn, and goats to assist Mozambique. There was a general mobilization of... different jobs. Mr Mazula said in practice workers could get increases as high as 100 percent after their employers have introduced a comprehensive... served by the same grid, pay 18 meticals. The average electricity tariff from high power grids is raised to 8 meticals and 50 cents. A statement from...

  3. Numerical Investigation of Pressure Profile in Hydrodynamic Lubrication Thrust Bearing.

    PubMed

    Najar, Farooq Ahmad; Harmain, G A

    2014-01-01

    Reynolds equation is solved using the finite difference method (FDM) on the surface of the tilting pad to find the pressure distribution in the lubricant oil film. Different pressure profiles with grid independence are described. The present work evaluates pressure at various locations after performing a thorough grid refinement. In recent similar works, this aspect has not been addressed; however, the present study shows that it can have a significant effect on the pressure profile. Results for a sector-shaped pad are presented, and it is shown that the maximum average value of pressure is approximately 12% greater than the previous results. Grid independence is reached at 24 × 24 grids. A parameter ψ has been proposed to provide a convenient indicator of grid-independent results: ψ = |(P_refined-grid − P_reference-grid)/P_refined-grid|, with ψ ≤ ε, where ε can be fixed to a convenient value; a constant minimum film thickness of 75 μm is used in the present study. This important parameter is highlighted in the present work; the location of the peak pressure zone in terms of (r, θ) coordinates shifts with the grid size, which will help the designer and experimentalist to conveniently determine the position of the pressure measurement probe.
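
    The proposed grid-independence indicator can be evaluated directly from peak pressures obtained on successively refined grids; the pressures in the example below are hypothetical.

      def grid_independence(pressures, eps=0.01):
          """Check psi = |(P_refined - P_reference)/P_refined| for peak pressures
          listed from the coarsest to the finest grid."""
          for p_ref, p_fine in zip(pressures[:-1], pressures[1:]):
              psi = abs((p_fine - p_ref) / p_fine)
              verdict = "grid independent" if psi <= eps else "refine further"
              print(f"psi = {psi:.4f}  ->  {verdict}")

      # Hypothetical peak pressures (MPa) on 12x12, 16x16, 20x20, 24x24, 28x28 grids:
      grid_independence([3.10, 3.32, 3.41, 3.45, 3.452], eps=0.01)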

  4. Lighting the World: the first application of an open source, spatial electrification tool (OnSSET) on Sub-Saharan Africa

    NASA Astrophysics Data System (ADS)

    Mentis, Dimitrios; Howells, Mark; Rogner, Holger; Korkovelos, Alexandros; Arderne, Christopher; Zepeda, Eduardo; Siyal, Shahid; Taliotis, Costantinos; Bazilian, Morgan; de Roo, Ad; Tanvez, Yann; Oudalov, Alexandre; Scholtz, Ernst

    2017-08-01

    In September 2015, the United Nations General Assembly adopted Agenda 2030, which comprises a set of 17 Sustainable Development Goals (SDGs) defined by 169 targets. ‘Ensuring access to affordable, reliable, sustainable and modern energy for all by 2030’ is the seventh goal (SDG7). While access to energy refers to more than electricity, the latter is the central focus of this work. According to the World Bank’s 2015 Global Tracking Framework, roughly 15% of the world’s population (or 1.1 billion people) lack access to electricity, and many more rely on poor quality electricity services. The majority of those without access (87%) reside in rural areas. This paper presents results of a geographic information systems approach coupled with open access data. We present least-cost electrification strategies on a country-by-country basis for Sub-Saharan Africa. The electrification options include grid extension, mini-grid and stand-alone systems for rural, peri-urban, and urban contexts across the economy. At low levels of electricity demand there is a strong penetration of standalone technologies. However, higher electricity demand levels move the favourable electrification option from stand-alone systems to mini grid and to grid extensions.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saumon, D.; Holberg, J. B.; Kowalski, P. M., E-mail: dsaumon@lanl.gov, E-mail: holberg@argus.lpl.arizona.edu, E-mail: p.kowalski@fz-juelich.de

    The atmospheres of very cool, hydrogen-rich white dwarfs (WDs) (T_eff < 6000 K) are challenging to model because of the increased complexity of the equation of state, chemical equilibrium, and opacity sources in a low-temperature, weakly ionized dense gas. In particular, many models that assume relatively simple models for the broadening of atomic levels and mostly ideal gas physics overestimate the flux in the blue part of their spectra. A solution to this problem that has met with some success is that additional opacity at short wavelengths comes from the extreme broadening of the Lyman α line of atomic H by collisions, primarily with H₂. For the purpose of validating this model more rigorously, we acquired Hubble Space Telescope STIS spectra of eight very cool WDs (five DA and three DC stars). Combined with their known parallaxes, BVRIJHK, and Spitzer IRAC photometry, we analyze their entire spectral energy distribution (from 0.24 to 9.3 μm) with a large grid of model atmospheres and synthetic spectra. We find that the red wing of the Lyman α line reproduces the rapidly decreasing near-UV flux of these very cool stars very well. We determine better constrained values of T_eff and gravity as well as upper limits to the helium abundance in their atmospheres.

  6. A grid spacing control technique for algebraic grid generation methods

    NASA Technical Reports Server (NTRS)

    Smith, R. E.; Kudlinski, R. A.; Everton, E. L.

    1982-01-01

    A technique which controls the spacing of grid points in algebraically defined coordinate transformations is described. The technique is based on the generation of control functions which map a uniformly distributed computational grid onto parametric variables defining the physical grid. The control functions are smoothed cubic splines. Sets of control points are input for each coordinate direction to outline the control functions. Smoothed cubic spline functions are then generated to approximate the input data. The technique works best in an interactive graphics environment where control inputs and grid displays are nearly instantaneous. The technique is illustrated with the two-boundary grid generation algorithm.
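
    A minimal sketch of the control-function idea, assuming SciPy's smoothing spline in place of the original spline routine: a few user-supplied control points define a smoothed cubic spline that maps the uniform computational coordinate onto the parametric variable, clustering points where the control function is shallow.

      import numpy as np
      from scipy.interpolate import UnivariateSpline

      def spacing_controlled_points(control_xi, control_s, n_points=41, smooth=1e-4):
          """Map a uniform computational coordinate xi in [0,1] onto a parametric
          coordinate s through a smoothed cubic spline control function."""
          f = UnivariateSpline(control_xi, control_s, k=3, s=smooth)  # smoothed cubic spline
          xi = np.linspace(0.0, 1.0, n_points)                        # uniform computational grid
          return np.clip(f(xi), 0.0, 1.0)                             # physical-parameter locations

      # Example control points (illustrative): cluster grid points near s = 0.
      s = spacing_controlled_points([0.0, 0.25, 0.5, 0.75, 1.0],
                                    [0.0, 0.05, 0.2, 0.55, 1.0])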

  7. High pressure droplet burning experiments in reduced gravity

    NASA Technical Reports Server (NTRS)

    Chauveau, Christian; Goekalp, Iskender

    1995-01-01

    A parametric investigation of single droplet gasification regimes is helpful in providing the necessary physical ideas for sub-grid models used in spray combustion numerical prediction codes. A research program has been initiated at the LCSR to explore the vaporization regimes of single and interacting hydrocarbon and liquid oxygen droplets under high pressure conditions. This paper summarizes the status of the LCSR program on the high pressure burning of single fuel droplets; recent results obtained under normal and reduced gravity conditions with suspended droplets are presented. In the work described here, parabolic flights of the CNES Caravelle are used to create a reduced gravity environment of the order of 10(exp -2) g(sub 0). For all the droplet burning experiments reported here, the suspended droplet initial diameters are scattered around 1.5 mm and the ambient air temperature is 300 K. The ambient pressure is varied between 0.1 MPa and 12 MPa. Four fuels are investigated: methanol (Pc = 7.9 MPa), n-heptane (Pc = 2.74 MPa), n-hexane (Pc = 3.01 MPa) and n-octane (Pc = 2.48 MPa).

  8. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.

  9. The Impact of the Grid Size on TomoTherapy for Prostate Cancer

    PubMed Central

    Kawashima, Motohiro; Kawamura, Hidemasa; Onishi, Masahiro; Takakusagi, Yosuke; Okonogi, Noriyuki; Okazaki, Atsushi; Sekihara, Tetsuo; Ando, Yoshitaka; Nakano, Takashi

    2017-01-01

    Discretization errors due to the digitization of computed tomography images and the calculation grid are a significant issue in radiation therapy. Such errors have been quantitatively reported for fixed multifield intensity-modulated radiation therapy using traditional linear accelerators. The aim of this study is to quantify the influence of the calculation grid size on the dose distribution in TomoTherapy. This study used ten treatment plans for prostate cancer. The final dose calculation was performed with “fine” (2.73 mm) and “normal” (5.46 mm) grid sizes. The dose distributions were compared from different points of view: the dose-volume histogram (DVH) parameters for the planning target volume (PTV) and organs at risk (OARs), various indices, and dose differences. The DVH parameters used were Dmax, D2%, D2cc, Dmean, D95%, D98%, and Dmin for the PTV and Dmax, D2%, and D2cc for the OARs. The indices used for plan evaluation were the homogeneity index and the equivalent uniform dose. Almost all of the DVH parameters for the “fine” calculations tended to be higher than those for the “normal” calculations. The largest difference in the DVH parameters for the PTV was in Dmax, and that for the OARs was in rectal D2cc. The mean difference in Dmax was 3.5%, and the rectal D2cc was increased by up to 6% at the maximum and 2.9% on average. The mean difference in D95% for the PTV was the smallest among the differences of the DVH parameters. For each index, whether there was a significant difference between the two grid sizes was determined through a paired t-test; there were significant differences for most of the indices. The dose difference between the “fine” and “normal” calculations was evaluated, and some points around high-dose regions had differences exceeding 5% of the prescription dose. The influence of the calculation grid size in TomoTherapy is smaller than that for traditional linear accelerators; however, there was a significant difference, and we recommend calculating the final dose using the “fine” grid size. PMID:28974860
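
    For reference, the two plan-evaluation indices named above can be computed from a structure's voxel doses as follows; the gEUD form assumes equal-volume voxels, and the homogeneity-index definition shown is one common convention among several.

      import numpy as np

      def eud(dose, a):
          """Generalized equivalent uniform dose for equal-volume voxels."""
          return np.mean(dose ** a) ** (1.0 / a)

      def homogeneity_index(dose):
          """One common HI definition, (D2% - D98%) / D50%; other forms exist."""
          d2, d50, d98 = np.percentile(dose, [98, 50, 2])
          return (d2 - d98) / d50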

  10. Multigrid solution of the Navier-Stokes equations on highly stretched grids with defect correction

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    1993-01-01

    Relaxation-based multigrid solvers for the steady incompressible Navier-Stokes equations are examined to determine their computational speed and robustness. Four relaxation methods with a common discretization have been used as smoothers in a single tailored multigrid procedure. The equations are discretized on a staggered grid with first order upwind used for convection in the relaxation process on all grids and defect correction to second order central on the fine grid introduced once per multigrid cycle. A fixed W(1,1) cycle with full weighting of residuals is used in the FAS multigrid process. The resulting solvers have been applied to three 2D flow problems, over a range of Reynolds numbers, on both uniform and highly stretched grids. In all cases the L(sub 2) norm of the velocity changes is reduced to 10(exp -6) in a few 10's of fine grid sweeps. The results from this study are used to draw conclusions on the strengths and weaknesses of the individual relaxation schemes as well as those of the overall multigrid procedure when used as a solver on highly stretched grids.
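
    The defect-correction idea used here (relax with the robust first-order upwind operator, correct the residual with the second-order central operator) is shown below on a one-dimensional convection-diffusion model problem; this is a generic illustration, not the staggered-grid multigrid solver itself.

      import numpy as np

      def defect_correction_1d(n=64, nu=0.01, a=1.0, n_iter=20):
          """Defect correction for -nu*u'' + a*u' = 1 with u(0) = u(1) = 0."""
          h = 1.0 / (n + 1)
          I, up, lo = np.eye(n), np.eye(n, k=1), np.eye(n, k=-1)
          diff = nu * (2 * I - up - lo) / h**2               # -nu*u'' stencil
          L_upwind = diff + a * (I - lo) / h                 # backward difference for a > 0
          L_central = diff + a * (up - lo) / (2 * h)         # second-order convection
          f = np.ones(n)
          u = np.linalg.solve(L_upwind, f)                   # first-order starting solution
          for _ in range(n_iter):
              # Solve with the low-order operator, driven by the high-order residual.
              u = np.linalg.solve(L_upwind, f - L_central @ u + L_upwind @ u)
          return u                                           # approaches the central-scheme solution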

  11. Testing for new physics: neutrinos and the primordial power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canac, Nicolas; Abazajian, Kevork N.; Aslanyan, Grigor

    2016-09-01

    We test the sensitivity of neutrino parameter constraints from combinations of CMB and LSS data sets to the assumed form of the primordial power spectrum (PPS) using Bayesian model selection. Significantly, none of the tested combinations, including recent high-precision local measurements of H₀ and cluster abundances, indicate a signal for massive neutrinos or extra relativistic degrees of freedom. For PPS models with a large, but fixed number of degrees of freedom, neutrino parameter constraints do not change significantly if the locations of any features in the PPS are allowed to vary, although neutrino constraints are more sensitive to PPS features if they are known a priori to exist at fixed intervals in log k. Although there is no support for a non-standard neutrino sector from constraints on both neutrino mass and relativistic energy density, we see surprisingly strong evidence for features in the PPS when it is constrained with data from Planck 2015, SZ cluster counts, and recent high-precision local measurements of H₀. Conversely, combining Planck with matter power spectrum and BAO measurements yields a much weaker constraint. Given that this result is sensitive to the choice of data, this tension between SZ cluster counts, Planck and H₀ measurements is likely an indication of unmodeled systematic bias that mimics PPS features, rather than new physics in the PPS or neutrino sector.

  12. Cyberinfrastructure for high energy physics in Korea

    NASA Astrophysics Data System (ADS)

    Cho, Kihyeon; Kim, Hyunwoo; Jeung, Minho; High Energy Physics Team

    2010-04-01

    We introduce the hierarchy of cyberinfrastructure, which consists of infrastructure (supercomputing and networks), Grid, e-Science, community and physics, from the bottom layer to the top layer. KISTI is the national headquarters for supercomputing, networking, Grid and e-Science in Korea, and is therefore the best place for high energy physicists to use cyberinfrastructure. We explain this concept using the CDF and ALICE experiments. The goal of e-Science is to study high energy physics anytime and anywhere, even when we are not on site at the accelerator laboratories. Its components are data production, data processing and data analysis. Data production means taking both on-line and off-line shifts remotely. Data processing means running jobs anytime, anywhere using Grid farms. Data analysis means working together to publish papers using collaborative environments such as the EVO (Enabling Virtual Organization) system. We also present the global community activities of FKPPL (France-Korea Particle Physics Laboratory) and physics as the top layer.

  13. Structured background grids for generation of unstructured grids by advancing front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1991-01-01

    A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
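
    A minimal sketch of the background-grid idea follows: spacing values prescribed at a few source nodes are diffused over a uniform Cartesian grid by relaxation, in the spirit of the steady heat-conduction analogy above. Treating the sources as fixed values and using periodic edge wrapping are simplifications of the Poisson-with-sources formulation.

      import numpy as np

      def spacing_function(nx, ny, sources, n_sweeps=5000):
          """Distribute grid-spacing parameters over a uniform Cartesian background
          grid by Jacobi relaxation; sources is a list of (i, j, spacing) tuples."""
          s = np.zeros((nx, ny))
          fixed = np.zeros((nx, ny), dtype=bool)
          for i, j, value in sources:
              s[i, j], fixed[i, j] = value, True
          for _ in range(n_sweeps):
              # Neighbour average (periodic edges for brevity); sources stay fixed.
              avg = 0.25 * (np.roll(s, 1, 0) + np.roll(s, -1, 0)
                            + np.roll(s, 1, 1) + np.roll(s, -1, 1))
              s = np.where(fixed, s, avg)
          return s

      # Two prescribed sources: fine spacing near a body, coarse in the far field.
      spacing = spacing_function(64, 64, [(32, 32, 0.01), (5, 5, 0.5)])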

  14. CMB bispectrum, trispectrum, non-Gaussianity, and the Cramer-Rao bound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamionkowski, Marc; Smith, Tristan L.; Heavens, Alan

    Minimum-variance estimators for the parameter f_nl that quantifies local-model non-Gaussianity can be constructed from the cosmic microwave background (CMB) bispectrum (three-point function) and also from the trispectrum (four-point function). Some have suggested that a comparison between the estimates for the values of f_nl from the bispectrum and trispectrum allows a consistency test for the model. But others argue that the saturation of the Cramer-Rao bound, which gives a lower limit to the variance of an estimator, by the bispectrum estimator implies that no further information on f_nl can be obtained from the trispectrum. Here, we elaborate on the nature of the correlation between the bispectrum and trispectrum estimators for f_nl. We show that the two estimators become statistically independent in the limit of a large number of CMB pixels, and thus that the trispectrum estimator does indeed provide additional information on f_nl beyond that obtained from the bispectrum. We explain how this conclusion is consistent with the Cramer-Rao bound. Our discussion of the Cramer-Rao bound may be of interest to those doing Fisher-matrix parameter-estimation forecasts or data analysis in other areas of physics as well.

  15. Application of Physically based landslide susceptibility models in Brazil

    NASA Astrophysics Data System (ADS)

    Carvalho Vieira, Bianca; Martins, Tiago D.

    2017-04-01

    Shallow landslides and floods are the processes responsible for most material and environmental damage in Brazil. In the last decades, some landslide events caused a high number of deaths (e.g. over 1000 deaths in a single event) and incalculable social and economic losses. The prediction of these processes is therefore considered an important tool for land use planning. Among the different methods, physically based landslide susceptibility models have been widely used in many countries, but their use in Brazil is still incipient compared with other approaches, such as statistical tools and frequency analyses. Thus, the main objective of this research was to assess the application of physically based landslide susceptibility models in Brazil, identifying their main results, the efficiency of the susceptibility mapping, the parameters used and the limitations of the humid tropical environment. To achieve that, studies in Brazil applying the SHALSTAB, SINMAP and TRIGRS models were evaluated, together with the geotechnical values, scales and DEM grid resolutions used, and their results were analysed in terms of the agreement between predicted susceptibility and the landslide scar maps. Most of the studies in Brazil applied SHALSTAB, SINMAP and, to a lesser extent, the TRIGRS model. The majority of the studies are concentrated in the Serra do Mar mountain range, a system of escarpments and rugged mountains that extends more than 1,500 km along the southern and southeastern Brazilian coast and is regularly affected by heavy rainfall that generates widespread mass movements. Most of these studies used conventional topographic maps with scales ranging from 1:2000 to 1:50000 and DEM grid resolutions between 2 and 20 m. Regarding the geotechnical and hydrological values, only a few studies use field-collected data, which could produce more reliable results, as indicated by the international literature. Therefore, even though these models have enormous potential for susceptibility mapping, including for comparison purposes between different areas, the studies in Brazil require more detailed consideration of the input topographic and geotechnical parameters.
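
    The physically based models named above (SHALSTAB, SINMAP, TRIGRS) share the infinite-slope stability kernel sketched below; the parameter values in the example are illustrative only, and the specific hydrological coupling of each model is omitted.

      import numpy as np

      def infinite_slope_fs(slope_deg, z, m, c=2.0e3, phi_deg=33.0,
                            gamma_s=18.0e3, gamma_w=9.81e3):
          """Infinite-slope factor of safety.
          slope_deg: slope angle; z: soil depth (m); m: saturated fraction of z;
          c: cohesion (Pa); phi_deg: friction angle; gamma_*: unit weights (N/m^3)."""
          theta = np.radians(slope_deg)
          phi = np.radians(phi_deg)
          resisting = c + (gamma_s - m * gamma_w) * z * np.cos(theta) ** 2 * np.tan(phi)
          driving = gamma_s * z * np.sin(theta) * np.cos(theta)
          return resisting / driving

      print(infinite_slope_fs(slope_deg=35.0, z=1.5, m=0.8))  # FS < 1 -> predicted unstable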

  16. GEWEX Cloud Systems Study (GCSS)

    NASA Technical Reports Server (NTRS)

    Moncrieff, Mitch

    1993-01-01

    The Global Energy and Water Cycle Experiment (GEWEX) Cloud Systems Study (GCSS) program seeks to improve the physical understanding of sub-grid scale cloud processes and their representation in parameterization schemes. By improving the description and understanding of key cloud system processes, GCSS aims to develop the necessary parameterizations in climate and numerical weather prediction (NWP) models. GCSS will address these issues mainly through the development and use of cloud-resolving or cumulus ensemble models to generate realizations of a set of archetypal cloud systems. The focus of GCSS is on mesoscale cloud systems, including precipitating convectively-driven cloud systems like MCS's and boundary layer clouds, rather than individual clouds, and on their large-scale effects. Some of the key scientific issues confronting GCSS that particularly relate to research activities in the central U.S. are presented.

  17. CAA for Jet Noise Physics

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda

    2001-01-01

    Dr. Mankbadi summarized recent CAA results. Examples of the effect of various boundary condition schemes on the computed acoustic field, for a point source in a uniform flow, were shown. Solutions showing the impact of inflow excitations on the result were also shown. Results from a large eddy simulation, using a fourth-order MacCormack scheme with a Smagorinsky sub-grid turbulence model, were shown for a Mach 2.1 unheated jet; these results were free from spurious modes. Results were also shown for a Mach 1.4 jet using LES in the near field and the Kirchhoff method for the far field. Predicted flow field characteristics were shown to be in good agreement with data, and predicted far field directivities were shown to be in qualitative agreement with experimental measurements.

  18. CAA for Jet Noise Physics: Issues and Recent Progress

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda

    2001-01-01

    Dr. Mankbadi summarized recent CAA results. Examples of the effect of various boundary condition schemes on the computed acoustic field, for a point source in a uniform flow, were shown. Solutions showing the impact of inflow excitations on the result were also shown. Results from a large eddy simulation, using a fourth-order MacCormack scheme with a Smagorinsky sub-grid turbulence model, were shown for a Mach 2.1 unheated jet; these results were free from spurious modes. Results were also shown for a Mach 1.4 jet using LES in the near field and the Kirchhoff method for the far field. Predicted flow field characteristics were shown to be in good agreement with data, and predicted far field directivities were shown to be in qualitative agreement with experimental measurements.

  19. A novel parameter to describe the glass-forming ability of alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, E. S.; Ryu, C. W.; Kim, W. T.

    2015-08-14

    In this paper, we propose a new parameter for glass-forming ability (GFA) based on the combination of thermodynamic (stability of the stable and metastable liquids, measured by ΔT_m = T_m^mix − T_l and ΔT_x = T_x − T_g, respectively) and kinetic (resistance to crystallization, measured by T_x) aspects of glass formation. The parameter is defined as ε = (ΔT_m + ΔT_x + T_x)/T_m^mix without directly adding T_g, while considering the whole temperature range for glass formation up to T_m^mix, which reflects the relative position of the crystallization curve in the continuous cooling transformation diagram. The relationship between the ε parameter and the critical cooling rate (R_c) or maximum section thickness for glass formation (Z_max) clearly confirms that the ε parameter exhibits a better correlation with GFA than other commonly used GFA parameters, such as ΔT_x (= T_x − T_g), K (= [T_x − T_g]/[T_l − T_x]), ΔT* (= (T_m^mix − T_l)/T_m^mix), T_rg (= T_g/T_l), and γ (= T_x/[T_l + T_g]). The relationship between the ε parameter and R_c or Z_max is also formulated and evaluated in the study. The results suggest that the ε parameter can effectively predict R_c and Z_max for various glass-forming alloys, which would permit more widespread uses of these paradigm-shifting materials in a variety of industries.
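
    The proposed parameter and two of the conventional indicators it is compared against follow directly from the characteristic temperatures; the temperatures in the example are hypothetical.

      def gfa_parameters(Tg, Tx, Tl, Tm_mix):
          """Glass-forming-ability indicators from characteristic temperatures (K):
          Tg glass transition, Tx crystallization onset, Tl liquidus, Tm_mix the
          composition-weighted melting temperature of the constituents."""
          dTm = Tm_mix - Tl                  # stability of the stable liquid
          dTx = Tx - Tg                      # stability of the undercooled (metastable) liquid
          eps = (dTm + dTx + Tx) / Tm_mix    # proposed epsilon parameter
          trg = Tg / Tl                      # reduced glass transition temperature T_rg
          gamma = Tx / (Tl + Tg)             # gamma parameter
          return eps, trg, gamma

      # Hypothetical temperatures for a bulk metallic glass-forming alloy:
      print(gfa_parameters(Tg=625.0, Tx=705.0, Tl=1100.0, Tm_mix=1300.0))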

  20. Method for fabricating a microelectromechanical resonator

    DOEpatents

    Wojciechowski, Kenneth E; Olsson, III, Roy H

    2013-02-05

    A method is disclosed which calculates dimensions for a MEM resonator in terms of integer multiples of a grid width G for reticles used to fabricate the resonator, including an actual sub-width L.sub.a=NG and an effective electrode width W.sub.e=MG where N and M are integers which minimize a frequency error f.sub.e=f.sub.d-f.sub.a between a desired resonant frequency f.sub.d and an actual resonant frequency f.sub.a. The method can also be used to calculate an overall width W.sub.o for the MEM resonator, and an effective electrode length L.sub.e which provides a desired motional impedance for the MEM resonator. The MEM resonator can then be fabricated using these values for L.sub.a, W.sub.e, W.sub.o and L.sub.e. The method can also be applied to a number j of MEM resonators formed on a common substrate.
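
    The patent abstract describes snapping the resonator dimensions to integer multiples of the reticle grid width G so that the frequency error f.sub.e = f.sub.d - f.sub.a is minimized. The sketch below illustrates that integer search with a purely hypothetical frequency model; the actual relation between dimensions and resonant frequency is device specific and is not given in the abstract.

    ```python
    def snap_resonator_dims(f_d, grid, f_model, n_range, m_range):
        """Choose integers N, M so that L_a = N*G and W_e = M*G minimize the
        frequency error |f_d - f_a|, following the idea in the patent abstract.

        f_model(l_a, w_e) -> f_a is a stand-in for the device-specific frequency
        model (an assumption; the abstract does not specify it).
        Returns (error, N, M, L_a, W_e) for the best grid-snapped dimensions.
        """
        best = None
        for n in n_range:
            for m in m_range:
                l_a, w_e = n * grid, m * grid
                err = abs(f_d - f_model(l_a, w_e))
                if best is None or err < best[0]:
                    best = (err, n, m, l_a, w_e)
        return best

    # Hypothetical example: a length-extensional toy model f_a = v / (2 * L_a)
    # with v = 10000 m/s, a 0.25 micron grid, and a 32 MHz target frequency.
    toy_model = lambda l_a, w_e: 10000.0 / (2.0 * l_a)
    print(snap_resonator_dims(f_d=32.0e6, grid=0.25e-6, f_model=toy_model,
                              n_range=range(500, 800), m_range=range(40, 41)))
    ```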

  1. The refined physical properties of transiting exoplanetary system WASP-11/HAT-P-10

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiao-bin; Gu, Sheng-hong; Wang, Yi-bo

    2014-04-01

    The transiting exoplanetary system WASP-11/HAT-P-10 was observed using the CCD camera at Yunnan Observatories, China, from 2008 to 2011, and four new transit light curves were obtained. Combined with published radial velocity measurements, the new transit light curves are analyzed along with available photometric data from the literature using the Markov Chain Monte Carlo technique, and the refined physical parameters of the system are derived, which are compatible with the results of the two discovery groups. The planet mass is M{sub p} = 0.526 ± 0.019 M{sub J}, which is the same as West et al.'s value, and the planet radius, determined more precisely as R{sub p} = 0.999{sub −0.018}{sup +0.029} R{sub J}, is identical to the value of Bakos et al. The new result confirms that the planet orbit is circular. By collecting 19 available mid-transit epochs with higher precision, we make an orbital period analysis for WASP-11b/HAT-P-10b, and derive a new value for its orbital period, P = 3.72247669 days. Through an (O – C) study based on these mid-transit epochs, no obvious transit timing variation signal can be found for this system during 2008-2012.
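
    The (O – C) analysis mentioned above compares each observed mid-transit time with the time computed from a linear ephemeris T_c(n) = T_0 + n*P and refines the period from the trend of the residuals. The sketch below is a generic version of that procedure; the reference epoch and mid-transit times in any real application come from the data, and only the period value P = 3.72247669 days is quoted from the abstract.

    ```python
    import numpy as np

    def o_minus_c(t_obs, t0, period):
        """O - C residuals for observed mid-transit times t_obs (days) against
        the linear ephemeris T_c(n) = T0 + n*P, with n the nearest cycle count."""
        n = np.round((t_obs - t0) / period)
        return t_obs - (t0 + n * period), n

    def refine_ephemeris(t_obs, t0, period):
        """Refine T0 and P by a least-squares straight-line fit of mid-transit
        time versus epoch number, the usual step before searching the residuals
        for transit timing variations."""
        _, n = o_minus_c(t_obs, t0, period)
        slope, intercept = np.polyfit(n, t_obs, 1)   # slope = P, intercept = T0
        return intercept, slope
    ```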

  2. The fluid dynamic approach to equidistribution methods for grid generation and adaptation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delzanno, Gian Luca; Finn, John M

    2009-01-01

    The equidistribution methods based on L{sub p} Monge-Kantorovich optimization [Finn and Delzanno, submitted to SISC, 2009] and on the deformation [Moser, 1965; Dacorogna and Moser, 1990; Liao and Anderson, 1992] method are analyzed primarily in the context of grid generation. It is shown that the first class of methods can be obtained from a fluid dynamic formulation based on time-dependent equations for the mass density and the momentum density, arising from a variational principle. In this context, deformation methods arise from a fluid formulation by making a specific assumption on the time evolution of the density (but with some degree of freedom for the momentum density). In general, deformation methods do not arise from a variational principle. However, it is possible to prescribe an optimal deformation method, related to L{sub 1} Monge-Kantorovich optimization, by making a further assumption on the momentum density. Some applications of the L{sub p} fluid dynamic formulation to imaging are also explored.

  3. Large Eddy Simulation of High Reynolds Number Complex Flows

    NASA Astrophysics Data System (ADS)

    Verma, Aman

    Marine configurations are subject to a variety of complex hydrodynamic phenomena affecting the overall performance of the vessel. The turbulent flow affects the hydrodynamic drag, propulsor performance and structural integrity, control-surface effectiveness, and acoustic signature of the marine vessel. Due to advances in massively parallel computers and numerical techniques, an unsteady numerical simulation methodology such as Large Eddy Simulation (LES) is well suited to study such complex turbulent flows, whose Reynolds numbers (Re) are typically on the order of 10^6. LES also promises increased accuracy over RANS-based methods in predicting unsteady phenomena such as cavitation and noise production. This dissertation develops the capability to enable LES of high Re flows in complex geometries (e.g. a marine vessel) on unstructured grids and to provide physical insight into the turbulent flow. LES is performed to investigate the geometry-induced separated flow past a marine propeller attached to a hull, in an off-design condition called crashback. LES shows good quantitative agreement with experiments and provides a physical mechanism to explain the increase in side-force on the propeller blades below an advance ratio of J = -0.7. Fundamental developments in the dynamic subgrid-scale model for LES are pursued to improve the LES predictions, especially for complex flows on unstructured grids. A dynamic procedure is proposed to estimate a Lagrangian time scale based on a surrogate correlation without any adjustable parameter. The proposed model is applied to turbulent channel, cylinder, and marine propeller flows and predicts improved results over other model variants due to a physically consistent Lagrangian time scale. A wall model is proposed for application to LES of high Reynolds number wall-bounded flows. The wall model is formulated as the minimization of a generalized constraint in the dynamic model for LES and applied to LES of turbulent channel flow at various Reynolds numbers up to Reτ = 10000 and coarse grid resolutions, obtaining significant improvement.

  4. Misfit layered Ca{sub 3}Co{sub 4}O{sub 9} as a high figure of merit p-type transparent conducting oxide film through solution processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aksit, M.; Kolli, S. K.; Slauch, I. M.

    Ca{sub 3}Co{sub 4}O{sub 9} thin films synthesized through solution processing are shown to be high-performing, p-type transparent conducting oxides (TCOs). The synthesis method is a cost-effective and scalable process that consists of sol-gel chemistry, spin coating, and heat treatments. The process parameters can be varied to produce TCO thin films with sheet resistance as low as 5.7 kΩ/sq (ρ ≈ 57 mΩ cm) or with average visible range transparency as high as 67%. The most conductive Ca{sub 3}Co{sub 4}O{sub 9} TCO thin film has near infrared region optical transmission as high as 85%. The figure of merit (FOM) for the top-performing Ca{sub 3}Co{sub 4}O{sub 9} thin film (151 MΩ{sup −1}) is higher than FOM values reported in the literature for all other solution processed, p-type TCO thin films and higher than most others prepared by physical vapor deposition and chemical vapor deposition. Transparent conductivity in misfit layered oxides presents new opportunities for TCO compositions.
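
    The abstract does not state which figure-of-merit definition is used, but the quoted units (MΩ{sup −1}) match the commonly used Haacke figure of merit Φ = T{sup 10}/R{sub s}. The sketch below computes that quantity under this assumption; the example numbers are illustrative and mix values from different films, so it does not reproduce the reported 151 MΩ{sup −1}.

    ```python
    def haacke_fom(transmittance, sheet_resistance_ohm_per_sq):
        """Haacke figure of merit for a transparent conductor:
        Phi_TC = T**10 / R_s, in units of 1/ohm. Whether the cited work uses
        exactly this definition is an assumption, not stated in the abstract."""
        return transmittance ** 10 / sheet_resistance_ohm_per_sq

    # Illustrative values only: T = 0.67 visible transparency, R_s = 5.7 kOhm/sq.
    phi = haacke_fom(0.67, 5.7e3)
    print(phi * 1e6, "MOhm^-1")   # a few MOhm^-1 for this particular combination
    ```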

  5. Theoretical prediction of the energy stability of graphene nanoblisters

    NASA Astrophysics Data System (ADS)

    Glukhova, O. E.; Slepchenkov, M. M.; Barkov, P. V.

    2018-04-01

    The paper presents the results of a theoretical prediction of the energy stability of graphene nanoblisters with various geometrical parameters. As a criterion for evaluating the stability of the investigated carbon objects, we propose the local stress in the atomic grid of the nanoblister. Numerical evaluation of the stresses experienced by atoms of the graphene blister framework was carried out with an original method for calculating local stresses based on an energy approach. Atomistic models of graphene nanoblisters corresponding to experimental data were built for the first time in this work. New physical regularities governing the influence of topology on the thermodynamic stability of nanoblisters were established from the analysis of the numerical experiment data. We computed the distribution of local stresses for graphene blister structures whose atomic grid contains a variety of structural defects, and showed how the concentration and location of defects affect the distribution of the maximum stresses experienced by the atoms of the nanoblisters.

  6. SWEEPING AWAY THE MYSTERIES OF DUSTY CONTINUOUS WINDS IN ACTIVE GALACTIC NUCLEI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keating, S. K.; Gallagher, S. C.; Deo, R. P.

    2012-04-10

    An integral part of the unified model for active galactic nuclei (AGNs) is an axisymmetric obscuring medium, which is commonly depicted as a torus of gas and dust surrounding the central engine. However, a robust, dynamical model of the torus is required in order to understand the fundamental physics of AGNs and interpret their observational signatures. Here, we explore self-similar, dusty disk winds, driven by both magnetocentrifugal forces and radiation pressure, as an explanation for the torus. Using these models, we make predictions of AGN infrared (IR) spectral energy distributions from 2 to 100 μm by varying parameters such as the viewing angle (from i = 0° to 90°), the base column density of the wind (from N{sub H,0} = 10{sup 23} to 10{sup 25} cm{sup -2}), the Eddington ratio (from L/L{sub Edd} = 0.01 to 0.1), the black hole mass (from M{sub BH} = 10{sup 8} to 10{sup 9} M{sub Sun}), and the amount of power in the input spectrum emitted in the X-ray relative to that emitted in the UV/optical (from α{sub ox} = 1.1 to 2.1). We find that models with N{sub H,0} = 10{sup 25} cm{sup -2}, L/L{sub Edd} = 0.1, and M{sub BH} ≥ 10{sup 8} M{sub Sun} are able to adequately approximate the general shape and amount of power expected in the IR as observed in a composite of optically luminous Sloan Digital Sky Survey quasars. The effect of varying the relative power coming out in X-rays relative to the UV is a change in the emission below ~5 μm from the hottest dust grains; this arises from the differing contributions to heating and acceleration of UV and X-ray photons. We see mass outflows ranging from ~1 to 4 M{sub Sun} yr{sup -1}, terminal velocities ranging from ~1900 to 8000 km s{sup -1}, and kinetic luminosities ranging from ~1 × 10{sup 42} to 8 × 10{sup 43} erg s{sup -1}. Further development of this model holds promise for using specific features of observed IR spectra in AGNs to infer fundamental physical parameters of the systems.

  7. The Potential of Repertory Grid Technique in the Assessment of Conceptual Change in Physics.

    ERIC Educational Resources Information Center

    Winer, Laura R.; Vazquez-Abad, Jesus

    This paper presents results from a number of trials of a new approach in assessing student conceptions in physics and changes in these conceptions over time. The goal was to explore the potential of Personal Construct Psychology and its central tool, Repertory Grid Technique, to aid in the diagnosis of learner difficulties and eventually the…

  8. SAR target recognition using behaviour library of different shapes in different incidence angles and polarisations

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas

    2018-05-01

    Target recognition is one of the most important issues in the interpretation of synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in SAR can provide a better understanding of SAR imaging systems and therefore facilitate the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories: radar, radar platform, channel, imaging region, and processing section, each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the final images. In this paper, for the first time, a behaviour library is extracted that includes the effects of polarisation, incidence angle, and target shape, as radar and imaging-region sub-parameters, on the SAR images. This library shows that the pattern created for each of the cylindrical, conical, and cubic shapes is unique, and because of these unique properties such shapes can be recognised in SAR images. This capability is applied to data acquired with the Canadian RADARSAT1 satellite.

  9. Synergistic Effects of Physical Aging and Damage on Long-Term Behavior of Polymer Matrix Composites

    NASA Technical Reports Server (NTRS)

    Brinson, L. Cate

    1999-01-01

    The research consisted of two major parts: first, modeling and simulation of the combined effects of aging and damage on polymer composites, and secondly an experimental phase examining composite response at elevated temperatures, again activating both aging and damage. For the simulation, a damage model for polymeric composite laminates operating at elevated temperatures was developed. Viscoelastic behavior of the material is accounted for via the correspondence principle, and a variational approach is adopted to compute the temporal stresses within the laminate. Also, the effect of physical aging on ply-level stress and on overall laminate behavior is included. An important feature of the model is that damage evolution predictions for viscoelastic laminates can be made. This allows us to track the mechanical response of the laminate up to large load levels, though within the confines of linear viscoelastic constitutive behavior. An experimental investigation of microcracking and physical aging effects in polymer matrix composites was also pursued. The goal of the study was to assess the impact of aging on damage accumulation, in terms of microcracking, and the impact of damage on aging and viscoelastic behavior. The testing was performed both at room and elevated temperatures on [+/-45/90(sub 3)](sub s) and [0(sub 2)/90(sub 3)](sub s) laminates, both containing a set of 90 deg plies centrally located to facilitate investigation of microcracking. Edge replication and X-ray radiography were utilized to quantify damage. Sequenced creep tests were performed to characterize viscoelastic and aging parameters. Results indicate that while the aging times studied have limited influence on damage evolution, elevated temperature and viscoelastic effects have a profound effect on the damage mode seen. Some results are counterintuitive, including the lower strain to failure for elevated temperature tests and the catastrophic failure mode observed for the [+/-45/90(sub 3)](sub s) specimens. The fracture toughness for transverse cracks increases with increasing temperature for both systems: transverse cracking was completely absent prior to failure in [+/-45/90(sub 3)](sub s) and was suppressed for [0(sub 2)/90(sub 3)](sub s). No significant effect of damage on aging or viscoelastic parameters was observed.

  10. An innovative expression model of human health risk based on the quantitative analysis of soil metals sources contribution in different spatial scales.

    PubMed

    Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun

    2018-09-01

    The toxicity of heavy metals from industrialization poses a critical concern, and analysis of the sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently in the whole region and in sub regions can provide more instructive information to protect specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source, and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of the various pollution sources to each sub region (larger grid) and to assess the health risks posed by each source in each sub region. The results of the case study show that, for children (a sensitive population, with school and residential areas as the major regions of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emissions, and agricultural activity. The models and results of this research provide effective spatial information and a useful model for quantifying the hazards of source categories and the associated human health risks at complex industrial sites. Copyright © 2018 Elsevier Ltd. All rights reserved.
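
    One way to read the "factored health risk" idea is that the total risk computed for each sub region (larger grid cell) is apportioned to the pollution sources according to their contribution fractions from the source analysis. The sketch below encodes only that bookkeeping step and is an interpretation, not the paper's actual model; the array names and shapes are assumptions.

    ```python
    import numpy as np

    def factored_health_risk(total_risk, source_fractions):
        """Apportion each sub region's health risk to pollution sources.

        total_risk       : (n_subregions,) risk index per sub region
        source_fractions : (n_subregions, n_sources), rows summing to 1,
                           e.g. from a source-apportionment analysis
        Returns an (n_subregions, n_sources) array of source-specific risks.
        """
        total_risk = np.asarray(total_risk, dtype=float)
        source_fractions = np.asarray(source_fractions, dtype=float)
        return total_risk[:, None] * source_fractions

    # Toy example: two sub regions, three sources (ALP, traffic, agriculture).
    print(factored_health_risk([2.0, 0.5],
                               [[0.6, 0.3, 0.1],
                                [0.2, 0.5, 0.3]]))
    ```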

  11. Maximization of permanent trapping of CO{sub 2} and co-contaminants in the highest-porosity formations of the Rock Springs Uplift (Southwest Wyoming): experimentation and multi-scale modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piri, Mohammad

    2014-03-31

    Under this project, a multidisciplinary team of researchers at the University of Wyoming combined state-of-the-art experimental studies, numerical pore- and reservoir-scale modeling, and high performance computing to investigate trapping mechanisms relevant to geologic storage of mixed scCO{sub 2} in deep saline aquifers. The research included investigations in three fundamental areas: (i) the experimental determination of two-phase flow relative permeability functions, relative permeability hysteresis, and residual trapping under reservoir conditions for mixed scCO{sub 2}-brine systems; (ii) improved understanding of permanent trapping mechanisms; and (iii) scientifically correct, fine-grid numerical simulations of CO{sub 2} storage in deep saline aquifers taking into account the underlying rock heterogeneity. The specific activities included: (1) measurement of reservoir-conditions drainage and imbibition relative permeabilities, irreducible brine and residual mixed scCO{sub 2} saturations, and relative permeability scanning curves (hysteresis) in rock samples from the RSU; (2) characterization of wettability through measurements of contact angles and interfacial tensions under reservoir conditions; (3) development of a physically based dynamic core-scale pore network model; (4) development of new, improved high-performance modules for the UW-team simulator to add hysteresis in the relative permeability functions, geomechanical deformation, and an equilibrium calculation to the existing model (both pore- and core-scale models were rigorously validated against well-characterized core-flooding experiments); and (5) an analysis of long-term permanent trapping of mixed scCO{sub 2} through high-resolution numerical experiments and analytical solutions. The analysis takes into account formation heterogeneity, capillary trapping, and relative permeability hysteresis.
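
    Residual (capillary) trapping of the kind measured in this project is often parameterized with the Land (1968) trapping model, which maps the gas saturation at flow reversal to the saturation left trapped after imbibition. The sketch below shows that common parameterization as an illustration; whether this project used the Land model specifically is not stated in the abstract, and the coefficient in the example is arbitrary.

    ```python
    def land_residual_saturation(s_gi, land_c):
        """Land (1968) trapping model: the residual non-wetting (scCO2)
        saturation after imbibition from an initial saturation s_gi is
        S_gr = S_gi / (1 + C * S_gi), with C the Land trapping coefficient."""
        return s_gi / (1.0 + land_c * s_gi)

    # Example: scCO2 saturation of 0.6 at flow reversal, Land coefficient C = 1.5.
    print(land_residual_saturation(0.6, 1.5))   # ~0.32 of the pore volume trapped
    ```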

  12. Potential application of artificial concepts to aerodynamic simulation

    NASA Technical Reports Server (NTRS)

    Kutler, P.; Mehta, U. B.; Andrews, A.

    1984-01-01

    The concept of artificial intelligence as it applies to computational fluid dynamics simulation is investigated. How expert systems can be adapted to speed the numerical aerodynamic simulation process is also examined. A proposed expert grid generation system is briefly described which, given flow parameters, configuration geometry, and simulation constraints, uses knowledge about the discretization process to determine grid point coordinates, computational surface information, and zonal interface parameters.

  13. Frequency Regulation Services from Connected Residential Devices: Short Paper

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Jin, Xin; Vaidhynathan, Deepthi

    In this paper, we demonstrate the potential benefits that residential buildings can provide for frequency regulation services in the electric power grid. In a hardware-in-the-loop (HIL) implementation, simulated homes along with a physical laboratory home are coordinated via a grid aggregator, and it is shown that their aggregate response has the potential to follow the regulation signal on a timescale of seconds. Connected (communication-enabled) devices in the National Renewable Energy Laboratory's (NREL's) Energy Systems Integration Facility (ESIF) received demand response (DR) requests from a grid aggregator, and the devices responded accordingly to meet the signal while satisfying user comfort bounds and physical hardware limitations. Future research will address the issues of cybersecurity threats, participation rates, and reducing equipment wear-and-tear while providing grid services.
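
    As an illustration of the coordination described above, a grid aggregator can split a regulation request across connected devices in proportion to the flexibility each device reports, clipping at the device limits. The sketch below is a simplified stand-in for the actual NREL HIL coordination scheme, which also enforces user comfort bounds on each device; the function and its inputs are assumptions for illustration.

    ```python
    def dispatch_regulation(signal_kw, capacities_kw):
        """Split a regulation request (kW, signed) across devices in proportion
        to each device's available flexible capacity, clipping at the limits.
        A simplified sketch; comfort constraints would further limit each share."""
        total = sum(capacities_kw)
        if total == 0.0:
            return [0.0 for _ in capacities_kw]
        shares = [signal_kw * c / total for c in capacities_kw]
        return [max(-c, min(c, s)) for s, c in zip(shares, capacities_kw)]

    # Example: a 5 kW regulation-up request shared by three homes.
    print(dispatch_regulation(5.0, [2.0, 3.0, 1.0]))   # [1.67, 2.5, 0.83] kW
    ```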

  14. Atomic Physics with the Goddard High Resolution Spectrograph on the Hubble Space Telescope. III; Oscillator Strengths for Neutral Carbon

    NASA Technical Reports Server (NTRS)

    Zsargo, J.; Federman, S. R.; Cardelli, Jason A.

    1997-01-01

    High quality spectra of interstellar absorption from C I toward beta(sup 1) Sco, rho Oph A, and chi Oph were obtained with the Goddard High Resolution Spectrograph on HST. Many weak lines were detected within the observed wavelength intervals: 1150-1200 A for beta(sup 1) Sco and 1250-1290 A for rho Oph A and chi Oph. Curve-of-growth analyses were performed in order to extract accurate column densities and Doppler parameters from lines with precise laboratory-based f-values. These column densities and b-values were used to obtain a self-consistent set of f-values for all the observed C I lines. A particularly important constraint was the need to reproduce data for more than one line of sight. For about 50% of the lines, the derived f-values differ appreciably from the values quoted by Morton.
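
    The column densities in this record come from full curve-of-growth fits that also constrain the Doppler parameter b. In the optically thin (linear) limit, however, the column density follows directly from the equivalent width, which the sketch below illustrates; the numerical inputs are placeholders, not measurements from the paper.

    ```python
    def column_density_thin(w_lambda_angstrom, wavelength_angstrom, f_value):
        """Optically thin (linear curve-of-growth) column density:
        N [cm^-2] = 1.13e20 * W_lambda / (f * lambda**2),
        with the equivalent width W_lambda and wavelength lambda in Angstrom.
        Lines on the flat part of the curve of growth need the Doppler
        parameter b as well, which this simple limit ignores."""
        return 1.13e20 * w_lambda_angstrom / (f_value * wavelength_angstrom ** 2)

    # Placeholder values: a 5 mA equivalent-width line near 1270 A with f = 0.05.
    print(column_density_thin(5.0e-3, 1270.0, 0.05))   # ~7e12 cm^-2
    ```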

  15. On the damping of right hand circularly polarized waves in spin quantum plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iqbal, Z.; Hussain, A., E-mail: ah-gcu@yahoo.com; Department of Physics, Quaid-i-Azam University Islamabad, Islamabad 45320

    2014-12-15

    A general dispersion relation for right hand circularly polarized waves has been derived using non-relativistic spin quantum kinetic theory. Employing the derived dispersion relation, temporal and spatial damping of the right hand circularly polarized waves are studied for both the degenerate and non-degenerate plasma regimes in two different frequency domains: (i) k{sub ∥}v≫(ω+ω{sub ce}),(ω+ω{sub cg}) and (ii) k{sub ∥}v≪(ω+ω{sub ce}),(ω+ω{sub cg}). Comparison of the cold and hot plasma regimes shows that the right hand circularly polarized wave with spin effects exists for larger k-values than in the spinless case before it damps completely. It is also found that the spin effects can significantly influence the phase and group velocities of the whistler waves in both the degenerate and non-degenerate regimes. The results are also analyzed graphically for some laboratory parameters to demonstrate the physical significance of the present work.

  16. Develop and Test Coupled Physical Parameterizations and Tripolar Wave Model Grid: NAVGEM / WaveWatch III / HYCOM

    DTIC Science & Technology

    2013-09-30

    W. Erick Rogers, Naval Research Laboratory, Code 7322, Stennis Space Center, MS 39529.

  17. Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves

    NASA Technical Reports Server (NTRS)

    Wakeford, H. R.; Sing, D.K.; Deming, D.; Mandell, A.

    2016-01-01

    Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 microns probe primarily the H2O absorption band at 1.4 microns, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on Gibson to analyze exoplanet transit light curves obtained from HST WFC3 to better determine important transit parameters such as the planet-to-star radius ratio R(sub p)/R(sub *), which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest-weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts (delta lambda) best describe the associated systematic in the spectroscopic light curves for most targets, while fast scan rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and the impact of each systematic, evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
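
    The weighting step described above can be written compactly: each systematic model q gets an evidence weight approximated from its AIC, and the transit parameter is combined across all models using those weights. The sketch below follows the generic AIC-weight prescription; the exact variance treatment in the cited analysis may differ.

    ```python
    import numpy as np

    def aic_marginalize(aic, theta, sigma):
        """Marginalize a transit parameter over a grid of systematic models.

        aic   : AIC value of each systematic model fit
        theta : best-fit parameter (e.g. Rp/R*) from each model
        sigma : 1-sigma uncertainty on theta from each model
        Weights follow w_q ∝ exp(-0.5 * (AIC_q - AIC_min)); the returned
        uncertainty combines per-model errors with between-model scatter
        (a generic prescription, possibly differing from the paper's).
        """
        aic, theta, sigma = (np.asarray(x, dtype=float) for x in (aic, theta, sigma))
        w = np.exp(-0.5 * (aic - aic.min()))
        w /= w.sum()
        mean = np.sum(w * theta)
        var = np.sum(w * (sigma ** 2 + (theta - mean) ** 2))
        return mean, np.sqrt(var)

    # Example with three hypothetical systematic models:
    print(aic_marginalize(aic=[10.0, 12.5, 11.0],
                          theta=[0.1205, 0.1198, 0.1210],
                          sigma=[0.0004, 0.0005, 0.0004]))
    ```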

  18. Cyber-Physical System Security of a Power Grid: State-of-the-Art

    DOE PAGES

    Sun, Chih -Che; Liu, Chen -Ching; Xie, Jing

    2016-07-14

    Here, as part of the smart grid development, more and more technologies are developed and deployed on the power grid to enhance the system reliability. A primary purpose of the smart grid is to significantly increase the capability of computer-based remote control and automation. As a result, the level of connectivity has become much higher, and cyber security also becomes a potential threat to the cyber-physical systems (CPSs). In this paper, a survey of the state-of-the-art is conducted on the cyber security of the power grid concerning issues of: the structure of CPSs in a smart grid; cyber vulnerability assessment; cyber protection systems; and testbeds of a CPS. At Washington State University (WSU), the Smart City Testbed (SCT) has been developed to provide a platform to test, analyze and validate defense mechanisms against potential cyber intrusions. A test case is provided in this paper to demonstrate how a testbed helps the study of cyber security and the anomaly detection system (ADS) for substations.

  20. La-oxides as tracers for PuO{sub 2} to simulate contaminated aerosol behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, L.C.; Newton, G.J.; Cronenberg, A.W.

    1994-04-01

    An analytical and experimental study was performed on the use of lanthanide oxides (La-oxides) as surrogates for plutonium oxides (PuO{sub 2}) during simulated buried waste retrieval. This study determined how well the La-oxides move compared to PuO{sub 2} in aerosolized soils during retrieval scenarios. As part of the analytical study, physical properties of La-oxides and PuO{sub 2}, such as molecular diameter, diffusivity, density, and molecular weight, are compared. In addition, an experimental study was performed in which Idaho National Engineering Laboratory (INEL) soil, INEL soil with lanthanides, and INEL soil with plutonium were aerosolized and collected in filters. Comparison of particle size distribution parameters from this experimental study shows similarity between INEL soil, INEL soil with lanthanides, and INEL soil with plutonium.
