Sample records for grid bias method

  1. Addressing Spatial Dependence Bias in Climate Model Simulations—An Independent Component Analysis Approach

    NASA Astrophysics Data System (ADS)

    Nahar, Jannatun; Johnson, Fiona; Sharma, Ashish

    2018-02-01

    Conventional bias correction is usually applied on a grid-by-grid basis, meaning that the resulting corrections cannot address biases in the spatial distribution of climate variables. To solve this problem, a two-step bias correction method is proposed here to correct time series at multiple locations conjointly. The first step transforms the data to a set of statistically independent univariate time series, using a technique known as independent component analysis (ICA). The mutually independent signals can then be bias corrected as univariate time series and back-transformed to improve the representation of spatial dependence in the data. The spatially corrected data are then bias corrected at the grid scale in the second step. The method has been applied to two CMIP5 General Circulation Model simulations for six different climate regions of Australia for two climate variables—temperature and precipitation. The results demonstrate that the ICA-based technique leads to considerable improvements in temperature simulations with more modest improvements in precipitation. Overall, the method results in current climate simulations that have greater equivalency in space and time with observational data.
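
    To make the two-step idea above concrete, the following minimal Python sketch performs a spatial correction on independent components and then a conventional grid-by-grid correction. It is illustrative only, not the authors' implementation: the helper names (quantile_map, ica_spatial_correction), the use of scikit-learn's FastICA, and the toy data are assumptions.

      import numpy as np
      from sklearn.decomposition import FastICA

      def quantile_map(model, obs):
          """Empirical quantile mapping of one univariate series onto obs."""
          q = np.linspace(0, 1, 101)
          return np.interp(model,
                           np.quantile(model, q),   # model quantiles
                           np.quantile(obs, q))     # mapped to observed quantiles

      def ica_spatial_correction(model, obs, n_components=5):
          """Step 1: correct spatial dependence via independent components."""
          ica = FastICA(n_components=n_components, random_state=0)
          s_obs = ica.fit_transform(obs)        # independent components of obs
          s_mod = ica.transform(model)          # model projected onto same basis
          s_cor = np.column_stack([quantile_map(s_mod[:, k], s_obs[:, k])
                                   for k in range(s_obs.shape[1])])
          return ica.inverse_transform(s_cor)   # back-transform to grid space

      # toy example: 1000 time steps at 10 grid cells
      rng = np.random.default_rng(0)
      obs = rng.multivariate_normal(np.zeros(10),
                                    np.full((10, 10), 0.6) + 0.4 * np.eye(10), 1000)
      model = 1.5 * rng.multivariate_normal(np.zeros(10), np.eye(10), 1000) + 2.0

      spatially_corrected = ica_spatial_correction(model, obs)
      # Step 2: conventional grid-by-grid correction of the spatially corrected fields
      final = np.column_stack([quantile_map(spatially_corrected[:, j], obs[:, j])
                               for j in range(obs.shape[1])])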

  2. Stochastic sampling of quadrature grids for the evaluation of vibrational expectation values

    NASA Astrophysics Data System (ADS)

    López Ríos, Pablo; Monserrat, Bartomeu; Needs, Richard J.

    2018-02-01

    The thermal lines method for the evaluation of vibrational expectation values of electronic observables [B. Monserrat, Phys. Rev. B 93, 014302 (2016), 10.1103/PhysRevB.93.014302] was recently proposed as a physically motivated approximation offering balance between the accuracy of direct Monte Carlo integration and the low computational cost of using local quadratic approximations. In this paper we reformulate thermal lines as a stochastic implementation of quadrature-grid integration, analyze the analytical form of its bias, and extend the method to multiple-point quadrature grids applicable to any factorizable harmonic or anharmonic nuclear wave function. The bias incurred by thermal lines is found to depend on the local form of the expectation value, and we demonstrate that the use of finer quadrature grids along selected modes can eliminate this bias, while still offering an ˜30 % lower computational cost than direct Monte Carlo integration in our tests.
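
    The following toy one-mode Python sketch illustrates the general idea of quadrature-grid versus Monte Carlo evaluation of a vibrational expectation value over a harmonic ground-state density; it is not the thermal lines formulation itself. The observable, the zero-point amplitude, and the two-point grid choice are assumptions.

      import numpy as np

      sigma = 0.3                                              # assumed zero-point amplitude
      O = lambda u: 1.0 + 0.8 * u + 0.5 * u**2 + 0.2 * u**3 + 0.1 * u**4  # toy observable

      # exact value for this polynomial: <u>=<u^3>=0, <u^2>=sigma^2, <u^4>=3*sigma^4
      exact = 1.0 + 0.5 * sigma**2 + 0.3 * sigma**4

      # direct Monte Carlo integration over the harmonic ground-state density
      rng = np.random.default_rng(1)
      mc = O(rng.normal(0.0, sigma, 100_000)).mean()

      # two-point grid (thermal-lines-like): u = +/- sigma with equal weights
      two_point = 0.5 * (O(sigma) + O(-sigma))                 # misses the <u^4> term

      # finer Gauss-Hermite quadrature grid along the mode removes that bias
      def gauss_hermite_expectation(f, sigma, n):
          x, w = np.polynomial.hermite_e.hermegauss(n)         # probabilists' Hermite
          return np.sum(w * f(sigma * x)) / np.sqrt(2 * np.pi)

      fine_grid = gauss_hermite_expectation(O, sigma, 5)
      print(exact, mc, two_point, fine_grid)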

  3. Three-grid accelerator system for an ion propulsion engine

    NASA Technical Reports Server (NTRS)

    Brophy, John R. (Inventor)

    1994-01-01

    An apparatus is presented for an ion engine comprising a three-grid accelerator system with the decelerator grid biased negative of the beam plasma. This arrangement substantially reduces the charge-exchange ion current reaching the accelerator grid at high tank pressures, which minimizes erosion of the accelerator grid due to charge exchange ion sputtering, known to be the major accelerator grid wear mechanism. An improved method for life testing ion engines is also provided using the disclosed apparatus. In addition, the invention can also be applied in materials processing.

  4. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    NASA Astrophysics Data System (ADS)

    Werner, A. T.; Cannon, A. J.

    2015-06-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis datasets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical datasets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.

  5. Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods

    NASA Astrophysics Data System (ADS)

    Werner, Arelia T.; Cannon, Alex J.

    2016-04-01

    Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis data sets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical data sets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
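
    The following minimal Python sketch illustrates the kind of split-sample tests described in these two records (a correlation test for temporal sequencing and a two-sample test for equality of distributions). The significance level, the choice of Spearman and Kolmogorov-Smirnov tests, and the toy peak-flow series are assumptions rather than the authors' exact protocol.

      import numpy as np
      from scipy.stats import spearmanr, ks_2samp

      def validate_index(downscaled_driven, obs_driven, alpha=0.05):
          """Pass/fail for sequencing and distribution of one extremes index."""
          rho, p_corr = spearmanr(downscaled_driven, obs_driven)
          ks_stat, p_ks = ks_2samp(downscaled_driven, obs_driven)
          return {"sequencing_pass": p_corr < alpha and rho > 0,
                  "distribution_pass": p_ks > alpha,   # fail to reject equality
                  "rho": rho, "ks": ks_stat}

      # toy example: annual 7-day peak flow from two model chains over 30 years
      rng = np.random.default_rng(0)
      obs_driven = rng.gamma(shape=4.0, scale=250.0, size=30)
      downscaled = obs_driven * rng.normal(1.0, 0.15, size=30) + 50.0  # mildly biased
      print(validate_index(downscaled, obs_driven))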

  6. Sampling designs matching species biology produce accurate and affordable abundance indices

    PubMed Central

    Farley, Sean; Russell, Gareth J.; Butler, Matthew J.; Selinger, Jeff

    2013-01-01

    Wildlife biologists often use grid-based designs to sample animals and generate abundance estimates. Although sampling in grids is theoretically sound, in application, the method can be logistically difficult and expensive when sampling elusive species inhabiting extensive areas. These factors make it challenging to sample animals and meet the statistical assumption of all individuals having an equal probability of capture. Violating this assumption biases results. Does an alternative exist? Perhaps sampling only where resources attract animals (i.e., targeted sampling) would provide accurate abundance estimates more efficiently and affordably. However, biases from this approach would also arise if individuals have an unequal probability of capture, especially if some failed to visit the sampling area. Since most biological programs are resource limited, and acquiring abundance data drives many conservation and management applications, it becomes imperative to identify economical and informative sampling designs. Therefore, we evaluated abundance estimates generated from grid and targeted sampling designs using simulations based on geographic positioning system (GPS) data from 42 Alaskan brown bears (Ursus arctos). Migratory salmon drew brown bears from the wider landscape, concentrating them at anadromous streams. This provided a scenario for testing the targeted approach. Grid and targeted sampling varied by trap amount, location (traps placed randomly, systematically or by expert opinion), and whether traps were stationary or moved between capture sessions. We began by identifying when to sample, and whether bears had equal probability of capture. We compared abundance estimates against seven criteria: bias, precision, accuracy, effort, encounter rates, and the probabilities of capture and recapture. One grid (49 km2 cells) and one targeted configuration provided the most accurate results. Both placed traps by expert opinion and moved traps between capture sessions, which raised capture probabilities. The grid design was least biased (−10.5%), but imprecise (CV 21.2%), and used the most effort (16,100 trap-nights). The targeted configuration was more biased (−17.3%), but most precise (CV 12.3%), with the least effort (7,000 trap-nights). Targeted sampling generated encounter rates four times higher, and capture and recapture probabilities 11% and 60% higher than grid sampling, in a sampling frame 88% smaller. Bears had unequal probability of capture with both sampling designs, partly because some bears never had traps available to sample them. Hence, grid and targeted sampling generated abundance indices, not estimates. Overall, targeted sampling provided the most accurate and affordable design to index abundance. Targeted sampling may offer an alternative method to index the abundance of other species inhabiting expansive and inaccessible landscapes elsewhere, provided they are attracted to resource concentrations. PMID:24392290

  7. Real space mapping of oxygen vacancy diffusion and electrochemical transformations by hysteretic current reversal curve measurements

    DOEpatents

    Kalinin, Sergei V.; Balke, Nina; Borisevich, Albina Y.; Jesse, Stephen; Maksymovych, Petro; Kim, Yunseok; Strelcov, Evgheni

    2014-06-10

    An excitation voltage biases an ionic conducting material sample over a nanoscale grid. The bias sweeps a modulated voltage with increasing maximal amplitudes. A current response is measured at grid locations. Current response reversal curves are mapped over maximal amplitudes of the bias cycles. Reversal curves are averaged over the grid for each bias cycle and mapped over maximal bias amplitudes for each bias cycle. Average reversal curve areas are mapped over maximal amplitudes of the bias cycles. Thresholds are determined for onset and ending of electrochemical activity. A predetermined number of bias sweeps may vary in frequency where each sweep has a constant number of cycles and reversal response curves may indicate ionic diffusion kinetics.

  8. An optimized data fusion method and its application to improve lateral boundary conditions in winter for Pearl River Delta regional PM2.5 modeling, China

    NASA Astrophysics Data System (ADS)

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran

    2018-05-01

    Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of the magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
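
    The following simplified Python sketch shows the general pattern of correcting a gridded model field with a spatially interpolated station bias surface before deriving boundary conditions from it. Inverse-distance weighting stands in for the bias kriging step, and the function names and toy PM2.5 fields are assumptions.

      import numpy as np

      def interpolate_bias(st_xy, st_bias, grid_xy, power=2.0):
          """Spread station biases (model - obs) onto grid points by IDW."""
          d = np.linalg.norm(grid_xy[:, None, :] - st_xy[None, :, :], axis=2)
          w = 1.0 / np.maximum(d, 1e-6) ** power
          return (w * st_bias).sum(axis=1) / w.sum(axis=1)

      def fuse(model_grid, grid_xy, st_xy, st_obs, st_model):
          """Remove the interpolated spatial bias from the gridded model output."""
          bias_surface = interpolate_bias(st_xy, st_model - st_obs, grid_xy)
          return model_grid - bias_surface

      # toy example: 100 grid cells on a 10 x 10 domain, 8 PM2.5 monitors
      rng = np.random.default_rng(2)
      gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
      grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
      truth = 40.0 + 2.0 * grid_xy[:, 0]               # "true" PM2.5 field
      model_grid = truth + 8.0 + 0.5 * grid_xy[:, 1]   # spatially varying model bias
      st_idx = rng.choice(len(grid_xy), 8, replace=False)
      fused = fuse(model_grid, grid_xy, grid_xy[st_idx], truth[st_idx], model_grid[st_idx])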

  9. Using a Mobile Device "App" and Proximal Remote Sensing Technologies to Assess Soil Cover Fractions on Agricultural Fields.

    PubMed

    Laamrani, Ahmed; Pardo Lara, Renato; Berg, Aaron A; Branson, Dave; Joosse, Pamela

    2018-02-27

    Quantifying the amount of crop residue left in the field after harvest is a key issue for sustainability. Conventional assessment approaches (e.g., line-transect) are labor intensive, time-consuming and costly. Many proximal remote sensing devices and systems have been developed for agricultural applications such as cover crop and residue mapping. For instance, current mobile devices (smartphones & tablets) are usually equipped with digital cameras and global positioning systems and use applications (apps) for in-field data collection and analysis. In this study, we assess the feasibility and strength of a mobile device app developed to estimate crop residue cover. The performance of this novel technique (from here on referred to as "app" method) was compared against two point counting approaches: an established digital photograph-grid method and a new automated residue counting script developed in MATLAB at the University of Guelph. Both photograph-grid and script methods were used to count residue under 100 grid points. Residue percent cover was estimated using the app, script and photograph-grid methods on 54 vertical digital photographs (images of the ground taken from above at a height of 1.5 m) collected from eighteen fields (9 corn and 9 soybean, 3 samples each) located in southern Ontario. Results showed that residue estimates from the app method were in good agreement with those obtained from both photograph-grid and script methods (R² = 0.86 and 0.84, respectively). This study has found that the app underestimates the residue coverage by -6.3% and -10.8% when compared to the photograph-grid and script methods, respectively. With regards to residue type, soybean has a slightly lower bias than corn (i.e., -5.3% vs. -7.4%). For photos with residue <30%, the app derived residue measurements are within ±5% difference (bias) of both photograph-grid- and script-derived residue measurements. These methods could therefore be used to track the recommended minimum soil residue cover of 30%, implemented to reduce farmland topsoil and nutrient losses that impact water quality. Overall, the app method was found to be a good alternative to the point counting methods, which are more time-consuming.

  10. Downscaling RCP8.5 daily temperatures and precipitation in Ontario using localized ensemble optimal interpolation (EnOI) and bias correction

    NASA Astrophysics Data System (ADS)

    Deng, Ziwang; Liu, Jinliang; Qiu, Xin; Zhou, Xiaolan; Zhu, Huaiping

    2017-10-01

    A novel method for daily temperature and precipitation downscaling is proposed in this study which combines the Ensemble Optimal Interpolation (EnOI) and bias correction techniques. For downscaling temperature, the day-to-day seasonal cycle of high-resolution temperature from the NCEP climate forecast system reanalysis (CFSR) is used as the background state. An enlarged ensemble of daily temperature anomalies relative to this seasonal cycle and information from global climate models (GCMs) are used to construct a gain matrix for each calendar day. Consequently, the relationship between large- and local-scale processes represented by the gain matrix will change accordingly. The gain matrix contains information on the realistic spatial correlation of temperature between different CFSR grid points, between CFSR grid points and GCM grid points, and between different GCM grid points. Therefore, this downscaling method keeps spatial consistency and reflects the interaction between local geographic and atmospheric conditions. Maximum and minimum temperatures are downscaled using the same method. For precipitation, because of the non-Gaussianity issue, a logarithmic transformation is applied to daily total precipitation prior to downscaling. Cross validation and independent data validation are used to evaluate this algorithm. Finally, data from a 29-member ensemble of phase 5 of the Coupled Model Intercomparison Project (CMIP5) GCMs are downscaled to CFSR grid points in Ontario for the period from 1981 to 2100. The results show that this method is capable of generating high-resolution details without changing large-scale characteristics. It results in much lower absolute errors in local-scale details at most grid points than simple spatial downscaling methods. Biases in the downscaled data inherited from GCMs are corrected with a linear method for temperatures and distribution mapping for precipitation. The downscaled ensemble projects significant warming with amplitudes of 3.9 and 6.5 °C for the 2050s and 2080s relative to the 1990s in Ontario, respectively. Cooling degree days and hot days will significantly increase over southern Ontario, and heating degree days and cold days will significantly decrease in northern Ontario. Annual total precipitation will increase over Ontario and heavy precipitation events will increase as well. These results are consistent with conclusions in many other studies in the literature.
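
    As a schematic of the analysis step underlying an EnOI-type correction, the Python sketch below builds a gain matrix from an ensemble of fine-scale anomalies and updates a background field from a handful of coarse values. The dimensions, the observation-error variance, and the averaging operator H are assumptions; this is the textbook formula, not the authors' code.

      import numpy as np

      def enoi_update(x_b, anomalies, H, y, obs_var=1.0):
          """x_a = x_b + K (y - H x_b), with K = B H^T (H B H^T + R)^-1."""
          A = anomalies - anomalies.mean(axis=1, keepdims=True)   # n_fine x n_ens
          B = A @ A.T / (A.shape[1] - 1)                          # background covariance
          R = obs_var * np.eye(H.shape[0])
          K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)            # gain matrix
          return x_b + K @ (y - H @ x_b)

      # toy example: 50 fine grid points, 5 coarse "GCM" points, 40 ensemble members
      rng = np.random.default_rng(3)
      n_fine, n_coarse, n_ens = 50, 5, 40
      anomalies = rng.normal(size=(n_fine, n_ens)).cumsum(axis=0)  # spatially correlated
      H = np.zeros((n_coarse, n_fine))
      for i in range(n_coarse):                  # each coarse cell averages 10 fine cells
          H[i, i * 10:(i + 1) * 10] = 0.1
      x_b = np.zeros(n_fine)                     # background anomaly relative to climatology
      y = rng.normal(0.0, 2.0, n_coarse)         # coarse GCM anomalies for one day
      x_a = enoi_update(x_b, anomalies, H, y)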

  11. Measurement of sheath potential by three emissive-probe methods in DC filament plasmas near a biased grid

    NASA Astrophysics Data System (ADS)

    Kang, In-Je; Park, In-Sun; Wackerbarth, Eugene; Bae, Min-Keun; Hershkowitz, Noah; Severn, Greg; Chung, Kyu-Sun

    2017-10-01

    Plasma potential structures are measured with an emissive probe near a negatively biased grid (-100 V, 80 mm diam., 40 lines/cm) immersed in a hot filament DC discharge in Kr. Three different methods of analysis are compared: inflection point (IP), floating potential (FP) and separation point (SE) methods. The plasma device at the University of San Diego (length = 64 cm, diameter = 32 cm, source = filament DC discharge) was operated with 5 × 10^8 …

  12. Development of Spatiotemporal Bias-Correction Techniques for Downscaling GCM Predictions

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Graham, W. D.; Geurink, J.; Adams, A.; Martinez, C. J.

    2010-12-01

    Accurately representing the spatial variability of precipitation is an important factor for predicting watershed response to climatic forcing, particularly in small, low-relief watersheds affected by convective storm systems. Although Global Circulation Models (GCMs) generally preserve spatial relationships between large-scale and local-scale mean precipitation trends, most GCM downscaling techniques focus on preserving only the observed temporal variability on a point-by-point basis, not the spatial patterns of events. Downscaled GCM results (e.g., CMIP3 ensembles) have been widely used to predict hydrologic implications of climate variability and climate change in large snow-dominated river basins in the western United States (Diffenbaugh et al., 2008; Adam et al., 2009). However, fewer applications to smaller rain-driven river basins in the southeastern US (where preserving spatial variability of rainfall patterns may be more important) have been reported. In this study a new method was developed to bias-correct GCMs to preserve both the long-term temporal mean and variance of the precipitation data and the spatial structure of daily precipitation fields. Forty-year retrospective simulations (1960-1999) from 16 GCMs were collected (IPCC, 2007; WCRP CMIP3 multi-model database: https://esg.llnl.gov:8443/), and the daily precipitation data at coarse resolution (i.e., 280 km) were interpolated to 12 km spatial resolution and bias corrected using gridded observations over the state of Florida (Maurer et al., 2002; Wood et al., 2002; Wood et al., 2004). In this method, spatial random fields were generated that preserve the observed spatial correlation structure of the historic gridded observations and the spatial mean corresponding to the coarse-scale GCM daily rainfall. The spatiotemporal variability of the spatio-temporally bias-corrected GCMs was evaluated against gridded observations and compared to the original temporally bias-corrected and downscaled CMIP3 data for central Florida. The hydrologic response of two southwest Florida watersheds to the gridded observation data, the original bias-corrected CMIP3 data, and the new spatiotemporally corrected CMIP3 predictions was compared using an integrated surface-subsurface hydrologic model developed by Tampa Bay Water.
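
    The Python sketch below illustrates one simple way to draw a spatially correlated random field whose spatial mean matches a prescribed coarse-scale value, in the spirit of the spatial random fields described above. The exponential covariance model, its range parameter, and the toy grid are assumptions; negative values would additionally need to be handled for precipitation.

      import numpy as np

      def correlated_field(grid_xy, corr_length, sigma, target_mean, rng):
          """Draw one field with exponential spatial covariance and a fixed mean."""
          d = np.linalg.norm(grid_xy[:, None, :] - grid_xy[None, :, :], axis=2)
          C = sigma**2 * np.exp(-d / corr_length)          # exponential covariance
          L = np.linalg.cholesky(C + 1e-8 * np.eye(len(C)))
          field = L @ rng.standard_normal(len(C))
          return field - field.mean() + target_mean        # impose coarse-cell mean

      rng = np.random.default_rng(4)
      gx, gy = np.meshgrid(np.arange(8.0), np.arange(8.0))   # fine cells inside one GCM cell
      grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
      gcm_daily_rain = 6.0                                    # coarse-scale value for the day
      field = correlated_field(grid_xy, corr_length=3.0, sigma=2.0,
                               target_mean=gcm_daily_rain, rng=rng)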

  13. GRID BLACKOUT IN VACUUM TUBES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardin, K.D.

    1961-06-01

    A method is presented which gives quantitative data for characterizing the grid blackout effect and is applicable to calculating circuit degradation. Data are presented for several tube types which show developed bias and discharge time constants as a function of pulse input conditions. Blackout can seriously change the performance of any vacuum tube circuit which utilizes the tube in positive grid operation. The effects on CW oscillators and UHF mixers are discussed. An equivalent circuit which simulates some portions of the blackout phenomenon is presented and used to calculate effective capacitance and resistance associated with the grid surface. (auth)

  14. Using a Mobile Device “App” and Proximal Remote Sensing Technologies to Assess Soil Cover Fractions on Agricultural Fields

    PubMed Central

    Laamrani, Ahmed; Branson, Dave; Joosse, Pamela

    2018-01-01

    Quantifying the amount of crop residue left in the field after harvest is a key issue for sustainability. Conventional assessment approaches (e.g., line-transect) are labor intensive, time-consuming and costly. Many proximal remote sensing devices and systems have been developed for agricultural applications such as cover crop and residue mapping. For instance, current mobile devices (smartphones & tablets) are usually equipped with digital cameras and global positioning systems and use applications (apps) for in-field data collection and analysis. In this study, we assess the feasibility and strength of a mobile device app developed to estimate crop residue cover. The performance of this novel technique (from here on referred to as “app” method) was compared against two point counting approaches: an established digital photograph-grid method and a new automated residue counting script developed in MATLAB at the University of Guelph. Both photograph-grid and script methods were used to count residue under 100 grid points. Residue percent cover was estimated using the app, script and photograph-grid methods on 54 vertical digital photographs (images of the ground taken from above at a height of 1.5 m) collected from eighteen fields (9 corn and 9 soybean, 3 samples each) located in southern Ontario. Results showed that residue estimates from the app method were in good agreement with those obtained from both photograph–grid and script methods (R2 = 0.86 and 0.84, respectively). This study has found that the app underestimates the residue coverage by −6.3% and −10.8% when compared to the photograph-grid and script methods, respectively. With regards to residue type, soybean has a slightly lower bias than corn (i.e., −5.3% vs. −7.4%). For photos with residue <30%, the app derived residue measurements are within ±5% difference (bias) of both photograph-grid- and script-derived residue measurements. These methods could therefore be used to track the recommended minimum soil residue cover of 30%, implemented to reduce farmland topsoil and nutrient losses that impact water quality. Overall, the app method was found to be a good alternative to the point counting methods, which are more time-consuming. PMID:29495497

  15. Bias correction of surface downwelling longwave and shortwave radiation for the EWEMBI dataset

    NASA Astrophysics Data System (ADS)

    Lange, Stefan

    2018-05-01

    Many meteorological forcing datasets include bias-corrected surface downwelling longwave and shortwave radiation (rlds and rsds). Methods used for such bias corrections range from multi-year monthly mean value scaling to quantile mapping at the daily timescale. An additional downscaling is necessary if the data to be corrected have a higher spatial resolution than the observational data used to determine the biases. This was the case when EartH2Observe (E2OBS; Calton et al., 2016) rlds and rsds were bias-corrected using more coarsely resolved Surface Radiation Budget (SRB; Stackhouse Jr. et al., 2011) data for the production of the meteorological forcing dataset EWEMBI (Lange, 2016). This article systematically compares various parametric quantile mapping methods designed specifically for this purpose, including those used for the production of EWEMBI rlds and rsds. The methods vary in the timescale at which they operate, in their way of accounting for physical upper radiation limits, and in their approach to bridging the spatial resolution gap between E2OBS and SRB. It is shown how temporal and spatial variability deflation related to bilinear interpolation and other deterministic downscaling approaches can be overcome by downscaling the target statistics of quantile mapping from the SRB to the E2OBS grid such that the sub-SRB-grid-scale spatial variability present in the original E2OBS data is retained. Cross validations at the daily and monthly timescales reveal that it is worthwhile to take empirical estimates of physical upper limits into account when adjusting either radiation component and that, overall, bias correction at the daily timescale is more effective than bias correction at the monthly timescale if sampling errors are taken into account.
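
    The Python sketch below shows one simple way to build a physical upper limit into quantile mapping by mapping values as fractions of an assumed maximum and clipping the result. Empirical mapping is used here in place of the parametric methods compared in the paper, and the 350 W m-2 ceiling and toy series are assumptions.

      import numpy as np

      def bounded_quantile_map(model, obs, upper_model, upper_obs):
          """Map model/upper_model quantiles onto obs/upper_obs quantiles."""
          q = np.linspace(0, 1, 101)
          frac_mod, frac_obs = model / upper_model, obs / upper_obs
          mapped = np.interp(frac_mod,
                             np.quantile(frac_mod, q), np.quantile(frac_obs, q))
          return np.clip(mapped, 0.0, 1.0) * upper_obs      # never exceeds the ceiling

      # toy daily-mean rsds series (W m-2) with an assumed 350 W m-2 physical ceiling
      rng = np.random.default_rng(5)
      obs = np.clip(rng.normal(180, 60, 2000), 5, 340)
      model = np.clip(rng.normal(210, 80, 2000), 5, 349)     # positively biased input data
      corrected = bounded_quantile_map(model, obs, upper_model=350.0, upper_obs=350.0)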

  16. Optimize of shrink process with X-Y CD bias on hole pattern

    NASA Astrophysics Data System (ADS)

    Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami

    2017-03-01

    Gridded design rules [1] are a major approach for configuring logic circuits with 193-nm immersion lithography. In scaled grid patterning, line-and-space patterns on the order of 10 nm can be made using multiple patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, as the scale decreases the line-cut process suffers from error sources such as pattern defects, placement error, roughness, and X-Y CD bias. We attempted to cure hole-pattern roughness using additional processes such as line smoothing [5]; each smoothing process showed a different effect. As a result, without an additional process the CDx shrink amount is smaller than that of CDy. In this paper we report a comparison of pattern controllability between EUV and 193-nm immersion lithography and discuss the optimum method for CD bias control on hole patterns.

  17. Control of nanoparticle size and amount by using the mesh grid and applying DC-bias to the substrate in silane ICP-CVD process

    NASA Astrophysics Data System (ADS)

    Yoo, Seung-Wan; Hwang, Nong-Moon; You, Shin-Jae; Kim, Jung-Hyung; Seong, Dae-Jin

    2017-11-01

    The effect of applying a bias to the substrate on the size and amount of charged crystalline silicon nanoparticles deposited on the substrate was investigated in the inductively coupled plasma chemical vapor deposition process. By inserting the grounded grid with meshes above the substrate, the region just above the substrate was separated from the plasma. Thereby, crystalline Si nanoparticles formed by the gas-phase reaction in the plasma could be deposited directly on the substrate, successfully avoiding the formation of a film. Moreover, the size and the amount of deposited nanoparticles could be changed by applying direct current bias to the substrate. When the grid of 1 × 1-mm-sized mesh was used, the nanoparticle flux was increased as the negative substrate bias increased from 0 to - 50 V. On the other hand, when a positive bias was applied to the substrate, Si nanoparticles were not deposited at all. Regardless of substrate bias voltages, the most frequently observed nanoparticles synthesized with the grid of 1 × 1-mm-sized mesh had the size range of 10-12 nm in common. When the square mesh grid of 2-mm size was used, as the substrate bias was increased from - 50 to 50 V, the size of the nanoparticles observed most frequently increased from the range of 8-10 to 40-45 nm but the amount that was deposited on the substrate decreased.

  18. The power grid AGC frequency bias coefficient online identification method based on wide area information

    NASA Astrophysics Data System (ADS)

    Wang, Zian; Li, Shiguang; Yu, Ting

    2015-12-01

    This paper proposes an online identification method for the regional frequency deviation coefficient, based on an analysis of the AGC adjustment response mechanism of interconnected grids and on the real-time operating state of generators obtained from PMU measurements. The optimization of the regional frequency deviation coefficient under the actual operating state of the power system is analyzed in order to achieve more accurate and efficient automatic generation control. The validity of the online identification method is verified by building a long-term frequency control simulation model of a two-area interconnected power system.
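
    As a heavily simplified illustration of identifying a frequency bias coefficient from measured data, the Python sketch below regresses an area's measured power response on the PMU-measured frequency deviation. The linear model delta_P ≈ -B·delta_f, the unit convention, and the toy data are assumptions, not the paper's algorithm.

      import numpy as np

      def estimate_frequency_bias(delta_f, delta_p):
          """Least-squares estimate of B, returned in MW per 0.1 Hz."""
          B_per_hz = -np.dot(delta_f, delta_p) / np.dot(delta_f, delta_f)
          return B_per_hz / 10.0

      # toy PMU window: 120 samples of frequency deviation (Hz) and area response (MW)
      rng = np.random.default_rng(6)
      delta_f = rng.normal(0.0, 0.02, 120)
      true_B = 900.0                       # assumed MW/Hz for the area
      delta_p = -true_B * delta_f + rng.normal(0.0, 5.0, 120)
      print(estimate_frequency_bias(delta_f, delta_p))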

  19. Dynamics of flows, fluctuations, and global instability under electrode biasing in a linear plasma device

    NASA Astrophysics Data System (ADS)

    Desjardins, T. R.; Gilmore, M.

    2016-05-01

    Grid biasing is utilized in a large-scale helicon plasma to modify an existing instability, which is shown both experimentally and with a linear stability analysis to be a hybrid drift-Kelvin-Helmholtz mode. At low magnetic field strengths, coherent fluctuations are present, while at high magnetic field strengths, the plasma is broad-band turbulent. Grid biasing is used to drive the once-coherent fluctuations to a broad-band turbulent state, as well as to suppress them. There is a corresponding change in the flow shear. When a high positive bias (10 Te) is applied to the grid electrode, a large-scale mode (ñ/n ≈ 50%) is excited. This mode has been identified as the potential relaxation instability.

  20. Merging Station Observations with Large-Scale Gridded Data to Improve Hydrological Predictions over Chile

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.; Verbist, K. M. J.

    2016-12-01

    Hydrological predictions at regional-to-global scales are often hampered by the lack of meteorological forcing data. The use of large-scale gridded meteorological data is able to overcome this limitation, but these data are subject to regional biases and unrealistic values at the local scale. This is especially challenging in regions such as Chile, where the climate exhibits high spatial heterogeneity as a result of its long latitudinal span and dramatic elevation changes. However, regional station-based observational datasets are not fully exploited and have the potential of constraining biases and spatial patterns. This study aims at adjusting precipitation and temperature estimates from the Princeton University global meteorological forcing (PGF) gridded dataset to improve hydrological simulations over Chile, by assimilating 982 gauges from the Dirección General de Aguas (DGA). To merge station data with the gridded dataset, we use a state-space estimation method to produce optimal gridded estimates, considering both the error of the station measurements and that of the gridded PGF product. The PGF daily precipitation, maximum and minimum temperature at 0.25° spatial resolution are adjusted for the period 1979-2010. Precipitation and temperature gauges with long and continuous records (>70% temporal coverage) are selected, while the remaining stations are used for validation. Leave-one-out cross validation verifies the robustness of this data assimilation approach. The merged dataset is then used to force the Variable Infiltration Capacity (VIC) hydrological model over Chile at a daily time step, and the simulations are compared to streamflow observations. Our initial results show that the station-merged PGF precipitation effectively captures drizzle and the spatial pattern of storms. Overall, the merged dataset shows significant improvements over the original PGF, with reduced biases and stronger inter-annual variability. The invariant spatial pattern of errors between the station data and the gridded product opens up the possibility of merging real-time satellite and intermittent gauge observations to produce more accurate real-time hydrological predictions.
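
    The scalar Python sketch below shows the minimum-variance merge that underlies such state-space estimates: each gridded value is combined with a co-located gauge value, weighted by the assumed error variances of the two sources. The variance values and the single-cell setting are assumptions; the study's full method operates on gridded fields.

      def merge(background, station, var_background, var_station):
          """Optimal (minimum-variance) combination of two unbiased estimates."""
          gain = var_background / (var_background + var_station)
          analysis = background + gain * (station - background)
          analysis_var = (1.0 - gain) * var_background
          return analysis, analysis_var

      # example: daily precipitation at one 0.25-degree cell
      pgf_value = 12.0          # mm/day from the gridded PGF product
      gauge_value = 7.5         # mm/day from a gauge inside the cell
      print(merge(pgf_value, gauge_value, var_background=9.0, var_station=1.0))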

  1. Domain-Level Assessment of the Weather Running Estimate-Nowcast (WREN) Model

    DTIC Science & Technology

    2016-11-01

    [Excerpt from the report's table of contents and list of figures: a section on what is added by decreased grid spacing; performance comparisons of two WRE–N configurations (Dumais WRE–N with FDDA vs. Passner WRE–N with FDDA); figures of bias and RMSE errors for the three grids for Dumais and Passner WRE–N with FDDA, for 2-m-AGL TMP (K) and DPT (K).]

  2. Adaptive enhanced sampling by force-biasing using neural networks

    NASA Astrophysics Data System (ADS)

    Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.

    2018-04-01

    A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.

  3. Implication of observed cloud variability for parameterizations of microphysical and radiative transfer processes in climate models

    NASA Astrophysics Data System (ADS)

    Huang, D.; Liu, Y.

    2014-12-01

    The effects of subgrid cloud variability on grid-average microphysical rates and radiative fluxes are examined by use of long-term retrieval products at the Tropical West Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement (ARM) Program. Four commonly used distribution functions, the truncated Gaussian, Gamma, lognormal, and Weibull distributions, are constrained to have the same mean and standard deviation as observed cloud liquid water content. The PDFs are then used to upscale relevant physical processes to obtain grid-average process rates. It is found that the truncated Gaussian representation results in up to 30% mean bias in autoconversion rate whereas the mean bias for the lognormal representation is about 10%. The Gamma and Weibull distribution function performs the best for the grid-average autoconversion rate with the mean relative bias less than 5%. For radiative fluxes, the lognormal and truncated Gaussian representations perform better than the Gamma and Weibull representations. The results show that the optimal choice of subgrid cloud distribution function depends on the nonlinearity of the process of interest and thus there is no single distribution function that works best for all parameterizations. Examination of the scale (window size) dependence of the mean bias indicates that the bias in grid-average process rates monotonically increases with increasing window sizes, suggesting the increasing importance of subgrid variability with increasing grid sizes.
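
    The short Python sketch below illustrates why the subgrid distribution matters for a nonlinear process rate: the grid-average of a Kessler-type autoconversion rate a·q^b differs from the rate evaluated at the grid-mean liquid water content. The rate constants and the lognormal parameters are assumptions chosen only to demonstrate the effect.

      import numpy as np

      a, b = 1350.0, 2.47                 # toy autoconversion rate r = a * q**b
      mean_q, std_q = 0.3e-3, 0.2e-3      # grid-mean and subgrid std of liquid water (kg/kg)

      # lognormal with the prescribed mean and standard deviation
      sigma2 = np.log(1.0 + (std_q / mean_q) ** 2)
      mu = np.log(mean_q) - 0.5 * sigma2
      rng = np.random.default_rng(7)
      q = rng.lognormal(mu, np.sqrt(sigma2), 1_000_000)

      grid_average_rate = (a * q ** b).mean()     # upscaled with the subgrid PDF
      rate_at_mean = a * mean_q ** b              # what a model ignoring variability gets
      print(grid_average_rate / rate_at_mean)     # > 1 for a convex rate (Jensen's inequality)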

  4. Multi-time Scale Joint Scheduling Method Considering the Grid of Renewable Energy

    NASA Astrophysics Data System (ADS)

    Zhijun, E.; Wang, Weichen; Cao, Jin; Wang, Xin; Kong, Xiangyu; Quan, Shuping

    2018-01-01

    Prediction errors in renewable generation such as wind and solar power make power system dispatch difficult. In this paper, a multi-time-scale robust scheduling method is proposed to address this problem. It reduces the impact of clean-energy prediction bias on the power grid by scheduling on multiple time scales (day-ahead, intraday, real time) and coordinating the dispatched output of various power supplies such as hydropower, thermal power, wind power, and gas power. A robust scheduling formulation is adopted to ensure the robustness of the scheduling scheme. By pricing wind curtailment and load shedding, the robustness is translated into a risk cost, and the uncertainty set is optimized to minimize the overall cost. The validity of the method is verified by simulation.

  5. Is Bohm's Criterion satisfied in a weakly ionized Kr discharge, in the vicinity of a biased grid that permits counter streaming ion flow?

    NASA Astrophysics Data System (ADS)

    Wackerbarth, Eugene; Kang, In-Je; Park, In-Sun; Chung, Kyu-Sun; Hershkowitz, Noah; Severn, Greg

    2017-10-01

    We consider the problem of the sheath near a negatively biased grid (-100 V) that permits ion flow in both directions. We show the first laser-induced fluorescence (LIF) measurements of ion velocity distribution functions (IVDFs) in such a system. We worked with a hot filament discharge at the University of San Diego (length = 64 cm, diameter = 32 cm) in which a Kr discharge was operated with a neutral pressure of 0.1 mTorr, ne ≈ 3 × 10^9 cm^-3, and Te ≈ 3.5 eV. Sheath potentials were measured with an emissive probe using the inflection point method in the limit of zero emission. The LIF collection optics were recently upgraded to a 4f system with a spatial resolution smaller than 1 mm. IVDFs measured near the grid (80 mm diam., 40 lines/cm) indicate ion flow from both sides of the grid. Preliminary analysis of the moments of the IVDFs indicates that Bohm's Criterion is satisfied at the sheath edge. Thanks to DOE Grant No. DE-SC00114226, NSF Grant Nos. 1464741, 1464838, and the National Research Foundation of Korea funded by the Ministry of Science, ICT and Future Planning (2015M1A7A1A01002784).

  6. Estimation and correction of different flavors of surface observation biases in ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.

    2017-04-01

    The impact of assimilating surface observations on improving weather prediction inside the boundary layer, as well as the flow aloft, has been shown in several publications. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with comparing point-based observations to grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator and they extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g., for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three different kinds of biases are added to 2-meter temperature for synthetic METARs. From the simplest to the most sophisticated, the imposed biases are: (1) a spatially invariant bias, (2) a spatially varying bias proportional to topographic height differences between the model and the observations, and (3) a bias that is proportional to the temperature. The target region, characterized by complex terrain, is the western U.S., on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1). The more complex bias structures in experiments (2) and (3) are more difficult to estimate, but estimation is still possible. Estimating the parameter in experiments with unbiased observations results in spatial and temporal parameter variability about zero and establishes a threshold on the accuracy of the parameter in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but the temporal and spatial variability in the parameter estimates is similar to that found when estimating a zero bias in the observations. The distributions are related to other errors in the forecasts, indicating that the parameters are absorbing some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
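
    The toy Python sketch below shows the augmented-state idea in its simplest form: an observation-bias parameter is appended to the state and updated jointly with temperature by an ensemble Kalman filter whose forward operator includes the bias. The ensemble size, error variances, and perturbed-observation update are assumptions, not the WRF/OSSE configuration used in the study.

      import numpy as np

      def enkf_augmented_update(T_ens, bias_ens, y_obs, obs_err_var, rng):
          """Update [temperature, bias] jointly from one 2-m temperature observation."""
          Z = np.vstack([T_ens, bias_ens])                       # 2 x n_ens augmented state
          Hx = T_ens + bias_ens                                  # forward operator includes bias
          A = Z - Z.mean(axis=1, keepdims=True)
          Hp = Hx - Hx.mean()
          P_zh = A @ Hp / (Z.shape[1] - 1)                       # cov(state, predicted obs)
          P_hh = Hp @ Hp / (Z.shape[1] - 1) + obs_err_var
          K = P_zh / P_hh                                        # 2-element gain
          y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_err_var), Z.shape[1])
          return Z + np.outer(K, y_pert - Hx)                    # updated [T; bias] ensemble

      rng = np.random.default_rng(8)
      T_ens = rng.normal(288.0, 1.0, 80)        # 80-member prior 2-m temperature (K)
      bias_ens = rng.normal(0.0, 0.5, 80)       # prior for the observation-bias parameter
      y_obs = 289.5                             # biased report (truth 288.5 K plus 1 K bias)
      T_new, bias_new = enkf_augmented_update(T_ens, bias_ens, y_obs, obs_err_var=1.0, rng=rng)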

  7. Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin

    USGS Publications Warehouse

    Casto, Daniel W.

    2001-01-01

    Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.

  8. Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.

    PubMed

    Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz

    2014-04-21

    We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our innovative data demodulation method needs two grid-illumination images mutually phase shifted by π (half a grid period) but precise phase displacement between two frames is not required. Upon frames subtraction the input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on the empirical mode decomposition for the object spatial frequency selection (noise reduction and bias term removal). The second stage consists in calculating high contrast image using the two-dimensional spiral Hilbert transform. Our algorithm effectiveness is compared with the results calculated for the same input data using structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. Results of our approach compare very favorably with SIM and HiLo techniques.
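
    The stripped-down Python sketch below reproduces only the demodulation core suggested by the abstract: the two π-shifted frames are subtracted and the result is demodulated with a spiral (vortex) phase filter in the Fourier domain to obtain the sectioned amplitude. The empirical-mode-decomposition pre-filtering step is omitted, and the synthetic frames and filter details are assumptions rather than the authors' code.

      import numpy as np

      def two_shot_section(frame_0, frame_pi):
          """Optically sectioned image from two pi-shifted grid-illumination frames."""
          d = frame_0 - frame_pi                         # doubles grid modulation, removes bias
          ky, kx = np.meshgrid(np.fft.fftfreq(d.shape[0]),
                               np.fft.fftfreq(d.shape[1]), indexing="ij")
          spiral = np.exp(1j * np.arctan2(ky, kx))       # spiral (vortex) phase filter
          spiral[0, 0] = 0.0
          q = np.fft.ifft2(spiral * np.fft.fft2(d))      # quadrature component
          return np.sqrt(d ** 2 + np.abs(q) ** 2)        # amplitude envelope

      # toy frames: in-focus object modulated by a grid, plus out-of-focus background
      y, x = np.mgrid[0:256, 0:256]
      obj = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 2000.0)
      grid = np.cos(2 * np.pi * x / 8.0)
      background = 0.5
      frame_0 = background + obj * (1 + grid)
      frame_pi = background + obj * (1 - grid)
      sectioned = two_shot_section(frame_0, frame_pi)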

  9. Thinking About Challenging Behavior: A Repertory Grid Study of Inpatient Staff Beliefs

    ERIC Educational Resources Information Center

    Hare, Dougal Julian; Durand, Marianne; Hendy, Steve; Wittkowski, Anja

    2012-01-01

    Studies examining staff attitudes toward people with intellectual disability have traditionally used pre-determined categories and models or been open to researcher bias. The use of methods derived from personal construct psychology permits an objective investigation of staff views and attitudes without such limitations. Fourteen staff from an…

  10. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  11. Focused beams of fast neutral atoms in glow discharge plasma

    NASA Astrophysics Data System (ADS)

    Grigoriev, S. N.; Melnik, Yu. A.; Metel, A. S.; Volosova, M. A.

    2017-06-01

    Glow discharge with electrostatic confinement of electrons in a vacuum chamber allows plasma processing of conductive products in a wide pressure range of p = 0.01 - 5 Pa. To assist processing of a small dielectric product with a concentrated on its surface beam of fast neutral atoms, which do not cause charge effects, ions from the discharge plasma are accelerated towards the product and transformed into fast atoms. The beam is produced using a negatively biased cylindrical or a spherical grid immersed in the plasma. Ions accelerated by the grid turn into fast neutral atoms at p > 0.1 Pa due to charge exchange collisions with gas atoms in the space charge sheaths adjoining the grid. The atoms form a diverging neutral beam and a converging beam propagating from the grid in opposite directions. The beam propagating from the concave surface of a 0.24-m-wide cylindrical grid is focused on a target within a 10-mm-wide stripe, and the beam from the 0.24-m-diameter spherical grid is focused within a 10-mm-diameter circle. At the bias voltage U = 5 kV and p ˜ 0.1 Pa, the energy of fast argon atoms is distributed continuously from zero to eU ˜ 5 keV. The pressure increase to 1 Pa results in the tenfold growth of their equivalent current and a decrease in the mean energy by an order of magnitude, which substantially raises the efficiency of material etching. Sharpening by the beam of ceramic knife-blades proved that the new method for the generation of concentrated fast atom beams can be effectively used for the processing of dielectric materials in vacuum.

  12. Development of a fountain detector for spectroscopy of secondary electrons in scanning electron microscopy

    NASA Astrophysics Data System (ADS)

    Agemura, Toshihide; Kimura, Takashi; Sekiguchi, Takashi

    2018-04-01

    The low-pass secondary electron (SE) detector, the so-called “fountain detector (FD)”, for scanning electron microscopy has high potential for application to the imaging of low-energy SEs. Low-energy SE imaging may be used for detecting the surface potential variations of a specimen. However, the detected SEs include a certain fraction of tertiary electrons (SE3s) because some of the high-energy backscattered electrons hit the grid to yield SE3s. We have overcome this difficulty by increasing the aperture ratio of the bias and ground grids and using the lock-in technique, in which the AC field with the DC offset was applied on the bias grid. The energy-filtered SE images of a 4H-SiC p-n junction show complex behavior according to the grid bias. These observations are clearly explained by the variations of Auger spectra across the p-n junction. The filtered SE images taken with the FD can be applied to observing the surface potential variation of specimens.

  13. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required in order to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been applied in practice in our driverless car. PMID:26927108

  14. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are required in order to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a diversity of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been applied in practice in our driverless car.

  15. Using ERA-Interim reanalysis for creating datasets of energy-relevant climate variables

    NASA Astrophysics Data System (ADS)

    Jones, Philip D.; Harpham, Colin; Troccoli, Alberto; Gschwind, Benoit; Ranchin, Thierry; Wald, Lucien; Goodess, Clare M.; Dorling, Stephen

    2017-07-01

    The construction of a bias-adjusted dataset of near-surface climate variables using the ERA-Interim reanalysis is presented. A number of different, variable-dependent, bias-adjustment approaches have been proposed. Here we modify the parameters of different distributions (depending on the variable), adjusting ERA-Interim against gridded station or direct station observations. The variables are air temperature, dewpoint temperature, precipitation (daily only), solar radiation, wind speed, and relative humidity. These are available at either 3 or 6 h timescales over the period 1979-2016. The resulting bias-adjusted dataset is available through the Climate Data Store (CDS) of the Copernicus Climate Change Service (C3S) and can be accessed at present from ftp://ecem.climate.copernicus.eu. The benefit of performing bias adjustment is demonstrated by comparing initial and bias-adjusted ERA-Interim data against gridded observational fields.

  16. Linear Regression Quantile Mapping (RQM) - A new approach to bias correction with consistent quantile trends

    NASA Astrophysics Data System (ADS)

    Passow, Christian; Donner, Reik

    2017-04-01

    Quantile mapping (QM) is an established concept that allows systematic biases in multiple quantiles of the distribution of a climatic observable to be corrected. It shows remarkable results in correcting biases in historical simulations against observational data and outperforms simpler correction methods that adjust only the mean or variance. Since it has been shown that bias correction of future predictions or scenario runs with basic QM can result in misleading trends in the projection, adjusted, trend-preserving versions of QM have been introduced in the form of detrended quantile mapping (DQM) and quantile delta mapping (QDM) (Cannon, 2015, 2016). Still, all previous versions and applications of QM-based bias correction rely on the assumption of time-independent quantiles over the investigated period, which can be misleading in the context of a changing climate. Here, we propose a novel combination of linear quantile regression (QR) with the classical QM method to introduce a consistent, time-dependent and trend-preserving approach to bias correction for historical and future projections. Since QR is a regression method, it is possible to estimate quantiles at the same resolution as the given data and to include trends or other dependencies. We demonstrate the performance of the new method of linear regression quantile mapping (RQM) in correcting biases of temperature and precipitation products from historical runs (1959-2005) of the COSMO model in climate mode (CCLM) from the Euro-CORDEX ensemble relative to gridded E-OBS data of the same spatial and temporal resolution. A thorough comparison with established bias correction methods highlights the strengths and potential weaknesses of the new RQM approach. References: A.J. Cannon, S.R. Sobie, T.Q. Murdock: Bias Correction of GCM Precipitation by Quantile Mapping - How Well Do Methods Preserve Changes in Quantiles and Extremes? Journal of Climate, 28, 6038, 2015. A.J. Cannon: Multivariate Bias Correction of Climate Model Outputs - Matching Marginal Distributions and Inter-variable Dependence Structure. Journal of Climate, 29, 7045, 2016.
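
    As background for the quantile-based corrections discussed above, the following sketch shows plain empirical quantile mapping of a simulated series onto observations. It does not reproduce the trend-aware DQM/QDM/RQM refinements; the synthetic gamma-distributed data and the 101-point quantile support are illustrative choices.

```python
import numpy as np

def quantile_map(simulated, observed, values):
    """Empirical quantile mapping: find each value's quantile under the simulated
    (training) distribution and replace it with the observed value at that quantile."""
    quantiles = np.linspace(0.0, 1.0, 101)
    sim_q = np.quantile(simulated, quantiles)
    obs_q = np.quantile(observed, quantiles)
    # Map value -> quantile under the simulation -> observed value at that quantile,
    # with linear interpolation between the 101 support points.
    ranks = np.interp(values, sim_q, quantiles)
    return np.interp(ranks, quantiles, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(shape=2.0, scale=5.0, size=3000)      # "observed" daily precipitation
sim = rng.gamma(shape=2.0, scale=7.0, size=3000)      # biased model counterpart
corrected = quantile_map(sim, obs, sim)
print(obs.mean(), sim.mean(), corrected.mean())       # corrected mean ~ observed mean
```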

  17. Variability in Bias of Gridded Sea Surface Temperature Data Products: Implications for Seasonally Resolved Marine Proxy Reconstructions

    NASA Astrophysics Data System (ADS)

    Ouellette, G., Jr.; DeLong, K. L.

    2016-12-01

    Seasonally resolved reconstructions of sea surface temperature (SST) are commonly produced using isotopic ratios and trace elemental ratios within the skeletal material of marine organisms such as corals, coralline algae, and mollusks. Using these geochemical proxies to produce paleoclimate reconstructions requires regression methods to calibrate the proxy to observed SST, ideally with in situ SST records that span many years. Unfortunately, the few locations with in situ SST records rarely coincide with the time span of the marine proxy archive. Therefore, SST data products are often used for calibration; these are based on MOHSST or ICOADS SST observations as their main SST source but use different algorithms to produce globally gridded data products. These products include the Hadley Centre's HadSST (5°) and interpolated HadISST (1°), NOAA's extended reconstructed SST (ERSST; 2°), optimum interpolation SST (OISST; 1°), and the Kaplan SST (5°). This study assessed the potential bias in these data products at marine archive sites throughout the tropical Atlantic using in situ SST where available, and a high-resolution (4 km) satellite-based SST data product from NOAA Pathfinder that has been shown to closely reflect in situ SST for our locations. Bias was assessed at each site, and then within each data product across the region for spatial homogeneity. Our results reveal seasonal biases in all data products, but not for all locations and not of a uniform magnitude or season among products. We found the largest differences in mean SST, on the order of 1-3°C, for single sites in the Gulf of Mexico, and regional mean SST biases differed by 0.5-1°C when sites in the Gulf of Mexico were compared to sites in the Caribbean Sea within the same data product. No one SST data product outperformed the others, and no systematic bias was found. This analysis illustrates regional strengths and weaknesses of these data products and serves as a cautionary note against the wholesale use of a particular gridded data product for marine proxy calibration, whether for a single site or a larger regional reconstruction, without considering the inherent heterogeneous bias present in each data product, which we show varies among locations. Furthermore, this study has implications for comparing climate models against these SST data products.

  18. A rotationally biased upwind difference scheme for the Euler equations

    NASA Technical Reports Server (NTRS)

    Davis, S. F.

    1983-01-01

    The upwind difference schemes of Godunov, Osher, Roe and van Leer are able to resolve one dimensional steady shocks for the Euler equations within one or two mesh intervals. Unfortunately, this resolution is lost in two dimensions when the shock crosses the computing grid at an oblique angle. To correct this problem, a numerical scheme was developed which automatically locates the angle at which a shock might be expected to cross the computing grid and then constructs separate finite difference formulas for the flux components normal and tangential to this direction. Numerical results which illustrate the ability of this method to resolve steady oblique shocks are presented.

  19. Uncertainty in coal property valuation in West Virginia: A case study

    USGS Publications Warehouse

    Hohn, M.E.; McDowell, R.R.

    2001-01-01

    Interpolated grids of coal bed thickness are being considered for use in a proposed method for taxation of coal in the state of West Virginia (United States). To assess the origin and magnitude of possible inaccuracies in calculated coal tonnage, we used conditional simulation to generate equiprobable realizations of net coal thickness for two coals on a 7 1/2 min topographic quadrangle, and a third coal in a second quadrangle. Coals differed in average thickness and proportion of original coal that had been removed by erosion; all three coals crop out in the study area. Coal tonnage was calculated for each realization and for each interpolated grid for actual and artificial property parcels, and differences were summarized as graphs of percent difference between tonnage calculated from the grid and average tonnage from simulations. Coal in individual parcels was considered minable for valuation purposes if average thickness in each parcel exceeded 30 inches. Results of this study show that over 75% of the parcels are classified correctly as minable or unminable based on interpolation grids of coal bed thickness. Although between 80 and 90% of the tonnages differ by less than 20% between interpolated values and simulated values, a nonlinear conditional bias might exist in estimation of coal tonnage from interpolated thickness, such that tonnage is underestimated where coal is thin, and overestimated where coal is thick. The largest percent differences occur for parcels that are small in area, although because of the small quantities of coal in question, bias is small on an absolute scale for these parcels. For a given parcel size, maximum apparent overestimation of coal tonnage occurs in parcels with an average coal bed thickness near the minable cutoff of 30 in. Conditional bias in tonnage for parcels having a coal thickness exceeding the cutoff by 10 in. or more is constant for two of the three coals studied, and increases slightly with average thickness for the third coal. © 2001 International Association for Mathematical Geology.

  20. Quantile Mapping Bias correction for daily precipitation over Vietnam in a regional climate model

    NASA Astrophysics Data System (ADS)

    Trinh, L. T.; Matsumoto, J.; Ngo-Duc, T.

    2017-12-01

    In the past decades, Regional Climate Models (RCMs) have developed significantly, allowing climate simulations to be conducted at higher resolution. However, RCMs often contain biases when compared with observations, so statistical correction methods are commonly employed to reduce or minimize the model biases. In this study, outputs of the Regional Climate Model (RegCM) version 4.3 driven by the CNRM-CM5 global products were evaluated with and without the Quantile Mapping (QM) bias correction method. The model domain covered the area from 90°E to 145°E and from 15°S to 40°N with a horizontal resolution of 25 km. The QM bias correction was implemented using the Vietnam Gridded precipitation dataset (VnGP) and the outputs of the RegCM historical run for the period 1986-1995, and then validated for the period 1996-2005. Based on statistics of spatial correlation and intensity distributions, the QM method showed a significant improvement in rainfall compared to the uncorrected simulation. Improvements in both time and space were found in all seasons and all climatic sub-regions of Vietnam. Moreover, not only the rainfall amount but also extreme indices such as R10mm, R20mm, R50mm, CDD, CWD, R95pTOT, and R99pTOT were much improved after the correction. The results suggest that the QM correction method should be used in practice for projections of future precipitation over Vietnam.
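
    For reference, the extreme-precipitation indices named above follow standard ETCCDI-style definitions; a minimal sketch for two of them (R10mm, the count of days with at least 10 mm, and CDD, the maximum run of consecutive dry days) is given below, assuming the usual 1 mm wet-day threshold. The sample series is illustrative.

```python
import numpy as np

def r10mm(daily_precip_mm):
    """Number of days with precipitation of at least 10 mm."""
    return int(np.sum(np.asarray(daily_precip_mm) >= 10.0))

def cdd(daily_precip_mm, wet_threshold=1.0):
    """Maximum number of consecutive dry days (precipitation < wet_threshold)."""
    longest = current = 0
    for p in daily_precip_mm:
        current = current + 1 if p < wet_threshold else 0
        longest = max(longest, current)
    return longest

precip = [0.0, 0.2, 12.5, 0.0, 0.0, 0.0, 25.1, 3.0, 0.4, 0.0]
print(r10mm(precip), cdd(precip))   # -> 2 and 3
```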

  1. Calculating distributed glacier mass balance for the Swiss Alps from RCM output: Development and testing of downscaling and validation methods

    NASA Astrophysics Data System (ADS)

    Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.

    2009-04-01

    Climate model output has been applied in several studies on glacier mass balance calculation. In these studies, computation of mass balance has mostly been performed at the native resolution of the climate model output, or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution, while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. On the one hand, summer melt measured at stakes on several glaciers is well reproduced by the model; on the other hand, observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated with regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of future studies.

  2. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Andrew W; Leung, Lai R; Sridhar, V

    Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution) and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and to gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
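
    The spatial-disaggregation step of a BCSD-style workflow can be sketched as follows: the bias-corrected coarse field is expressed as a multiplicative anomaly relative to the coarse climatology and then applied to a fine-resolution observed climatology. This is a minimal illustration, using nearest-neighbour replication in place of smooth interpolation; array shapes and values are made up.

```python
import numpy as np

def spatial_disaggregate(coarse_bc, coarse_clim, fine_clim, factor):
    """One step of BCSD-style spatial disaggregation for precipitation:
    form the multiplicative anomaly of the bias-corrected coarse field relative
    to the coarse climatology, replicate it to the fine grid, and apply it to
    the fine-resolution observed climatology."""
    anomaly = coarse_bc / np.maximum(coarse_clim, 1e-6)        # coarse multiplicative anomaly
    anomaly_fine = np.kron(anomaly, np.ones((factor, factor))) # nearest-neighbour replication
    return fine_clim * anomaly_fine

coarse_bc = np.array([[60.0, 90.0], [30.0, 120.0]])    # bias-corrected monthly totals, 2x2 cells
coarse_clim = np.array([[50.0, 100.0], [40.0, 100.0]]) # coarse observed climatology
fine_clim = np.random.default_rng(1).uniform(20, 160, size=(4, 4))  # fine-grid observed climatology
print(spatial_disaggregate(coarse_bc, coarse_clim, fine_clim, factor=2).round(1))
```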

  3. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  4. Comparison Of Downscaled CMIP5 Precipitation Datasets For Projecting Changes In Extreme Precipitation In The San Francisco Bay Area.

    NASA Technical Reports Server (NTRS)

    Milesi, Cristina; Costa-Cabral, Mariza; Rath, John; Mills, William; Roy, Sujoy; Thrasher, Bridget; Wang, Weile; Chiang, Felicia; Loewenstein, Max; Podolske, James

    2014-01-01

    Water resource managers planning for adaptation to future extreme precipitation events now have access to high-resolution downscaled daily projections derived from statistical bias correction and constructed analogs. We also show that along the Pacific Coast the Northern Oscillation Index (NOI) is a reliable predictor of storm likelihood, and therefore a predictor of seasonal precipitation totals and of the likelihood of extremely intense precipitation. Such time series can be used to project intensity-duration curves into the future or as input to stormwater models. However, few climate projection studies have explored the impact of the type of downscaling method used on the range and uncertainty of predictions for local flood protection studies. Here we present a study of future climate flood risk at NASA Ames Research Center, located in the southern San Francisco Bay Area, comparing the range of predictions of extreme precipitation events calculated from three sets of time series downscaled from CMIP5 data: 1) the Bias Correction Constructed Analogs method downscaled to a 1/8-degree (about 12 km) grid; 2) the Bias Correction Spatial Disaggregation method downscaled to a 1 km grid; 3) a statistical model of extreme daily precipitation events and projected NOI from CMIP5 models. In addition, predicted years of extreme precipitation are used to estimate the risk of overtopping of the retention pond located on the site through simulations with the EPA SWMM hydrologic model. Preliminary results indicate that the intensity of extreme precipitation events is expected to increase and to flood the NASA Ames retention pond. These estimates will assist flood protection managers in planning infrastructure adaptations.

  5. A High Resolution Land Cover Data Product to Remove Urban Density Over-Estimation Bias for Coupled Urban-Vegetation-Atmosphere Interaction Studies

    NASA Astrophysics Data System (ADS)

    Shaffer, S. R.

    2017-12-01

    Coupled land-atmosphere interactions in urban settings modeled with the Weather Research and Forecasting model (WRF) derive urban land cover from 30-meter resolution National Land Cover Database (NLCD) products. However, within urban areas the categorical NLCD loses information on non-urban classifications whenever the impervious cover within a grid cell is above 0%, and the current method of determining urban area overestimates the actual area, leading to a bias in the urban contribution. To address this bias, an investigation is conducted employing a 1-meter resolution land cover data product derived from the National Agricultural Imagery Program (NAIP) dataset. Scenes during 2010 for the Central Arizona Phoenix Long Term Ecological Research (CAP-LTER) study area, roughly a 120 km x 100 km area containing metropolitan Phoenix, are adapted for use within WRF to determine the areal fraction and urban fraction of each WRF urban class. A method is shown for converting these NAIP data into classes corresponding to the NLCD urban classes, and it is evaluated against the current WRF implementation using NLCD. Results are shown for comparisons of the land cover products at the level of the input data and aggregated to the model resolution (1 km). The sensitivity of WRF short-term summertime pre-monsoon predictions within metropolitan Phoenix to the different input land cover data products, and to the method of aggregating these data to the model grid scale (1 km), is examined for the default and derived parameter values with the Noah mosaic land surface scheme adapted to use these data. Issues with adapting these non-urban NAIP classes for use in the mosaic approach are also discussed.

  6. Electrostatic dust detector

    DOEpatents

    Skinner, Charles H [Lawrenceville, NJ

    2006-05-02

    An apparatus for detecting dust in a variety of environments which can include radioactive and other hostile environments both in a vacuum and in a pressurized system. The apparatus consists of a grid coupled to a selected bias voltage. The signal generated when dust impacts and shorts out the grid is electrically filtered, and then analyzed by a signal analyzer which is then sent to a counter. For fine grids a correlation can be developed to relate the number of counts observed to the amount of dust which impacts the grid.

  7. Testing the Hydrological Coherence of High-Resolution Gridded Precipitation and Temperature Data Sets

    NASA Astrophysics Data System (ADS)

    Laiti, L.; Mallucci, S.; Piccolroaz, S.; Bellin, A.; Zardi, D.; Fiori, A.; Nikulin, G.; Majone, B.

    2018-03-01

    Assessing the accuracy of gridded climate data sets is highly relevant to climate change impact studies, since evaluation, bias correction, and statistical downscaling of climate models commonly use these products as reference. Among all impact studies, those addressing hydrological fluxes are the most affected by the errors and biases in these data. This paper introduces a framework, coined the Hydrological Coherence Test (HyCoT), for assessing the coherence of gridded data sets with hydrological observations. HyCoT provides a framework for excluding meteorological forcing data sets that do not comply with observations, as a function of the particular goal at hand. The proposed methodology allows falsifying the hypothesis that a given data set is coherent with hydrological observations, on the basis of the performance of hydrological modeling as measured by a metric selected by the modeler. HyCoT is demonstrated in the Adige catchment (southeastern Alps, Italy) for streamflow analysis, using a distributed hydrological model. The comparison covers the period 1989-2008 and includes five gridded daily meteorological data sets: E-OBS, MSWEP, MESAN, APGD, and ADIGE. The analysis highlights that APGD and ADIGE, the data sets with the highest effective resolution, display similar spatiotemporal precipitation patterns and produce the largest hydrological efficiency indices. Lower performance is observed for E-OBS, MESAN, and MSWEP, especially in small catchments. HyCoT reveals deficiencies in the representation of spatiotemporal patterns in gridded climate data sets, which cannot be corrected by simply rescaling the meteorological forcing fields, as is often done in bias correction of climate model outputs. We recommend this framework for assessing the hydrological coherence of gridded data sets to be used in large-scale hydroclimatic studies.
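
    HyCoT leaves the performance metric to the modeler; the sketch below uses the Nash-Sutcliffe efficiency of simulated versus observed streamflow as that metric and rejects forcing data sets falling below an acceptance threshold. The threshold value and the toy flow series are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - (sum of squared errors / variance of the observations)."""
    observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def coherence_test(observed_flow, simulated_flows, threshold=0.5):
    """Reject (falsify) any forcing data set whose simulated streamflow scores
    below the chosen efficiency threshold."""
    return {name: nash_sutcliffe(observed_flow, q) >= threshold
            for name, q in simulated_flows.items()}

obs = [12.0, 15.0, 30.0, 22.0, 18.0, 14.0]
sims = {"dataset_A": [11.0, 16.0, 28.0, 23.0, 17.0, 15.0],   # coherent forcing
        "dataset_B": [20.0, 20.0, 20.0, 20.0, 20.0, 20.0]}   # fails to capture the dynamics
print(coherence_test(obs, sims))   # -> {'dataset_A': True, 'dataset_B': False}
```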

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, A.M.; Ruzic, D.N.; Powell, R.C.

    An analyzer capable of determining the mass as well as the energy (5-200 eV) of neutral and ion species has been developed from a quadrupole mass spectrometer (QMS). The system, which is similar to a retarding grid energy analyzer (RGEA), functions by biasing the rods of a QMS and monitoring the analyzer signal as a function of bias potential. Modulation of the pole bias greatly increases the minimum detectable signal level. Experiments were performed using species generated in a single-grid Kaufman ion gun operated with N2 or Ar. Results show that the pole bias techniques can provide energy resolution of 1-2 eV. Ion species from the gun were found to have an energy equal to the sum of the beam and the plasma potentials, with an energy spread between 1 and 3 eV. Fast N2 and Ar neutral species were measured as a function of discharge voltage (30-80 V), beam acceleration voltage (50-100 V), grid voltage (-20 to +5 V), and pressure (0.5 and 1.5 mTorr). The energy of the fast neutral species was always less than that of the ions. This was consistent with the fast neutrals being formed by a charge-exchange process.

  9. The effects of climate downscaling technique and observational data set on modeled ecological responses.

    PubMed

    Pourmokhtarian, Afshin; Driscoll, Charles T; Campbell, John L; Hayhoe, Katharine; Stoner, Anne M K

    2016-07-01

    Assessments of future climate change impacts on ecosystems typically rely on multiple climate model projections, but often utilize only one downscaling approach trained on one set of observations. Here, we explore the extent to which modeled biogeochemical responses to changing climate are affected by the selection of the climate downscaling method and training observations used at the montane landscape of the Hubbard Brook Experimental Forest, New Hampshire, USA. We evaluated three downscaling methods: the delta method (or the change factor method), monthly quantile mapping (Bias Correction-Spatial Disaggregation, or BCSD), and daily quantile regression (Asynchronous Regional Regression Model, or ARRM). Additionally, we trained outputs from four atmosphere-ocean general circulation models (AOGCMs) (CCSM3, HadCM3, PCM, and GFDL-CM2.1) driven by higher (A1fi) and lower (B1) future emissions scenarios on two sets of observations (1/8° resolution grid vs. individual weather station) to generate the high-resolution climate input for the forest biogeochemical model PnET-BGC (eight ensembles of six runs). The choice of downscaling approach and spatial resolution of the observations used to train the downscaling model impacted modeled soil moisture and streamflow, which in turn affected forest growth, net N mineralization, net soil nitrification, and stream chemistry. All three downscaling methods were highly sensitive to the observations used, resulting in projections that were significantly different between station-based and grid-based observations. The choice of downscaling method also slightly affected the results, though not as much as the choice of observations. Using spatially smoothed gridded observations and/or methods that do not resolve sub-monthly shifts in the distribution of temperature and/or precipitation can produce biased results in model applications run at greater temporal and/or spatial resolutions. These results underscore the importance of carefully considering the field observations used for training, as well as the downscaling method used to generate climate change projections, for smaller-scale modeling studies. Different sources of variability, including the selection of AOGCM, emissions scenario, downscaling technique, and data used for training downscaling models, result in a wide range of projected forest ecosystem responses to future climate change. © 2016 by the Ecological Society of America.

  10. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    PubMed Central

    Pereira, N F; Sitek, A

    2011-01-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated. PMID:20736496

  11. Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method

    NASA Astrophysics Data System (ADS)

    Pereira, N. F.; Sitek, A.

    2010-09-01

    Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.

  12. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth(TM) imagery in a population-based mortality survey in Iraq.

    PubMed

    Galway, LP; Bell, Nathaniel; Sae, Al Shatari; Hagopian, Amy; Burnham, Gilbert; Flaxman, Abraham; Weiss, William M; Rajaratnam, Julie; Takaro, Tim K

    2012-04-27

    Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, so mortality must typically be estimated using retrospective population-based surveys. We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings.

  13. A two-stage cluster sampling method using gridded population data, a GIS, and Google EarthTM imagery in a population-based mortality survey in Iraq

    PubMed Central

    2012-01-01

    Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, so mortality must typically be estimated using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings. PMID:22540266
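
    The first sampling stage described above selects grid cells with probability proportional to their population. A minimal sketch of such a probability-proportional-to-size draw from a gridded population raster follows; the cell counts and number of clusters are illustrative, and sequential weighted draws are only an approximation of strict PPS sampling without replacement.

```python
import numpy as np

def sample_clusters(population_grid, n_clusters, seed=0):
    """First-stage selection: draw grid cells with probability proportional to
    population (a simple approximation of PPS sampling without replacement)."""
    rng = np.random.default_rng(seed)
    pop = np.asarray(population_grid, dtype=float)
    flat = pop.ravel()
    chosen = rng.choice(flat.size, size=n_clusters, replace=False, p=flat / flat.sum())
    return [np.unravel_index(i, pop.shape) for i in chosen]   # (row, col) of the selected cells

# Illustrative 4 x 4 gridded population counts (persons per cell).
grid = [[120,  40,   0,  10],
        [300, 800, 650,  90],
        [ 50, 400, 900, 200],
        [  0,  30,  60,  20]]
print(sample_clusters(grid, n_clusters=3))
```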

  14. Inferring animal densities from tracking data using Markov chains.

    PubMed

    Whitehead, Hal; Jonsen, Ian D

    2013-01-01

    The distributions and relative densities of species are key to ecology. Large amounts of tracking data are being collected on a wide variety of animal species using several methods, especially electronic tags that record location. These tracking data are effectively used for many purposes, but generally provide biased measures of distribution, because the starts of the tracks are not randomly distributed among the locations used by the animals. We introduce a simple Markov-chain method that produces unbiased measures of relative density from tracking data. The density estimates can be over a geographical grid, and/or relative to environmental measures. The method assumes that the tracked animals are a random subset of the population in respect to how they move through the habitat cells, and that the movements of the animals among the habitat cells form a time-homogeneous Markov chain. We illustrate the method using simulated data as well as real data on the movements of sperm whales. The simulations illustrate the bias introduced when the initial tracking locations are not randomly distributed, as well as the lack of bias when the Markov method is used. We believe that this method will be important in giving unbiased estimates of density from the growing corpus of animal tracking data.
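
    A minimal sketch of the core idea, assuming the tracks have already been discretised into sequences of habitat-cell indices: count cell-to-cell transitions, normalise rows to obtain a Markov transition matrix, and take its stationary distribution as the relative-density estimate. The cell labels and tracks below are illustrative.

```python
import numpy as np

def relative_density(tracks, n_cells):
    """Estimate relative density per habitat cell as the stationary distribution
    of the Markov chain of observed cell-to-cell transitions."""
    counts = np.zeros((n_cells, n_cells))
    for track in tracks:                       # each track is a sequence of cell indices
        for a, b in zip(track[:-1], track[1:]):
            counts[a, b] += 1
    transition = counts / counts.sum(axis=1, keepdims=True)
    # Stationary distribution: left eigenvector of the transition matrix for eigenvalue 1.
    eigvals, eigvecs = np.linalg.eig(transition.T)
    stationary = np.abs(np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))]))
    return stationary / stationary.sum()

tracks = [[0, 1, 1, 2, 1, 0, 1], [2, 1, 1, 2, 2, 1], [1, 0, 1, 2, 1, 1]]
print(relative_density(tracks, n_cells=3))    # relative time spent in cells 0, 1, 2
```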

  15. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    NASA Astrophysics Data System (ADS)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique, in which the observed probabilities differ depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistics that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but its accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation, in conjunction with a quantitative evaluation that uses simulated fractures and verification against natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°⁻², then maximum accuracy occurs at a grid size of 1° × 1°.
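
    For context, the Terzaghi correction weights each fracture observed along a scanline by the reciprocal of the cosine of the angle between the scanline and the fracture normal, with angles near the blind zone usually capped to avoid unbounded weights. A minimal sketch follows; the 80° cap and the pole angles are illustrative assumptions.

```python
import numpy as np

def terzaghi_weights(pole_angles_deg, max_angle_deg=80.0):
    """Weight each observed fracture by 1/cos(delta), where delta is the angle
    between the scanline and the fracture pole (normal). Angles beyond the cap
    (approaching the blind zone at 90 degrees) are truncated to avoid huge weights."""
    delta = np.radians(np.minimum(np.asarray(pole_angles_deg, dtype=float), max_angle_deg))
    return 1.0 / np.cos(delta)

# Illustrative poles measured along one scanline (angle between scanline and pole, degrees).
poles = [5.0, 30.0, 60.0, 75.0, 85.0]
w = terzaghi_weights(poles)
print(np.round(w, 2))              # fractures nearly perpendicular to the scanline get large weights
print(np.round(w / w.sum(), 3))    # normalised contributions to the 3D orientation distribution
```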

  16. Initial Thrust Measurements of Marshall's Ion-ioN Thruster

    NASA Technical Reports Server (NTRS)

    Caruso, Natalie R. S.; Scogin, Tyler; Liu, Thomas M.; Walker, Mitchell L. R.; Polzin, Kurt A.; Dankanich, John W.

    2015-01-01

    Electronegative ion thrusters are a variation of traditional gridded ion thruster technology, differentiated by the production and acceleration of both positive and negative ions. Benefits of electronegative ion thrusters include the elimination of lifetime-limiting cathodes from the thruster architecture and the ability to generate appreciable thrust from both charge species. While much progress has been made in the development of electronegative ion thruster technology, direct thrust measurements are required to unambiguously demonstrate the efficacy of the concept and support continued development. In the present work, direct measurements of the thrust produced by the MINT (Marshall's Ion-ioN Thruster) are performed using an inverted-pendulum thrust stand in the High-Power Electric Propulsion Laboratory's Vacuum Test Facility-1 at the Georgia Institute of Technology, with operating pressures ranging from 4.8 x 10^-5 to 5.7 x 10^-5 torr. Thrust is recorded while operating with a propellant volumetric mixture ratio of 5:1 argon to nitrogen at total volumetric flow rates of 6, 12, and 24 sccm (0.17, 0.34, and 0.68 mg/s). Plasma is generated using a helical antenna at 13.56 MHz and radio frequency (RF) power levels of 150 and 350 W. The acceleration grid assembly is operated using both sinusoidal and square waveform biases of +/-350 V at frequencies of 4, 10, 25, 125, and 225 kHz. Thrust is recorded for two separate thruster configurations: with and without the magnetic filter. No thrust is discernible during thruster operation without the magnetic filter for any volumetric flow rate, RF forward power level, or acceleration grid biasing scheme. For the full thruster configuration, with the magnetic filter installed, a brief burst of thrust of approximately 3.75 mN +/- 3 mN of error is observed at the start of grid operation for a volumetric flow rate of 24 sccm at 350 W RF power using a sinusoidal waveform grid bias at 125 kHz and +/- 350 V. Similar bursts of thrust are observed using a square waveform grid bias at 10 kHz and +/- 350 V for volumetric flow rates of 6, 10, and 12 sccm at 150, 350, and 350 W, respectively. The only operating condition that exhibits repeated thrust spikes throughout thruster operation is the 24 sccm condition with a 5:1 mixture ratio at 150 W RF power using the 10 kHz square waveform acceleration grid bias. Thrust spikes for this condition measure 3 mN with an error of +/- 2.5 mN. No operating condition tested shows continuous thrust production.

  17. Stabilized Finite Elements in FUN3D

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Newman, James C.; Karman, Steve L.

    2017-01-01

    A Streamline Upwind Petrov-Galerkin (SUPG) stabilized finite-element discretization has been implemented as a library in the FUN3D unstructured-grid flow solver. Motivation for the selection of this methodology is given, details of the implementation are provided, and the discretization for the interior scheme is verified for linear and quadratic elements by using the method of manufactured solutions. A methodology is also described for capturing shocks, and simulation results are compared to the finite-volume formulation that is currently the primary method employed for routine engineering applications. The finite-element methodology is demonstrated to be more accurate than the finite-volume technology, particularly on tetrahedral meshes where the solutions obtained using the finite-volume scheme can suffer from adverse effects caused by bias in the grid. Although no effort has been made to date to optimize computational efficiency, the finite-element scheme is competitive with the finite-volume scheme in terms of computer time to reach convergence.

  18. Multivariate bias adjustment of high-dimensional climate simulations: the Rank Resampling for Distributions and Dependences (R2D2) bias correction

    NASA Astrophysics Data System (ADS)

    Vrac, Mathieu

    2018-06-01

    Climate simulations often suffer from statistical biases with respect to observations or reanalyses. It is therefore common to correct (or adjust) those simulations before using them as inputs to impact models. However, most bias correction (BC) methods are univariate and so do not account for the statistical dependences linking the different locations and/or physical variables of interest. In addition, they are often deterministic, whereas stochasticity is frequently needed to investigate climate uncertainty and to add constrained randomness to climate simulations that do not possess a realistic variability. This study presents a multivariate method of rank resampling for distributions and dependences (R2D2) bias correction, allowing one to adjust not only the univariate distributions but also their inter-variable and inter-site dependence structures. Moreover, the proposed R2D2 method provides some stochasticity, since it can generate as many multivariate corrected outputs as the number of statistical dimensions (i.e., number of grid cells × number of climate variables) of the simulations to be corrected. It is based on an assumption of stability in time of the dependence structure - making it possible to deal with a high number of statistical dimensions - that lets the climate model drive the temporal properties and their changes in time. R2D2 is applied to temperature and precipitation reanalysis time series with respect to high-resolution reference data over the southeast of France (1506 grid cells). Bivariate, 1506-dimensional and 3012-dimensional versions of R2D2 are tested over a historical period and compared to a univariate BC. How the different BC methods behave in a climate change context is also illustrated with an application to regional climate simulations over the 2071-2100 period. The results indicate that the 1d-BC basically reproduces the climate model multivariate properties, 2d-R2D2 is only satisfying in the inter-variable context, 1506d-R2D2 strongly improves inter-site properties and 3012d-R2D2 is able to account for both. Applications of the proposed R2D2 method to various climate datasets are relevant for many impact studies. The perspectives for improvement are numerous, such as introducing stochasticity in the dependence itself, questioning its stability assumption, and accounting for the adjustment of temporal properties while including more physics in the adjustment procedures.
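
    The dependence-restoring idea can be illustrated with a simple rank reordering (in the spirit of a Schaake shuffle): univariate bias-corrected values at each site are rearranged so that their rank ordering in time matches that of a chosen reference field. This is a strongly simplified sketch of the general principle, not the full R2D2 conditioning scheme; the arrays are illustrative.

```python
import numpy as np

def rank_reorder(corrected, reference):
    """Rearrange each column of the univariate-corrected array so its temporal
    rank structure matches that of the corresponding reference column."""
    corrected, reference = np.asarray(corrected, float), np.asarray(reference, float)
    out = np.empty_like(corrected)
    for j in range(corrected.shape[1]):                 # one column per site/variable
        order = np.argsort(np.argsort(reference[:, j])) # rank of each reference time step
        out[:, j] = np.sort(corrected[:, j])[order]     # corrected values placed at those ranks
    return out

rng = np.random.default_rng(2)
reference = rng.normal(size=(10, 3))            # field carrying the desired rank/dependence structure
corrected = rng.gamma(2.0, 2.0, size=(10, 3))   # univariate bias-corrected values (marginals to keep)
shuffled = rank_reorder(corrected, reference)
print(np.allclose(np.sort(shuffled, axis=0), np.sort(corrected, axis=0)))  # marginals preserved -> True
```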

  19. Parameterizing Grid-Averaged Longwave Fluxes for Inhomogeneous Marine Boundary Layer Clouds

    NASA Technical Reports Server (NTRS)

    Barker, Howard W.; Wielicki, Bruce A.

    1997-01-01

    This paper examines the relative impacts on grid-averaged longwave flux transmittance (emittance) for Marine Boundary Layer (MBL) cloud fields arising from horizontal variability of optical depth tau and from cloud sides. First, using fields of Landsat-inferred tau and a Monte Carlo photon transport algorithm, it is demonstrated that mean all-sky transmittances for 3D variable MBL clouds can be computed accurately by the conventional method of linearly weighting clear and cloudy transmittances by their respective sky fractions. Then, the approximations of decoupling cloud and radiative properties and assuming independent columns are shown to be adequate for computation of mean flux transmittance. Since real clouds have nonzero geometric thicknesses, the cloud fractions A'_c presented to isotropic beams usually exceed the more familiar vertically projected cloud fractions A_c. It is shown, however, that when A_c is less than or equal to 0.9, biases for all-sky transmittance stemming from use of A_c as opposed to A'_c are roughly 2-5 times smaller than, and opposite in sign to, biases due to neglect of horizontal variability of tau. By neglecting variable tau, all-sky transmittances are underestimated, often by more than 0.1 for A_c near 0.75, and this translates into relative errors that can exceed 40% (corresponding errors for all-sky emittance are about 20% for most values of A_c). Thus, priority should be given to the development of General Circulation Model (GCM) parameterizations that account for the effects of horizontal variations in unresolved tau; the effects of cloud sides are of secondary importance. On this note, an efficient stochastic model for computing grid-averaged cloudy-sky flux transmittances is furnished that assumes that distributions of tau, for regions comparable in size to GCM grid cells, can be described adequately by gamma distribution functions. While the plane-parallel, homogeneous model underestimates cloud transmittance by about an order of magnitude when 3D variable cloud transmittances are less than or equal to 0.2, and by approximately 20% to 100% otherwise, the stochastic model reduces these biases often by more than 80%.
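
    The key ingredient of the stochastic model is averaging the transmittance over a gamma distribution of optical depth rather than evaluating it at the mean optical depth. The sketch below contrasts the two for a simple diffuse transmittance exp(-D*tau) with diffusivity factor D = 1.66; this emissivity-style approximation and the chosen mean optical depth and shape parameter are assumptions used only to illustrate the averaging.

```python
import numpy as np
from scipy import integrate
from scipy.stats import gamma

def mean_transmittance_gamma(tau_mean, nu, diffusivity=1.66):
    """Grid-averaged longwave transmittance: integrate exp(-D*tau) over a gamma
    distribution of optical depth with mean tau_mean and shape parameter nu."""
    dist = gamma(a=nu, scale=tau_mean / nu)
    integrand = lambda tau: np.exp(-diffusivity * tau) * dist.pdf(tau)
    value, _ = integrate.quad(integrand, 0.0, np.inf)
    return value

tau_mean, nu = 4.0, 1.0                        # mean optical depth and a broad gamma (nu = 1)
homogeneous = np.exp(-1.66 * tau_mean)         # plane-parallel, homogeneous estimate
variable = mean_transmittance_gamma(tau_mean, nu)
print(f"homogeneous: {homogeneous:.4f}, gamma-weighted: {variable:.4f}")
# The homogeneous value is far smaller, illustrating the underestimate noted in the abstract.
```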

  20. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and is tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggests that the proposed method effectively corrects the daily bias in rainfall compared to using monthly factors. Methods such as local intensity scaling, modified power transformation and distribution mapping, which adjust the wet-day frequencies, performed better than the methods that do not adjust wet-day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those simulated using the IMD rainfall. The results demonstrate that the methods and time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and consequently on the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors can be preferred for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
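
    A minimal sketch of the sliding-window idea for a multiplicative (linear-scaling) correction: for each calendar day, the correction factor is the ratio of observed to modelled mean rainfall over a centred window of days pooled across all training years. The 31-day window and the scaling form are illustrative choices, not the paper's exact methods.

```python
import numpy as np

def daily_scaling_factors(obs, sim, half_window=15):
    """Multiplicative correction factor per day of year, computed from all days
    falling within +/- half_window days (wrapping around the year) over the
    training period. obs and sim are arrays of shape (n_years, 365)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    factors = np.empty(365)
    for doy in range(365):
        window = np.arange(doy - half_window, doy + half_window + 1) % 365
        factors[doy] = obs[:, window].mean() / max(sim[:, window].mean(), 1e-6)
    return factors

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 4.0, size=(10, 365))          # 10 years of observed daily rainfall
sim = 1.4 * rng.gamma(2.0, 4.0, size=(10, 365))    # model rainfall with a wet bias
factors = daily_scaling_factors(obs, sim)
corrected = sim * factors                           # apply day-of-year factors to the model series
print(round(factors.mean(), 2))                     # close to 1/1.4 ~ 0.71
```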

  1. Simulation fidelity of a virtual environment display

    NASA Technical Reports Server (NTRS)

    Nemire, Kenneth; Jacoby, Richard H.; Ellis, Stephen R.

    1994-01-01

    We assessed the degree to which a virtual environment system produced a faithful simulation of three-dimensional space by investigating the influence of a pitched optic array on the perception of gravity-referenced eye level (GREL). We compared the results with those obtained in a physical environment. In a within-subjects factorial design, 12 subjects indicated GREL while viewing virtual three-dimensional arrays at different static orientations. A physical pitched array biased GREL more than did a geometrically identical virtual pitched array. However, the addition of two sets of orthogonal parallel lines (a grid) to the virtual pitched array resulted in as large a bias as that obtained with the physical pitched array. The increased bias was caused by the longitudinal, but not the transverse, components of the grid. We discuss the implications of our results for spatial orientation models and for the design of virtual displays.

  2. On the impact of the resolution on the surface and subsurface Eastern Tropical Atlantic warm bias

    NASA Astrophysics Data System (ADS)

    Martín-Rey, Marta; Lazar, Alban

    2016-04-01

    Tropical variability is of great importance for the climate of adjacent areas. Its sea surface temperature anomalies (SSTA) affect, in particular, the Brazilian Nordeste and the Sahelian region, as well as the tropical Pacific and the Euro-Atlantic sector. Nevertheless, state-of-the-art climate models exhibit very large systematic errors in reproducing the seasonal cycle and interannual variability in the equatorial and coastal African upwelling zones (up to several °C for SST). These biases already exist, though in smaller proportions (several tenths of a °C), in forced ocean models, and affect not only the mixed layer but also the whole thermocline. Here, we present an analysis of the impact of horizontal and vertical resolution changes on these biases. Three different DRAKKAR NEMO OGCM simulations have been analysed, associated with the same forcing set (DFS4.4) but different grid resolutions: "REF" for reference (1/4°, 46 vertical levels), "HH" with a finer horizontal grid (1/12°, 46 v.l.) and "HV" with a finer vertical grid (1/4°, 75 v.l.). At the surface, a more realistic seasonal SST cycle is produced in HH in the three upwellings, where the warm bias decreases (by 10%-20%) during boreal spring and summer. A notable result is that increasing the vertical resolution in HV advances the seasonal SST cycles of the upwellings. To better understand these results, we estimate the subsurface temperature errors of the three upwellings using various in situ datasets, and thus provide a three-dimensional view of the biases.

  3. The R package 'icosa' for coarse resolution global triangular and penta-hexagonal gridding

    NASA Astrophysics Data System (ADS)

    Kocsis, Adam T.

    2017-04-01

    With the development of the internet and the computational power of personal computers, open source programming environments have become indispensable for science in the past decade. This includes the growing GIS capability of the free R environment, which was originally developed for statistical analyses. The flexibility of R has made it a preferred programming tool in a multitude of disciplines within the biological and geological sciences. Many of these subdisciplines operate with incidence (occurrence) data that, in a large number of cases, must be coarsened before further analyses can be conducted. This coarsening is mostly carried out by gridding data to the cells of a Gaussian grid of various resolutions, to increase the density of data in a single unit of analysis. Despite the ease of its application, this method has obvious shortcomings: well-known systematic biases in cell size and shape are induced, which can interfere with the results of statistical procedures, especially if the number of incidence points influences the metrics in question. The 'icosa' package employs a common method to overcome this obstacle by implementing grids with roughly equal cell sizes and shapes that are based on tessellated icosahedra. These grid objects are essentially polyhedra with xyz Cartesian vertex data that are linked to tables of faces and edges. At its current developmental stage, the package uses a single method of tessellation, which balances grid cell size and shape distortions, but its structure allows the implementation of various other tessellation algorithms. The resolution of the grids can be set by the number of breakpoints inserted into a segment forming an edge of the original icosahedron. Both the triangular grids and their inverted penta-hexagonal counterparts can be created with the package. The package also incorporates functions to look up coordinates in the grid very efficiently, and data containers to link data to the grid structure. The classes defined in the package communicate with classes of the 'sp' and 'raster' packages, and functions are supplied that allow resolution changes and type conversions. Three-dimensional rendering is made available with the 'rgl' package, and two-dimensional projections can be calculated using 'sp' and 'rgdal'. The package was developed as part of a project funded by the Deutsche Forschungsgemeinschaft (KO - 5382/1-1).

  4. Observations of solar-cell metallization corrosion

    NASA Technical Reports Server (NTRS)

    Mon, G. R.

    1983-01-01

    The Engineering Sciences Area of the Jet Propulsion Laboratory (JPL) Flat-Plate Solar Array Project is performing long-term environmental tests on photovoltaic modules at Wyle Laboratories in Huntsville, Alabama. Some modules have been exposed to 85°C/85% RH and 40°C/93% RH for up to 280 days. Other modules undergoing temperature-only exposures (3% RH) at 85°C and 100°C have been tested for more than 180 days. At least two modules of each design type are exposed to each environment - one with, and the other without, a 100-mA forward bias. Degradation is both visually observed and electrically monitored. Visual observations of changes in appearance are recorded at each inspection time. Significant visual observations relating to metallization corrosion (and/or metallization-induced corrosion) include discoloration (yellowing and browning) of grid lines, migration of grid line material into the encapsulation (blossoming), the appearance of rainbow-like diffraction patterns on the grid lines, and brown spots on collectors and grid lines. All of these observations were recorded for electrically biased modules in the 280-day humidity tests.

  5. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated by using data from two different special field campaigns. Then, we implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S. as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
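
    The abstract does not reproduce the formulation itself, so the following is only a schematic Python illustration of the general idea behind a convective adjustment timescale (cloud depth divided by a convective velocity scale, optionally rescaled with grid spacing); the constants, clipping limits, and resolution scaling are assumptions, not the authors' formula.

```python
# Toy illustration only: one common way to build a convective adjustment timescale
# is cloud depth divided by an average in-cloud updraft velocity, optionally scaled
# with grid spacing. Constants and functional form below are assumptions.
def convective_timescale(cloud_top_m, cloud_base_m, mean_updraft_ms,
                         dx_km=36.0, dx_ref_km=36.0):
    depth = max(cloud_top_m - cloud_base_m, 0.0)   # cloud depth (m)
    tau = depth / max(mean_updraft_ms, 0.1)        # convective turnover time (s)
    tau *= dx_km / dx_ref_km                       # assumed resolution scaling
    return min(max(tau, 600.0), 10800.0)           # clip to 10 min .. 3 h

print(convective_timescale(12000.0, 1500.0, 5.0, dx_km=4.0))
```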

  6. Daily air temperature interpolated at high spatial resolution over a large mountainous region

    USGS Publications Warehouse

    Dodson, R.; Marks, D.

    1997-01-01

    Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830 000 km²) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3 °C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
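
    The LLRA procedure is simple enough to sketch directly. Below is a hedged Python illustration with synthetic stations and an assumed constant lapse rate of 6.5 K per km; it is not the authors' implementation.

```python
# Minimal sketch of linear lapse rate adjustment (LLRA) interpolation: reduce
# station temperatures to sea level with a constant lapse rate, interpolate with
# inverse-squared-distance weights, then map back to DEM elevations.
import numpy as np

LAPSE = 0.0065  # K per m, a commonly used constant lapse rate (assumption)

def idw2(x, y, values, xi, yi, eps=1e-12):
    """Inverse-squared-distance interpolation of scattered values to one point."""
    d2 = (x - xi) ** 2 + (y - yi) ** 2 + eps
    w = 1.0 / d2
    return np.sum(w * values) / np.sum(w)

def llra_interpolate(stn_x, stn_y, stn_z, stn_t, grid_x, grid_y, grid_z):
    """Interpolate station temperatures to DEM cells with a lapse-rate adjustment."""
    t_sea = stn_t + LAPSE * stn_z                   # reduce stations to sea level
    out = np.empty_like(grid_z, dtype=float)
    for i, (xi, yi, zi) in enumerate(zip(grid_x, grid_y, grid_z)):
        out[i] = idw2(stn_x, stn_y, t_sea, xi, yi) - LAPSE * zi  # back to DEM height
    return out

# toy example: three stations, two DEM cells
sx, sy = np.array([0.0, 10.0, 5.0]), np.array([0.0, 0.0, 8.0])
sz, st = np.array([200.0, 1500.0, 900.0]), np.array([12.0, 3.5, 7.0])
gx, gy, gz = np.array([4.0, 7.0]), np.array([3.0, 5.0]), np.array([600.0, 1200.0])
print(llra_interpolate(sx, sy, sz, st, gx, gy, gz))
```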

  7. Use of error grid analysis to evaluate acceptability of a point of care prothrombin time meter.

    PubMed

    Petersen, John R; Vonmarensdorf, Hans M; Weiss, Heidi L; Elghetany, M Tarek

    2010-02-01

    Statistical methods (linear regression, correlation analysis, etc.) are frequently employed in comparing methods in the central laboratory (CL). Assessing the acceptability of point of care testing (POCT) equipment, however, is more difficult because statistically significant biases may not have an impact on clinical care. We showed how error grid (EG) analysis can be used to evaluate POCT PT INR against the CL. We compared results from 103 patients seen in an anti-coagulation clinic who were on Coumadin maintenance therapy, using fingerstick samples for POCT (Roche CoaguChek XS and S) and citrated venous blood samples for the CL (Stago STAR). To compare the clinical acceptability of results we developed an EG with zones A, B, C and D. Using 2nd order polynomial equation analysis, POCT results correlate highly with the CL for the CoaguChek XS (R² = 0.955) and CoaguChek S (R² = 0.93), respectively, but this does not indicate whether POCT results are clinically interchangeable with the CL. Using the EG it is readily apparent which levels can be considered clinically identical to the CL despite analytical bias. We have demonstrated the usefulness of EG in determining the acceptability of POCT PT INR testing and how it can be used to determine cut-offs where differences in POCT results may impact clinical care. Copyright 2009 Elsevier B.V. All rights reserved.
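
    As an illustration of the error-grid idea (not the authors' actual zone definitions, which the abstract does not give), the sketch below classifies paired POCT and central-laboratory INR values into zones A-D using placeholder relative-difference thresholds.

```python
# Sketch of an error-grid style classification for paired POCT vs. central-lab INR
# results. The zone boundaries are placeholders for illustration only.
def eg_zone(poct_inr, cl_inr, a=0.15, b=0.30, c=0.50):
    """Assign a zone from the relative difference between POCT and CL INR."""
    rel = abs(poct_inr - cl_inr) / cl_inr
    if rel <= a:
        return "A"   # clinically identical, no effect on dosing
    if rel <= b:
        return "B"   # small difference, unlikely to alter treatment
    if rel <= c:
        return "C"   # difference that could alter treatment
    return "D"       # difference likely to lead to an unsafe decision

pairs = [(2.4, 2.5), (3.6, 2.9), (1.8, 2.9)]
print([eg_zone(poct, lab) for poct, lab in pairs])
```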

  8. Bias-correction and Spatial Disaggregation for Climate Change Impact Assessments at a basin scale

    NASA Astrophysics Data System (ADS)

    Nyunt, Cho; Koike, Toshio; Yamamoto, Akio; Nemoto, Toshihoro; Kitsuregawa, Masaru

    2013-04-01

    Basin-scale climate change impact studies mainly rely on general circulation models (GCMs) comprising the related emission scenarios. Realistic and reliable data from GCMs are crucial for national-scale or basin-scale impact and vulnerability assessments to build a safe society under climate change. However, GCMs fail to simulate regional climate features due to imprecise parameterization schemes in atmospheric physics and the coarse resolution scale. This study describes how to exclude unsatisfactory GCMs with respect to the focused basin, how to minimize the biases of GCM precipitation through statistical bias correction, and how to apply a spatial disaggregation scheme, a kind of downscaling, within a basin. GCM rejection is based on the regional climate features of seasonal evolution as a benchmark and mainly depends on the spatial correlation and root mean square error of precipitation and atmospheric variables over the target region. The Global Precipitation Climatology Project (GPCP) and the Japanese 25-year Reanalysis Project (JRA-25) are specified as references in evaluating the spatial pattern and error of the GCMs. The statistical bias-correction scheme addresses three main flaws of GCM precipitation: low-intensity drizzle with no dry days, underestimation of heavy rainfall, and the inter-annual variability of the local climate. Biases in heavy rainfall are corrected by generalized Pareto distribution (GPD) fitting over a peak-over-threshold series. The rain-day frequency error is fixed by rank-order statistics, and the seasonal variation problem is solved by fitting a gamma distribution in each month to the in-situ stations and the corresponding GCM grids. By applying the proposed bias-correction technique to all in-situ stations and their respective GCM grids, an easy and effective downscaling process for impact studies at the basin scale is accomplished. The applicability of the proposed method has been examined in basins in various climate regions all over the world, and the biases are controlled very well by this scheme in all of the applied basins. After that, the bias-corrected and downscaled GCM precipitation is ready to be used to drive the Water and Energy Budget based Distributed Hydrological Model (WEB-DHM) to analyse the streamflow change or water availability of a target basin under climate change in the near future. Furthermore, inter-disciplinary studies such as drought, flood, food and health can be investigated. In summary, an effective and comprehensive statistical bias-correction method was established to bridge the gap from GCM scale to basin scale without difficulty. This gap filling also promotes sound river management decisions in the basin with more reliable information to help build a resilient society.
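
    The main bias-correction steps named above (a rain-day frequency fix by rank order, month-wise gamma fitting for the bulk of the wet-day distribution, and a GPD fit for the heavy tail) can be sketched as follows. This is a simplified Python illustration with synthetic data; thresholds and the omitted GPD tail step are assumptions rather than the authors' exact procedure.

```python
# Condensed sketch of GCM precipitation bias correction for one calendar month:
# (1) zero out drizzle so the GCM rain-day frequency matches the gauges,
# (2) quantile-map wet-day amounts with gamma fits; a GPD replacement of amounts
# above a high threshold (step 3 in the abstract) is not shown here.
import numpy as np
from scipy import stats

def correct_month(gcm, obs, wet_thresh=0.1):
    """Bias-correct one month of daily GCM precipitation against gauge data."""
    # 1) rank-order drizzle fix: keep only the largest GCM days, as many as
    #    there are observed wet days
    n_wet_obs = np.sum(obs > wet_thresh)
    cut = np.sort(gcm)[::-1][n_wet_obs - 1] if n_wet_obs > 0 else np.inf
    gcm_fix = np.where(gcm >= cut, gcm, 0.0)

    # 2) gamma-to-gamma quantile mapping of wet-day amounts
    g_wet, o_wet = gcm_fix[gcm_fix > 0], obs[obs > wet_thresh]
    ga, _, gscale = stats.gamma.fit(g_wet, floc=0)
    oa, _, oscale = stats.gamma.fit(o_wet, floc=0)
    q = stats.gamma.cdf(gcm_fix, ga, scale=gscale)
    mapped = stats.gamma.ppf(np.clip(q, 1e-6, 1 - 1e-6), oa, scale=oscale)
    return np.where(gcm_fix > 0, mapped, 0.0)

rng = np.random.default_rng(0)
obs = np.where(rng.random(300) < 0.4, rng.gamma(2.0, 8.0, 300), 0.0)
gcm = rng.gamma(0.8, 3.0, 300)        # too many drizzle days, too little heavy rain
print(correct_month(gcm, obs).max(), obs.max())
```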

  9. Regional and seasonal estimates of fractional storm coverage based on station precipitation observations

    NASA Technical Reports Server (NTRS)

    Gong, Gavin; Entekhabi, Dara; Salvucci, Guido D.

    1994-01-01

    Simulated climates using numerical atmospheric general circulation models (GCMs) have been shown to be highly sensitive to the fraction of GCM grid area assumed to be wetted during rain events. The model hydrologic cycle and land-surface water and energy balance are influenced by the parameter bar-kappa, which is the dimensionless fractional wetted area for GCM grids. Hourly precipitation records for over 1700 precipitation stations within the contiguous United States are used to obtain observation-based estimates of fractional wetting that exhibit regional and seasonal variations. The spatial parameter bar-kappa is estimated from the temporal raingauge data using conditional probability relations. Monthly bar-kappa values are estimated for rectangular grid areas over the contiguous United States as defined by the Goddard Institute for Space Studies 4 deg x 5 deg GCM. A bias in the estimates is evident due to the unavoidably sparse raingauge network density, which causes some storms to go undetected by the network. This bias is corrected by deriving the probability of a storm escaping detection by the network. A Monte Carlo simulation study is also conducted that consists of synthetically generated storm arrivals over an artificial grid area. It is used to confirm the bar-kappa estimation procedure and to test the nature of the bias and its correction. These monthly fractional wetting estimates, based on the analysis of station precipitation data, provide an observational basis for assigning the influential parameter bar-kappa in GCM land-surface hydrology parameterizations.
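
    A minimal sketch of the conditional-probability estimate of bar-kappa is given below, assuming hourly gauge data for one grid box; the network-detection bias correction derived in the paper is not included, and the data are synthetic.

```python
# Sketch of a conditional-probability estimate of fractional wetting: for each hour
# with rain somewhere in the grid box, record the fraction of gauges reporting rain,
# and average over those hours.
import numpy as np

def fractional_wetting(rain, wet_thresh=0.0):
    """rain: array (n_times, n_gauges) of hourly gauge precipitation."""
    wet = rain > wet_thresh
    raining_hours = wet.any(axis=1)                 # a storm is present in the box
    if not raining_hours.any():
        return np.nan
    return wet[raining_hours].mean(axis=1).mean()   # mean wetted fraction, bar-kappa

rng = np.random.default_rng(1)
# synthetic storms covering ~30% of the box, observed by 25 gauges over 1000 hours
rain = (rng.random((1000, 25)) < 0.3) * rng.gamma(2.0, 1.5, (1000, 25))
print(round(fractional_wetting(rain), 3))
```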

  10. Effect of plasma grid bias on extracted currents in the RF driven surface-plasma negative ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belchenko, Yu., E-mail: belchenko@inp.nsk.su; Ivanov, A.; Sanin, A.

    2016-02-15

    Extraction of negative ions from the large inductively driven surface-plasma negative ion source was studied. The dependencies of the extracted currents vs plasma grid (PG) bias potential were measured for two modifications of radio-frequency driver with and without Faraday screen, for different hydrogen feeds and for different levels of cesium conditioning. The maximal PG current was independent of driver modification and it was lower in the case of inhibited cesium. The maximal extracted negative ion current depends on the potential difference between the near-PG plasma and the PG bias potentials, while the absolute value of plasma potential in the driver and in the PG area is less important for the negative ion production. The last conclusion confirms the main mechanism of negative ion production through the surface conversion of fast atoms.

  11. Thermal detection of single e-h pairs in a biased silicon crystal detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romani, R. K.; Brink, P. L.; Cabrera, B.

    We demonstrate that individual electron-hole pairs are resolved in a 1 cm² by 4 mm thick silicon crystal (0.93 g) operated at ~35 mK. One side of the detector is patterned with two quasiparticle-trap-assisted electro-thermal-feedback transition edge sensor arrays held near ground potential. The other side contains a bias grid with 20% coverage. Bias potentials up to ±160 V were used in the work reported here. A fiber optic provides 650 nm (1.9 eV) photons that each produce an electron-hole (e⁻h⁺) pair in the crystal near the grid. The energy of the drifting charges is measured with a phonon sensor noise σ ~0.09 e⁻h⁺ pair. In conclusion, the observed charge quantization is nearly identical for h⁺s or e⁻s transported across the crystal.

  12. Thermal detection of single e-h pairs in a biased silicon crystal detector

    DOE PAGES

    Romani, R. K.; Brink, P. L.; Cabrera, B.; ...

    2018-01-23

    We demonstrate that individual electron-hole pairs are resolved in a 1 cm² by 4 mm thick silicon crystal (0.93 g) operated at ~35 mK. One side of the detector is patterned with two quasiparticle-trap-assisted electro-thermal-feedback transition edge sensor arrays held near ground potential. The other side contains a bias grid with 20% coverage. Bias potentials up to ±160 V were used in the work reported here. A fiber optic provides 650 nm (1.9 eV) photons that each produce an electron-hole (e⁻h⁺) pair in the crystal near the grid. The energy of the drifting charges is measured with a phonon sensor noise σ ~0.09 e⁻h⁺ pair. In conclusion, the observed charge quantization is nearly identical for h⁺s or e⁻s transported across the crystal.

  13. Extraction of topographic and material contrasts on surfaces from SEM images obtained by energy filtering detection with low-energy primary electrons.

    PubMed

    Nagoshi, Masayasu; Aoyama, Tomohiro; Sato, Kaoru

    2013-01-01

    Scanning electron microscope (SEM) images have been obtained for practical materials using low primary electron energies and an in-lens type annular detector, with a changing negative bias voltage supplied to a grid placed in front of the detector. The kinetic-energy distribution of the detected electrons was evaluated from the gradient of the bias-energy dependence of the brightness of the images. The distribution is divided into mainly two parts at about 500 V, with high and low brightness in the low- and high-energy regions, respectively, and shows differences among surface regions having different composition and topography. The combination of the negative grid bias and pixel-by-pixel image subtraction provides band-pass filtered images and extracts the material and topographic information of the specimen surfaces. Copyright © 2012 Elsevier B.V. All rights reserved.
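
    The band-pass subtraction step can be written in a few lines. The sketch below simply differences two images recorded at different grid bias settings; the stand-in image data, bias values, and clipping of negative residuals are illustrative assumptions.

```python
# Minimal sketch of the band-pass idea: two images recorded with different
# retarding-grid bias voltages are subtracted pixel by pixel, leaving the signal
# carried by electrons whose energies fall between the two cut-offs.
import numpy as np

def band_pass(image_low_cutoff, image_high_cutoff):
    """Pixel-by-pixel difference of two energy-filtered SEM images."""
    diff = image_low_cutoff.astype(float) - image_high_cutoff.astype(float)
    return np.clip(diff, 0, None)          # negative residuals treated as noise

low = np.random.default_rng(0).integers(0, 255, (512, 512))  # stand-in, low-bias image
high = (low * 0.6).astype(int)                                 # stand-in, high-bias image
print(band_pass(low, high).mean())
```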

  14. Thermal detection of single e-h pairs in a biased silicon crystal detector

    NASA Astrophysics Data System (ADS)

    Romani, R. K.; Brink, P. L.; Cabrera, B.; Cherry, M.; Howarth, T.; Kurinsky, N.; Moffatt, R. A.; Partridge, R.; Ponce, F.; Pyle, M.; Tomada, A.; Yellin, S.; Yen, J. J.; Young, B. A.

    2018-01-01

    We demonstrate that individual electron-hole pairs are resolved in a 1 cm2 by 4 mm thick silicon crystal (0.93 g) operated at ˜35 mK. One side of the detector is patterned with two quasiparticle-trap-assisted electro-thermal-feedback transition edge sensor arrays held near ground potential. The other side contains a bias grid with 20% coverage. Bias potentials up to ±160 V were used in the work reported here. A fiber optic provides 650 nm (1.9 eV) photons that each produce an electron-hole (e- h+) pair in the crystal near the grid. The energy of the drifting charges is measured with a phonon sensor noise σ ˜0.09 e- h+ pair. The observed charge quantization is nearly identical for h+s or e-s transported across the crystal.

  15. A polar-region-adaptable systematic bias collaborative measurement method for shipboard redundant rotational inertial navigation systems

    NASA Astrophysics Data System (ADS)

    Wang, Lin; Wu, Wenqi; Wei, Guo; Lian, Junxiang; Yu, Ruihang

    2018-05-01

    The shipboard redundant rotational inertial navigation system (RINS) configuration, including a dual-axis RINS and a single-axis RINS, can satisfy the demand for marine INSs of especially high reliability while achieving a trade-off between position accuracy and cost. Generally, the dual-axis RINS is the master INS, and the single-axis RINS is the hot-backup INS for high-reliability purposes. An integrity monitoring system performs a fault detection function to ensure sailing safety. However, improving the accuracy of the backup INS in case of master INS failure has not been given enough attention. Without the aid of any external information, a systematic bias collaborative measurement method based on an augmented Kalman filter is proposed for the redundant RINSs. Estimates of the inertial sensor biases can be used by the built-in integrity monitoring system to monitor the RINS running condition. On the other hand, a position error prediction model is designed for the single-axis RINS to estimate the systematic error caused by its azimuth gyro bias. After position error compensation, the position information provided by the single-axis RINS remains highly accurate, even if the integrity monitoring system detects a dual-axis RINS fault. Moreover, use of a grid frame as the navigation frame makes the proposed method applicable in any area, including the polar regions. Semi-physical simulation and experiments including sea trials verify the validity of the method.
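
    The augmented-filter idea, estimating slowly varying sensor biases alongside the navigation error states, can be sketched generically as below. The state layout, noise levels, and measurement model are illustrative assumptions, not the error model of the paper.

```python
# Generic sketch of an augmented Kalman filter: the error-state vector is augmented
# with slowly varying sensor biases, which the filter estimates from the measured
# difference between the two systems' outputs.
import numpy as np

class AugmentedKF:
    def __init__(self, n_err, n_bias, q_err=1e-4, q_bias=1e-8, r=1e-2):
        n = n_err + n_bias
        self.x = np.zeros(n)                       # [navigation errors, sensor biases]
        self.P = np.eye(n)
        self.Q = np.diag([q_err] * n_err + [q_bias] * n_bias)
        self.R = np.eye(n_err) * r
        # biases feed the error states; the biases themselves follow a random walk
        self.F = np.eye(n)
        self.F[:n_err, n_err:] = np.eye(n_err, n_bias)
        self.H = np.hstack([np.eye(n_err), np.zeros((n_err, n_bias))])

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured difference between master and backup outputs
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x += K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x

kf = AugmentedKF(n_err=3, n_bias=3)
rng = np.random.default_rng(0)
true_bias = np.array([0.02, -0.01, 0.005])
err = np.zeros(3)
for _ in range(500):
    err = err + true_bias                 # errors grow with the constant biases
    kf.step(err + rng.normal(0, 0.1, 3))
print(kf.x[3:])                           # estimated biases, close to true_bias
```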

  16. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which notably has shown significant potential for the assimilation of datasets that are diverse with regard to the spatial resolution and their relationship. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant spiral - superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulating path was created to enforce large-scale variance and to allow for adapting parameters, such as, for example, the log-linear weights or the type of simulation path at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence with regard to the grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent from the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant path techniques introduce a bias to the simulations was explored and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, and the spatial trend of the underlying data.
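
    The log-linear pooling operator mentioned above has a compact form: the pooled probability is a weighted geometric mean of the source probabilities, renormalized, so the weights control how strongly each data component constrains the simulation. A small Python sketch with illustrative inputs follows.

```python
# Sketch of log-linear pooling of conditional probabilities from two data sources.
import numpy as np

def log_linear_pool(probs, weights):
    """probs: list of probability vectors over the same categories."""
    logp = sum(w * np.log(np.clip(p, 1e-12, None)) for p, w in zip(probs, weights))
    pooled = np.exp(logp)
    return pooled / pooled.sum()

p_geophysics = np.array([0.7, 0.2, 0.1])   # e.g. facies likelihood from geophysics
p_hard_data  = np.array([0.3, 0.5, 0.2])   # e.g. likelihood from borehole kriging
print(log_linear_pool([p_geophysics, p_hard_data], weights=[0.7, 0.3]))
```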

  17. From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact

    PubMed Central

    Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael

    2005-01-01

    General circulation models (GCM) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, involving partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data is aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17° N climatic gradient and a 31 year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield. It is concluded that coupling of GCM outputs with plot level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096

  18. Using the Atmospheric Radiation Measurement (ARM) Datasets to Evaluate Climate Models in Simulating Diurnal and Seasonal Variations of Tropical Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hailong; Burleyson, Casey D.; Ma, Po-Lun

    We use the long-term Atmospheric Radiation Measurement (ARM) datasets collected at the three Tropical Western Pacific (TWP) sites as a tropical testbed to evaluate the ability of the Community Atmosphere Model (CAM5) to simulate the various types of clouds, their seasonal and diurnal variations, and their impact on surface radiation. We conducted a series of CAM5 simulations at various horizontal grid spacings (around 2°, 1°, 0.5°, and 0.25°) with meteorological constraints from reanalysis. Model biases in the seasonal cycle of cloudiness are found to be weakly dependent on model resolution. Positive biases (up to 20%) in the annual mean total cloud fraction appear mostly in stratiform ice clouds. Higher-resolution simulations do reduce the positive bias in the frequency of ice clouds, but they inadvertently increase the negative biases in convective clouds and low-level liquid clouds, leading to a positive bias in annual mean shortwave fluxes at the sites, as high as 65 W m⁻² in the 0.25° simulation. Such resolution-dependent biases in clouds can adversely lead to biases in ambient thermodynamic properties and, in turn, feed back on clouds. Both the CAM5 model and ARM observations show distinct diurnal cycles in total, stratiform and convective cloud fractions; however, they are out of phase by 12 hours and the biases vary by site. Our results suggest that biases in deep convection affect the vertical distribution and diurnal cycle of stratiform clouds through the transport of vapor and/or the detrainment of liquid and ice. We also found that the modeled grid-mean surface longwave fluxes are systematically larger than site measurements when the grid cell that the ARM sites reside in is partially covered by ocean. The modeled longwave fluxes at such sites also lack a discernible diurnal cycle because the ocean part of the grid cell is warmer and less sensitive to radiative heating/cooling compared to land. Higher spatial resolution is more helpful in this regard. Our testbed approach can be easily adapted for the evaluation of new parameterizations being developed for CAM5 or other global or regional model simulations at high spatial resolutions.

  19. The influence of misrepresenting the nocturnal boundary layer on idealized daytime convection in large-eddy simulation

    NASA Astrophysics Data System (ADS)

    van Stratum, Bart J. H.; Stevens, Bjorn

    2015-06-01

    The influence of poorly resolving mixing processes in the nocturnal boundary layer (NBL) on the development of the convective boundary layer the following day is studied using large-eddy simulation (LES). Guided by measurement data from meteorological sites in Cabauw (Netherlands) and Hamburg (Germany), the typical summertime NBL conditions for Western Europe are characterized and used to design idealized (absence of moisture and large-scale forcings) numerical experiments of the diel cycle. Using the UCLA-LES code with a traditional Smagorinsky-Lilly subgrid model and a simplified land-surface scheme, a sensitivity study to grid spacing is performed. At horizontal grid spacings ranging from 3.125 m, at which we are capable of resolving most turbulence in the cases of interest, to a grid spacing of 100 m, which is clearly insufficient to resolve the NBL, the ability of LES to represent the NBL and the influence of NBL biases on the subsequent daytime development of the convective boundary layer are examined. Although the low-resolution experiments produce substantial biases in the NBL, the influence on daytime convection is shown to be small, with biases in the afternoon boundary layer depth and temperature of approximately 100 m and 0.5 K, which partially cancel each other in terms of the mixed-layer top relative humidity.

  20. Impact of bias-corrected reanalysis-derived lateral boundary conditions on WRF simulations

    NASA Astrophysics Data System (ADS)

    Moalafhi, Ditiro Benson; Sharma, Ashish; Evans, Jason Peter; Mehrotra, Rajeshwar; Rocheta, Eytan

    2017-08-01

    Lateral and lower boundary conditions derived from a suitable global reanalysis data set form the basis for deriving a dynamically consistent finer-resolution downscaled product for climate and hydrological assessment studies. A problem with this, however, is that systematic biases have been noted in the global reanalysis data sets that form these boundaries, biases which can be carried into the downscaled simulations, thereby reducing their accuracy or efficacy. In this work, three Weather Research and Forecasting (WRF) model downscaling experiments are undertaken to investigate the impact of bias correcting the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim (ERA-I) reanalysis atmospheric temperature and relative humidity using Atmospheric Infrared Sounder (AIRS) satellite data. The downscaling is performed over a domain centered over southern Africa between the years 2003 and 2012. Two corrections are applied at each grid cell for each variable: one adjusting the sample mean only, and one adjusting both the mean and the standard deviation. The resultant WRF simulations of near-surface temperature and precipitation are evaluated seasonally and annually against global gridded observational data sets and compared with the ERA-I reanalysis driving field. The study reveals inconsistencies between the impact of the bias correction prior to downscaling and the resultant model simulations after downscaling. Mean and standard deviation bias-corrected WRF simulations are, however, found to be marginally better than mean-only bias-corrected WRF simulations and raw ERA-I reanalysis-driven WRF simulations. Performance, however, differs when assessing different attributes of the downscaled fields. This raises questions about the efficacy of the correction procedures adopted.
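
    The grid-cell-wise correction of the mean (and, in the second variant, the standard deviation) can be sketched as below; the toy temperature field and reference statistics are illustrative, not the AIRS-based values used in the study.

```python
# Sketch of grid-cell-wise bias correction of a reanalysis field before downscaling:
# shift each cell to the reference mean, and optionally rescale to the reference
# standard deviation.
import numpy as np

def mean_correct(model, ref_mean):
    """Shift each grid cell so the time mean matches the reference mean."""
    return model - model.mean(axis=0) + ref_mean

def mean_std_correct(model, ref_mean, ref_std):
    """Shift and rescale each grid cell to match reference mean and std. dev."""
    anomaly = model - model.mean(axis=0)
    scale = ref_std / np.where(model.std(axis=0) > 0, model.std(axis=0), 1.0)
    return ref_mean + anomaly * scale

# toy field: (time, lat, lon) air temperature with a +2 K bias and inflated variance
rng = np.random.default_rng(0)
model = 290.0 + 2.0 + rng.normal(0, 3.0, (360, 4, 5))
corrected = mean_std_correct(model, ref_mean=290.0, ref_std=2.0)
print(corrected.mean().round(2), corrected.std(axis=0).mean().round(2))
```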

  1. Solving Upwind-Biased Discretizations. 2; Multigrid Solver Using Semicoarsening

    NASA Technical Reports Server (NTRS)

    Diskin, Boris

    1999-01-01

    This paper studies a novel multigrid approach to the solution of a second-order upwind-biased discretization of the convection equation in two dimensions. This approach is based on semi-coarsening and well-balanced explicit correction terms added to coarse-grid operators to maintain on the coarse grids the same cross-characteristic interaction as on the target (fine) grid. Colored relaxation schemes are used on all the levels, allowing a very efficient parallel implementation. The results of the numerical tests can be summarized as follows: 1) The residual asymptotic convergence rate of the proposed V(0, 2) multigrid cycle is about 3 per cycle. This convergence rate far surpasses the theoretical limit (4/3) predicted for standard multigrid algorithms using full coarsening. The reported efficiency does not deteriorate with increasing cycle depth (number of levels) and/or refining the target-grid mesh spacing. 2) The full multigrid algorithm (FMG) with two V(0, 2) cycles on the target grid and just one V(0, 2) cycle on all the coarse grids always provides an approximate solution with the algebraic error less than the discretization error. Estimates of the total work in the FMG algorithm range between 18 and 30 minimal work units (depending on the target discretization). Thus, the overall efficiency of the FMG solver closely approaches (if it does not achieve) the goal of textbook multigrid efficiency. 3) A novel approach to deriving a discrete solution approximating the true continuous solution with a relative accuracy given in advance is developed. An adaptive multigrid algorithm (AMA) using comparison of the solutions on two successive target grids to estimate the accuracy of the current target-grid solution is defined. A desired relative accuracy is accepted as an input parameter. The final target grid on which this accuracy can be achieved is chosen automatically in the solution process. The actual relative accuracy of the discrete solution approximation obtained by AMA is always better than the required accuracy; the computational complexity of the AMA algorithm is (nearly) optimal (comparable with the complexity of the FMG algorithm applied to solve the problem on the optimally spaced target grid).

  2. Effectiveness and limitations of parameter tuning in reducing biases of top-of-atmosphere radiation and clouds in MIROC version 5

    NASA Astrophysics Data System (ADS)

    Ogura, Tomoo; Shiogama, Hideo; Watanabe, Masahiro; Yoshimori, Masakazu; Yokohata, Tokuta; Annan, James D.; Hargreaves, Julia C.; Ushigami, Naoto; Hirota, Kazuya; Someya, Yu; Kamae, Youichi; Tatebe, Hiroaki; Kimoto, Masahide

    2017-12-01

    This study discusses how much of the biases in top-of-atmosphere (TOA) radiation and clouds can be removed by parameter tuning in the present-day simulation of a climate model in the Coupled Model Inter-comparison Project phase 5 (CMIP5) generation. We used output of a perturbed parameter ensemble (PPE) experiment conducted with an atmosphere-ocean general circulation model (AOGCM) without flux adjustment. The Model for Interdisciplinary Research on Climate version 5 (MIROC5) was used for the PPE experiment. Output of the PPE was compared with satellite observation data to evaluate the model biases and the parametric uncertainty of the biases with respect to TOA radiation and clouds. The results indicate that removing or changing the sign of the biases by parameter tuning alone is difficult. In particular, the cooling bias of the shortwave cloud radiative effect at low latitudes could not be removed, neither in the zonal mean nor at each latitude-longitude grid point. The bias was related to the overestimation of both cloud amount and cloud optical thickness, which could not be removed by the parameter tuning either. However, they could be alleviated by tuning parameters such as the maximum cumulus updraft velocity at the cloud base. On the other hand, the bias of the shortwave cloud radiative effect in the Arctic was sensitive to parameter tuning. It could be removed by tuning such parameters as albedo of ice and snow both in the zonal mean and at each grid point. The obtained results illustrate the benefit of PPE experiments which provide useful information regarding effectiveness and limitations of parameter tuning. Implementing a shallow convection parameterization is suggested as a potential measure to alleviate the biases in radiation and clouds.

  3. Implementation and testing of the gridded Vienna Mapping Function 1 (VMF1)

    NASA Astrophysics Data System (ADS)

    Kouba, J.

    2008-04-01

    The new gridded Vienna Mapping Function (VMF1) was implemented and compared to the well-established site-dependent VMF1, directly and by using precise point positioning (PPP) with International GNSS Service (IGS) Final orbits/clocks for a 1.5-year GPS data set of 11 globally distributed IGS stations. The gridded VMF1 data can be interpolated for any location and for any time after 1994, whereas the site-dependent VMF1 data are only available at selected IGS stations and only after 2004. Both gridded and site-dependent VMF1 PPP solutions agree within 1 and 2 mm for the horizontal and vertical position components, respectively, provided that respective VMF1 hydrostatic zenith path delays (ZPD) are used for hydrostatic ZPD mapping to slant delays. The total ZPD of the gridded and site-dependent VMF1 data agree with PPP ZPD solutions with RMS of 1.5 and 1.8 cm, respectively. Such precise total ZPDs could provide useful initial a priori ZPD estimates for kinematic PPP and regional static GPS solutions. The hydrostatic ZPDs of the gridded VMF1 compare with the site-dependent VMF1 ZPDs with RMS of 0.3 cm, subject to some biases and discontinuities of up to 4 cm, which are likely due to different strategies used in the generation of the site-dependent VMF1 data. The precision of gridded hydrostatic ZPD should be sufficient for accurate a priori hydrostatic ZPD mapping in all precise GPS and very long baseline interferometry (VLBI) solutions. Conversely, precise and globally distributed geodetic solutions of total ZPDs, which need to be linked to VLBI to control biases and stability, should also provide a consistent and stable reference frame for long-term and state-of-the-art numerical weather modeling.

  4. Direct Mask Overlay Inspection

    NASA Astrophysics Data System (ADS)

    Hsia, Liang-Choo; Su, Lo-Soun

    1983-11-01

    In this paper, we present a mask inspection methodology and procedure that involves direct X-Y measurements. A group of dice is selected for overlay measurement; four measurement targets are laid out in the kerf of each die. The measured coordinates are then fitted to either a "historical" grid, which reflects the individual tool bias, or to an ideal grid, in a least-squares fashion. Measurements are done using a Nikon X-Y laser interferometric measurement system, which provides a reference grid. The stability of the measurement system is essential. We then apply appropriate statistics to the residuals after the fit to determine the overlay performance. Statistical methods play an important role in the product disposition. The acceptance criterion is, however, a compromise between the cost of mask making and the final device yield. In order to satisfy the demand on mask houses for mask quality and high volume, mixing lithographic tools in mask making has become more popular, in particular mixing optical and E-beam tools. In this paper, we also discuss the inspection procedure for mixing different lithographic tools.
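
    The fit-and-residual step can be sketched as an ordinary least-squares problem: measured target coordinates are regressed on the ideal grid coordinates, and overlay performance is read from the residuals. The affine model and synthetic data below are illustrative assumptions.

```python
# Sketch of fitting measured overlay targets to a reference grid and judging
# overlay performance from the residuals.
import numpy as np

def fit_to_grid(measured, ideal):
    """Least-squares affine fit (2x2 matrix + offset) of measured to ideal points."""
    A = np.hstack([ideal, np.ones((len(ideal), 1))])      # [x, y, 1] design matrix
    coef, *_ = np.linalg.lstsq(A, measured, rcond=None)   # shape (3, 2)
    residuals = measured - A @ coef
    return coef, residuals

ideal = np.array([[x, y] for x in range(5) for y in range(5)], float) * 10.0  # mm
rng = np.random.default_rng(0)
measured = ideal * 1.0002 + np.array([0.03, -0.02]) + rng.normal(0, 0.01, ideal.shape)
coef, res = fit_to_grid(measured, ideal)
print("3-sigma overlay residual (mm):", 3 * res.std())
```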

  5. Niobium thin film coating on a 500-MHz copper cavity by plasma deposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haipeng Wang; Genfa Wu; H. Phillips

    2005-05-16

    A system using an Electron Cyclotron Resonance (ECR) plasma source for the deposition of a thin niobium film inside a copper cavity for superconducting accelerator applications has been designed and is being constructed. The system uses a 500-MHz copper cavity as both substrate and vacuum chamber. The ECR plasma will be created to produce direct niobium ion deposition. The central cylindrical grid is DC biased to control the deposition energy. This paper describes the design of several subcomponents including the vacuum chamber, RF supply, biasing grid and magnet coils. Operational parameters are compared between an operating sample deposition system and this system. Engineering work progress toward the first plasma creation will be reported here.

  6. The role of misclassification in estimating proportions and an estimator of misclassification probability

    Treesearch

    Patrick L. Zimmerman; Greg C. Liknes

    2010-01-01

    Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...

  7. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troia, Matthew J.; McManamay, Ryan A.

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.
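
    The aggregation step, binning occurrence records into 0.1° cells, is easy to sketch; the completeness criterion below (a record-count and singleton-fraction rule) is a placeholder, not one of the three metrics actually used in the study.

```python
# Sketch of binning species occurrence records into 0.1-degree grid cells and
# flagging cells as well-surveyed with a crude, illustrative completeness proxy.
import numpy as np
from collections import defaultdict

def bin_records(records, cell=0.1):
    """records: iterable of (lon, lat, species). Returns {cell index: {species: count}}."""
    cells = defaultdict(lambda: defaultdict(int))
    for lon, lat, species in records:
        key = (int(np.floor(lon / cell)), int(np.floor(lat / cell)))
        cells[key][species] += 1
    return cells

def well_surveyed(species_counts, min_records=20, max_singleton_frac=0.25):
    """Crude completeness proxy: enough records and few species seen only once."""
    n = sum(species_counts.values())
    singletons = sum(1 for c in species_counts.values() if c == 1)
    return n >= min_records and singletons / max(len(species_counts), 1) <= max_singleton_frac

records = [(-84.31, 35.93, "Lepomis macrochirus"), (-84.37, 35.96, "Micropterus dolomieu"),
           (-84.31, 35.91, "Lepomis macrochirus")]
cells = bin_records(records)
print({k: well_surveyed(v) for k, v in cells.items()})
```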

  8. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE PAGES

    Troia, Matthew J.; McManamay, Ryan A.

    2016-06-12

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percent of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.

  9. Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.

    NASA Astrophysics Data System (ADS)

    Moura, Antonio Divino; Hastenrath, Stefan

    2004-07-01

    Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.


  10. Comparison of Water Vapor Measurements by Airborne Sun Photometer and Near-Coincident in Situ and Satellite Sensors during INTEX/ITCT 2004

    NASA Technical Reports Server (NTRS)

    Livingston, J.; Schmid, B.; Redemann, J.; Russell, P. B.; Ramirez, S. A.; Eilers, J.; Gore, W.; Howard, S.; Pommier, J.; Fetzer, E. J.; hide

    2007-01-01

    We have retrieved columnar water vapor (CWV) from measurements acquired by the 14-channel NASA Ames Airborne Tracking Sun photometer (AATS-14) during 19 Jetstream 31 (J31) flights over the Gulf of Maine in summer 2004 in support of the Intercontinental Chemical Transport Experiment (INTEX)/Intercontinental Transport and Chemical Transformation (ITCT) experiments. In this paper we compare AATS-14 water vapor retrievals during aircraft vertical profiles with measurements by an onboard Vaisala HMP243 humidity sensor and by ship radiosondes and with water vapor profiles retrieved from AIRS measurements during eight Aqua overpasses. We also compare AATS CWV and MODIS infrared CWV retrievals during five Aqua and five Terra overpasses. For 35 J31 vertical profiles, mean (bias) and RMS AATS-minus-Vaisala layer-integrated water vapor (LWV) differences are -7.1 percent and 8.8 percent, respectively. For 22 aircraft profiles within 1 hour and 130 km of radiosonde soundings, AATS-minus-sonde bias and RMS LWV differences are -5.4 percent and 10.7 percent, respectively, and corresponding J31 Vaisala-minus-sonde differences are 2.3 percent and 8.4 percent, respectively. AIRS LWV retrievals within 80 km of J31 profiles yield lower bias and RMS differences compared to AATS or Vaisala retrievals than do AIRS retrievals within 150 km of the J31. In particular, for AIRS-minus-AATS LWV differences, the bias decreases from 8.8 percent to 5.8 percent, and the RMS difference decreases from 21.5 percent to 16.4 percent. Comparison of vertically resolved AIRS water vapor retrievals (LWVA) to AATS values in fixed pressure layers yields biases of -2 percent to +6 percent and RMS differences of ~20 percent below 700 hPa. Variability and magnitude of these differences increase significantly above 700 hPa. MODIS IR retrievals of CWV in 205 grid cells (5 x 5 km at nadir) are biased wet by 10.4 percent compared to AATS over-ocean near-surface retrievals. The MODIS-Aqua subset (79 grid cells) exhibits a wet bias of 5.1 percent, and the MODIS-Terra subset (126 grid cells) yields a wet bias of 13.2 percent.

  11. Evaluation of the Relative Contribution of Observing Systems in Reanalyses: Aircraft Temperature Bias and Analysis Innovations

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Dasilva, Arindo M.

    2012-01-01

    Reanalyses have become important sources of data in weather and climate research. While observations are the most crucial component of the systems, few research projects consider carefully the multitudes of assimilated observations and their impact on the results. This is partly due to the diversity of observations and their individual complexity, but also due to the unfriendly nature of the data formats. Here, we discuss the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA) and a companion dataset, the Gridded Innovations and Observations (GIO). GIO is simply a post-processing of the assimilated observations and their innovations (forecast error and analysis error) to a common spatio-temporal grid, following that of the MERRA analysis fields. This data includes in situ, retrieved and radiance observations that are assimilated and used in the reanalysis. While all these disparate observations and statistics are in a uniform easily accessible format, there are some limitations. Similar observations are binned to the grid, so that multiple observations are combined in the gridding process. The data is then implicitly thinned. Some details in the meta data may also be lost (e.g. aircraft or station ID). Nonetheless, the gridded observations should provide easy access to all the observations input to the reanalysis. To provide an example of the GIO data, a case study evaluating observing systems over the United States and statistics is presented, and demonstrates the evaluation of the observations and the data assimilation. The GIO data is used to collocate 200mb Radiosonde and Aircraft temperature measurements from 1979-2009. A known warm bias of the aircraft measurements is apparent compared to the radiosonde data. However, when larger quantities of aircraft data are available, they dominate the analysis and the radiosonde data become biased against the forecast. When AMSU radiances become available the radiosonde and aircraft analysis and forecast error take on an annual cycle. While this supports results of previous work that recommend bias corrections for the aircraft measurements, the interactions with AMSU radiances will also require further investigation. This also provides an example for reanalysis users in examining the available observations and their impact on the analysis. GIO data is presently available alongside the MERRA reanalysis.

  12. Implications of possible shuttle charging. [prediction analysis techniques for insulation and electrical grounding against ionospheric conductivity

    NASA Technical Reports Server (NTRS)

    Taylor, W. W. L.

    1979-01-01

    Shuttle charging is discussed and two analyses of shuttle charging are performed. The first predicts the effective collecting area of a wire grid biased with respect to the potential of the magnetoplasma surrounding it. The second predicts the intensity of broadband electromagnetic noise that is emitted when surface electrostatic discharges occur between the beta cloth and the wire grid sewn on it.

  13. Pervasive access to MRI bias artifact suppression service on a grid.

    PubMed

    Ardizzone, Edoardo; Gambino, Orazio; Genco, Alessandro; Pirrone, Roberto; Sorce, Salvatore

    2009-01-01

    Bias artifact corrupts MRIs in such a way that the image is afflicted by illumination variations. Some of the authors proposed the exponential entropy-driven homomorphic unsharp masking (E²D-HUM) algorithm, which corrects this artifact without any a priori hypothesis about the tissues or the MRI modality. Moreover, E²D-HUM does not care about the body part under examination and does not require any particular training task. People who want to use this algorithm, which is Matlab-based, have to set up their own computers in order to execute it. Furthermore, they have to be Matlab-skilled to exploit all the features of the algorithm. In this paper, we propose to make such an algorithm available as a service on a grid infrastructure, so that people can use it from almost everywhere, in a pervasive fashion, by means of a suitable user interface running on smartphones. The proposed solution allows physicians to use the E²D-HUM algorithm (or any other kind of algorithm, given that it is available as a service on the grid), with the algorithm being executed remotely somewhere in the grid and the results sent back to the user's device. This way, physicians do not need to be aware of how to use Matlab to process their images. The pervasive service provision for medical image enhancement is presented, along with some experimental results obtained using smartphones connected to an existing Globus-based grid infrastructure.

  14. How Well Can Saliency Models Predict Fixation Selection in Scenes Beyond Central Bias? A New Approach to Model Evaluation Using Generalized Linear Mixed Models.

    PubMed

    Nuthmann, Antje; Einhäuser, Wolfgang; Schütz, Immo

    2017-01-01

    Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is a challenging issue; this is particularly true when we consider the tendency for high-salience items to appear in the image center, combined with a tendency to look straight ahead ("central bias"). This problem is further exacerbated in the context of model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
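
    The modeling idea can be sketched with a linear mixed model as a simplified stand-in for the logistic GLMM described above: fixation tendency per scene cell is regressed on a central-bias predictor and a saliency predictor, with by-subject random intercepts (by-item effects omitted for brevity). Column names and data are illustrative, and this is not the GridFix toolbox itself.

```python
# Simplified stand-in: a linear mixed model with a central-bias predictor and a
# saliency predictor, plus by-subject random intercepts, fitted on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "subject": rng.integers(0, 20, n).astype(str),
    "center_dist": rng.uniform(0, 1, n),   # normalized distance from the image center
    "saliency": rng.uniform(0, 1, n),      # mean model saliency in the grid cell
})
# synthetic outcome: fixations decrease away from the center and increase with saliency
df["fixated"] = (2.0 - 1.5 * df.center_dist + 1.0 * df.saliency
                 + rng.normal(0, 0.5, n))

model = smf.mixedlm("fixated ~ center_dist + saliency", df, groups=df["subject"])
result = model.fit()
print(result.params[["center_dist", "saliency"]])  # saliency effect beyond central bias
```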

  15. Analysis of Helimak Plasma Using Movies of Density Contours

    NASA Astrophysics Data System (ADS)

    Williams, Chad; Gentle, Kenneth; Li, Bo

    2013-10-01

    Using an array of Langmuir probes we have created two-dimensional contour plot movies showing the arrangement, convection, and time sequence of plasma structures inside of the Texas Helimak, which approximates aspects of the tokamak SOL. These structures are seen to vary with time, magnetic field line pitch, and applied bias voltage. The probes are distributed in two sets of 48 probes arranged in a grid with two centimeter spacing, providing good spatial resolution of these structures. We find that, for negative biases, the plasma moves away from the biased plate in agreement with the simulations. For positive biases, the plasma is found close to the bias plate. Positive biases are seen to induce more radial convection than the negatively biased case. While all structures vary with time, those at lower magnetic field line pitch are seen to vary most dramatically.

  16. Precipitation Estimate Using NEXRAD Ground-Based Radar Images: Validation, Calibration and Spatial Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xuesong

    2012-12-17

    Precipitation is an important input variable for hydrologic and ecological modeling and analysis. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the continental United States at a high spatial resolution of approximately 4 × 4 km². Two major issues concerning the applications of NEXRAD data are (1) the lack of a NEXRAD geo-processing and geo-referencing program and (2) bias correction of NEXRAD estimates. In this chapter, a geographic information system (GIS) based software package that can automatically support processing of NEXRAD data for hydrologic and ecological models is presented. Some geostatistical approaches to calibrating NEXRAD data using rain gauge data are introduced, and two case studies on evaluating the accuracy of the NEXRAD Multisensor Precipitation Estimator (MPE) and calibrating MPE with rain-gauge data are presented. The first case study examines the performance of MPE in mountainous regions versus the southern plains and in the cold season versus the warm season, as well as the effect of sub-grid variability and temporal scale on NEXRAD performance. From the results of the first case study, the performance of MPE was found to be influenced by complex terrain, frozen precipitation, sub-grid variability, and temporal scale. Overall, the assessment of MPE indicates the importance of removing the bias of the MPE precipitation product before its application, especially in the complex mountainous region. The second case study examines the performance of three MPE calibration methods using rain gauge observations in the Little River Experimental Watershed in Georgia. The comparison results show that no one method performs better than the others in terms of all evaluation coefficients and for all time steps. For practical estimation of precipitation distribution, implementation of multiple methods to predict spatial precipitation is suggested.

  17. Derived Optimal Linear Combination Evapotranspiration (DOLCE): a global gridded synthesis ET estimate

    NASA Astrophysics Data System (ADS)

    Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna

    2018-02-01

    Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information on the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.
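
    A minimal sketch of the flavor of weighting described above follows: weights for the participating products are derived from the covariance of their errors against site data, so that both performance differences and error covariance are accounted for. All names and the synthetic data are illustrative assumptions; this is not the DOLCE code.

    ```python
    # Minimal sketch of an error-covariance-weighted linear combination of products,
    # in the spirit of the approach described above (illustrative only).
    import numpy as np

    def optimal_weights(products, reference):
        """products: (n_products, n_samples) estimates at tower sites;
        reference: (n_samples,) flux-tower values. Returns weights summing to 1."""
        errors = products - reference               # per-product errors at the sites
        cov = np.cov(errors)                        # error covariance between products
        ones = np.ones(products.shape[0])
        w = np.linalg.solve(cov, ones)
        return w / (ones @ w)                       # analytically optimal combination weights

    rng = np.random.default_rng(0)
    truth = rng.gamma(2.0, 1.5, size=200)                               # synthetic "tower" ET
    prods = np.stack([truth + rng.normal(0, s, 200) for s in (0.3, 0.5, 0.8)])
    w = optimal_weights(prods, truth)
    print(w, "weighted RMSE:", np.sqrt(np.mean((w @ prods - truth) ** 2)))
    ```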

  18. Long-Term Quantitative Precipitation Estimates (QPE) at High Spatial and Temporal Resolution over CONUS: Bias-Adjustment of the Radar-Only National Mosaic and Multi-sensor QPE (NMQ/Q2) Precipitation Reanalysis (2001-2012)

    NASA Astrophysics Data System (ADS)

    Prat, Olivier; Nelson, Brian; Stevens, Scott; Seo, Dong-Jun; Kim, Beomgeun

    2015-04-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor QPE (NMQ/Q2), based on the WSR-88D Next-generation Radar (NEXRAD) network over the Continental United States (CONUS), is completed for the period from 2001 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial resolution and a 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological, and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Several in-situ datasets are available to assess the biases of the radar-only product and to adjust for those biases to provide a multi-sensor QPE. The rain gauge networks that are used, such as the Global Historical Climatology Network-Daily (GHCN-D), the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), and the Climate Reference Network (CRN), have different spatial densities and temporal resolutions. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we face is the difficulty of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation. Another challenge is the choice of adjustment technique. The objective of this work is threefold. First, we investigate how the different in-situ networks impact the precipitation estimates as a function of spatial density, sensor type, and temporal resolution. Second, we assess conditional and unconditional biases of the radar-only QPE at various time scales (daily, hourly, 5-min) using in-situ precipitation observations. Finally, after assessing the bias and applying reduction or elimination techniques, we use a unique in-situ dataset merging the different rain gauge networks (CRN, ASOS, HADS, GHCN-D) to adjust the radar-only QPE product via an Inverse Distance Weighting (IDW) approach. In addition, we also investigate alternative adjustment techniques such as the kriging method and its variants (Simple Kriging: SK; Ordinary Kriging: OK; Conditional Bias-Penalized Kriging: CBPK). From this approach, we also hope to generate estimates of uncertainty for the gridded bias-adjusted QPE. Further comparison with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) and satellite products (TMPA, CMORPH, PERSIANN) is also provided in order to give a detailed picture of the improvements and remaining challenges.
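
    The sketch below illustrates the IDW step mentioned above in its simplest form: gauge/radar bias ratios at the gauge locations are interpolated back onto the radar grid by inverse distance weighting and applied multiplicatively. Coordinates, arrays, and the function are illustrative assumptions, not the NMQ/Q2 reanalysis code.

    ```python
    # Minimal sketch of an inverse-distance-weighting (IDW) adjustment of a
    # radar-only QPE using gauge/radar bias ratios (illustrative only).
    import numpy as np

    def idw_adjustment_field(grid_xy, gauge_xy, gauge_bias, power=2.0):
        """grid_xy: (n_cells, 2); gauge_xy: (n_gauges, 2); gauge_bias: gauge/radar ratios."""
        d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        return (w @ gauge_bias) / w.sum(axis=1)     # interpolated multiplicative bias

    grid = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # radar grid cell centres
    gauges = np.array([[1.0, 1.0], [9.0, 2.0]])               # gauge locations
    bias = np.array([0.9, 1.2])                                # gauge/radar ratios at gauges
    radar = np.array([5.0, 8.0, 3.0])                          # radar-only QPE at the cells
    print(radar * idw_adjustment_field(grid, gauges, bias))
    ```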

  19. Negative ion source

    DOEpatents

    Leung, Ka-Ngo; Ehlers, Kenneth W.

    1984-01-01

    An ionization vessel is divided into an ionizing zone and an extraction zone by a magnetic filter. The magnetic filter prevents high-energy electrons from crossing from the ionizing zone to the extraction zone. A small positive voltage impressed on a plasma grid, located adjacent an extraction grid, positively biases the plasma in the extraction zone to thereby prevent positive ions from migrating from the ionizing zone to the extraction zone. Low-energy electrons, which would ordinarily be dragged by the positive ions into the extraction zone, are thereby prevented from being present in the extraction zone and being extracted along with negative ions by the extraction grid. Additional electrons are suppressed from the output flux using ExB drift provided by permanent magnets and the extractor grid electrical field.

  20. Negative ion source

    DOEpatents

    Leung, K.N.; Ehlers, K.W.

    1982-08-06

    An ionization vessel is divided into an ionizing zone and an extraction zone by a magnetic filter. The magnetic filter prevents high-energy electrons from crossing from the ionizing zone to the extraction zone. A small positive voltage impressed on a plasma grid, located adjacent an extraction grid, positively biases the plasma in the extraction zone to thereby prevent positive ions from migrating from the ionizing zone to the extraction zone. Low-energy electrons, which would ordinarily be dragged by the positive ions into the extraction zone, are thereby prevented from being present in the extraction zone and being extracted along with negative ions by the extraction grid. Additional electrons are suppressed from the output flux using ExB drift provided by permanent magnets and the extractor grid electrical field.

  1. Negative ion source

    DOEpatents

    Leung, K.N.; Ehlers, K.W.

    1984-12-04

    An ionization vessel is divided into an ionizing zone and an extraction zone by a magnetic filter. The magnetic filter prevents high-energy electrons from crossing from the ionizing zone to the extraction zone. A small positive voltage impressed on a plasma grid, located adjacent an extraction grid, positively biases the plasma in the extraction zone to thereby prevent positive ions from migrating from the ionizing zone to the extraction zone. Low-energy electrons, which would ordinarily be dragged by the positive ions into the extraction zone, are thereby prevented from being present in the extraction zone and being extracted along with negative ions by the extraction grid. Additional electrons are suppressed from the output flux using ExB drift provided by permanent magnets and the extractor grid electrical field. 14 figs.

  2. Theoretical investigation on the mass loss impact on asteroseismic grid-based estimates of mass, radius, and age for RGB stars

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2018-01-01

    Aims: We aim to perform a theoretical evaluation of the impact of the uncertainty in mass loss on asteroseismic grid-based estimates of masses, radii, and ages of stars in the red giant branch (RGB) phase. Methods: We adopted the SCEPtER pipeline on a grid spanning the mass range [0.8; 1.8] M⊙. As observational constraints, we adopted the star effective temperatures, the metallicity [Fe/H], the average large frequency spacing Δν, and the frequency of maximum oscillation power νmax. The mass loss was modelled following a Reimers parametrization with the two different efficiencies η = 0.4 and η = 0.8. Results: In the RGB phase, the average random relative error (owing only to observational uncertainty) on mass and age estimates is about 8% and 30%, respectively. The bias in mass and age estimates caused by the adoption of a wrong mass loss parameter in the recovery is minor for the vast majority of the RGB evolution. The biases get larger only after the RGB bump. In the last 2.5% of the RGB lifetime the error on the mass determination reaches 6.5%, becoming larger than the random error component in this evolutionary phase. The error on the age estimate amounts to 9%, that is, equal to the random error uncertainty. These results are independent of the stellar metallicity [Fe/H] in the explored range. Conclusions: Asteroseismic-based estimates of stellar mass, radius, and age in the RGB phase can be considered mass loss independent within the range (η ∈ [0.0,0.8]) as long as the target is in an evolutionary phase preceding the RGB bump.

  3. A method to analyze molecular tagging velocimetry data using the Hough transform.

    PubMed

    Sanchez-Gonzalez, R; McManamen, B; Bowersox, R D W; North, S W

    2015-10-01

    The development of a method to analyze molecular tagging velocimetry data based on the Hough transform is presented. This method, based on line fitting, parameterizes the grid lines "written" into a flowfield. An initial proof-of-principle illustration of this method was performed to obtain two-component velocity measurements in the wake of a cylinder in a Mach 4.6 flow, using a data set derived from computational fluid dynamics simulations. The Hough transform is attractive for molecular tagging velocimetry applications since it is capable of discriminating against spurious features that could otherwise bias the fitting process. An assessment of the precision and accuracy of the method was also performed to show the dependence on analysis window size and signal-to-noise levels. The accuracy of this Hough transform-based method in quantifying intersection displacements was determined to be comparable to cross-correlation methods. The employed line parameterization avoids the assumption of linearity in the vicinity of each intersection, which is important in the limit of drastic grid deformations resulting from large velocity gradients common in high-speed flow applications. This Hough transform method has the potential to enable the direct and spatially accurate measurement of local vorticity, which is important in applications involving turbulent flowfields. Finally, two-component velocity determinations using the Hough transform from experimentally obtained images are presented, demonstrating the feasibility of the proposed analysis method.
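
    For illustration, the sketch below detects and parameterizes straight lines in an image with the standard (rho, theta) Hough transform using OpenCV. The image file and threshold values are hypothetical, and this is not the analysis code described in the paper; it only demonstrates the line parameterization the method builds on.

    ```python
    # Minimal sketch of Hough-transform line parameterization with OpenCV
    # (hypothetical image and thresholds; not the paper's MTV analysis code).
    import cv2
    import numpy as np

    img = cv2.imread("mtv_grid_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
    edges = cv2.Canny(img, 50, 150)

    # Each detected line is returned as (rho, theta): its distance from the origin
    # and its orientation, i.e. the parameterization used to locate grid lines.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)

    if lines is not None:
        for rho, theta in lines[:, 0]:
            print(f"line: rho={rho:.1f} px, theta={np.degrees(theta):.1f} deg")
    ```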

  4. Long-Term Large-Scale Bias-Adjusted Precipitation Estimates at High Spatial and Temporal Resolution Derived from the National Mosaic and Multi-Sensor QPE (NMQ/Q2) Precipitation Reanalysis over CONUS

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Seo, D. J.; Kim, B.

    2014-12-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor QPE (NMQ/Q2), based on the WSR-88D Next-generation Radar (NEXRAD) network over the Continental United States (CONUS), is nearly completed for the period from 2000 to 2012. This important milestone constitutes a unique opportunity to study precipitation processes at a 1-km spatial resolution and a 5-min temporal resolution. However, in order to be suitable for hydrological, meteorological, and climatological applications, the radar-only product needs to be bias-adjusted and merged with in-situ rain gauge information. Rain gauge networks such as the Hydrometeorological Automated Data System (HADS), the Automated Surface Observing Systems (ASOS), the Climate Reference Network (CRN), and the Global Historical Climatology Network - Daily (GHCN-D) are used to adjust for those biases and are merged with the radar-only product to provide a multi-sensor estimate. The challenges related to incorporating non-homogeneous networks over a vast area and for a long-term record are enormous. Among the challenges we face is the difficulty of incorporating surface measurements of differing resolution and quality to adjust gridded estimates of precipitation. Another challenge is the choice of adjustment technique. After assessing the bias and applying reduction or elimination techniques, we are investigating the kriging method and its variants such as simple kriging (SK), ordinary kriging (OK), and conditional bias-penalized kriging (CBPK), among others. In addition, we hope to generate estimates of uncertainty for the gridded estimate. In this work the methodology is presented, as well as a comparison between the radar-only product and the final multi-sensor QPE product. The comparison is performed at various time scales, from sub-hourly to annual. In addition, comparisons over the same period with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) and satellite products (TMPA, CMORPH, PERSIANN) are provided in order to give a detailed picture of the improvements and remaining challenges.

  5. Quantification of myocardial fibrosis by digital image analysis and interactive stereology

    PubMed Central

    2014-01-01

    Background: Cardiac fibrosis disrupts the normal myocardial structure and has a direct impact on heart function and survival. Despite already available digital methods, the pathologist's visual score is still widely considered as ground truth and used as a primary method in histomorphometric evaluations. The aim of this study was to compare the accuracy of digital image analysis tools and the pathologist's visual scoring for evaluating fibrosis in human myocardial biopsies, based on reference data obtained by point counting performed on the same images. Methods: Endomyocardial biopsy material from 38 patients diagnosed with inflammatory dilated cardiomyopathy was used. The extent of total cardiac fibrosis was assessed by image analysis on Masson's trichrome-stained tissue specimens using automated Colocalization and Genie software, by Stereology grid count, and manually by Pathologist's visual score. Results: A total of 116 slides were analyzed. The mean results obtained by the Colocalization software (13.72 ± 12.24%) were closest to the reference value of stereology (RVS), while the Genie software and Pathologist score gave a slight underestimation. RVS values correlated strongly with values obtained using the Colocalization and Genie (r > 0.9, p < 0.001) software as well as the pathologist visual score. Differences in fibrosis quantification by Colocalization and RVS were statistically insignificant. However, significant bias was found in the results obtained by using Genie versus RVS and pathologist score versus RVS, with mean difference values of -1.61% and 2.24%, respectively. Bland-Altman plots showed a bidirectional bias dependent on the magnitude of the measurement: Colocalization software overestimated the area fraction of fibrosis in the lower end, and underestimated it in the higher end of the RVS values. Meanwhile, Genie software as well as the pathologist score showed more uniform results throughout the values, with a slight underestimation in the mid-range for both. Conclusion: Both applied digital image analysis methods revealed almost perfect correlation with the criterion standard obtained by stereology grid count and, in terms of accuracy, outperformed the pathologist's visual score. The Genie algorithm proved to be the method of choice, with the only drawback being a slight underestimation bias, which is considered acceptable for both clinical and research evaluations. Virtual slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/9857909611227193 PMID:24912374
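
    The Bland-Altman quantities referred to above (the mean difference, or bias, between a method and the stereology reference, and the limits of agreement) can be computed as in the short sketch below. The numbers are made up for illustration and are not the study data.

    ```python
    # Minimal sketch of Bland-Altman bias and 95% limits of agreement
    # between a measurement method and a reference (illustrative values).
    import numpy as np

    def bland_altman(method, reference):
        diff = np.asarray(method) - np.asarray(reference)
        bias = diff.mean()                 # mean difference (systematic bias)
        loa = 1.96 * diff.std(ddof=1)      # half-width of the limits of agreement
        return bias, bias - loa, bias + loa

    genie = np.array([10.5, 22.0, 7.8, 15.1, 30.2])    # fibrosis %, hypothetical
    rvs = np.array([12.0, 23.5, 9.0, 17.0, 32.0])      # stereology reference values
    print("bias = %.2f%%, LoA = [%.2f%%, %.2f%%]" % bland_altman(genie, rvs))
    ```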

  6. Radio weak lensing shear measurement in the visibility domain - I. Methodology

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Miller, L.; Makhathini, S.; Abdalla, F. B.

    2016-12-01

    The high sensitivity of the new generation of radio telescopes such as the Square Kilometre Array (SKA) will allow cosmological weak lensing measurements at radio wavelengths that are competitive with optical surveys. We present an adaptation to radio data of lensfit, a method for galaxy shape measurement originally developed and used for optical weak lensing surveys. This likelihood method uses an analytical galaxy model and makes a Bayesian marginalization of the likelihood over uninteresting parameters. It has the feature of working directly in the visibility domain, which is the natural approach to adopt with radio interferometer data, avoiding systematics introduced by the imaging process. As a proof of concept, we provide results for visibility simulations of individual galaxies with flux density S ≥ 10 μJy at the phase centre of the proposed SKA1-MID baseline configuration, adopting 12 frequency channels in the band 950-1190 MHz. Weak lensing shear measurements from a population of galaxies with realistic flux and scalelength distributions are obtained after natural gridding of the raw visibilities. Shear measurements are expected to be affected by `noise bias': we estimate the bias in the method as a function of signal-to-noise ratio (SNR). We obtain additive and multiplicative bias values that are comparable to SKA1 requirements for SNR > 18 and SNR > 30, respectively. The multiplicative bias for SNR >10 is comparable to that found in ground-based optical surveys such as CFHTLenS, and we anticipate that similar shear measurement calibration strategies to those used for optical surveys may be used to good effect in the analysis of SKA radio interferometer data.

  7. Implications of the methodological choices for hydrologic portrayals of climate change over the contiguous United States: Statistically downscaled forcing data and hydrologic models

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Gutmann, Ethan D.; Mendoza, Pablo A.; Newman, Andrew J.; Nijssen, Bart; Livneh, Ben; Hay, Lauren E.; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    Continental-domain assessments of climate change impacts on water resources typically rely on statistically downscaled climate model outputs to force hydrologic models at a finer spatial resolution. This study examines the effects of four statistical downscaling methods [bias-corrected constructed analog (BCCA), bias-corrected spatial disaggregation applied at daily (BCSDd) and monthly scales (BCSDm), and asynchronous regression (AR)] on retrospective hydrologic simulations using three hydrologic models with their default parameters (the Community Land Model, version 4.0; the Variable Infiltration Capacity model, version 4.1.2; and the Precipitation–Runoff Modeling System, version 3.0.4) over the contiguous United States (CONUS). Biases of hydrologic simulations forced by statistically downscaled climate data relative to the simulation with observation-based gridded data are presented. Each statistical downscaling method produces different meteorological portrayals including precipitation amount, wet-day frequency, and the energy input (i.e., shortwave radiation), and their interplay affects estimations of precipitation partitioning between evapotranspiration and runoff, extreme runoff, and hydrologic states (i.e., snow and soil moisture). The analyses show that BCCA underestimates annual precipitation by as much as −250 mm, leading to unreasonable hydrologic portrayals over the CONUS for all models. Although the other three statistical downscaling methods produce a comparable precipitation bias ranging from −10 to 8 mm across the CONUS, BCSDd severely overestimates the wet-day fraction by up to 0.25, leading to different precipitation partitioning compared to the simulations with other downscaled data. Overall, the choice of downscaling method contributes to less spread in runoff estimates (by a factor of 1.5–3) than the choice of hydrologic model with use of the default parameters if BCCA is excluded.

  8. Euler solutions to nonlinear acoustics of non-lifting rotor blades

    NASA Technical Reports Server (NTRS)

    Baeder, J. D.

    1991-01-01

    For the first time a computational fluid dynamics (CFD) method is used to calculate directly the high-speed impulsive (HSI) noise of a non-lifting hovering rotor blade out to a distance of over three rotor radii. In order to accurately propagate the acoustic wave in a stable and efficient manner, an implicit upwind-biased Euler method is solved on a grid with points clustered along the line of propagation. A detailed validation of the code is performed for a rectangular rotor blade at tip Mach numbers ranging from 0.88 to 0.92. The agreement with experiment is excellent at both the sonic cylinder and at 2.18 rotor radii. The agreement at 3.09 rotor radii is still very good, showing improvements over the results from the best previous method. Grid sensitivity studies indicate that with special attention to the location of the boundaries a grid with approximately 60,000 points is adequate. This results in a computational time of approximately 40 minutes on a Cray-XMP. The practicality of the method to calculate HSI noise is demonstrated by expanding the scope of the investigation to examine the rectangular blade as well as a highly swept and tapered blade over a tip Mach number range of 0.80 to 0.95. Comparisons with experimental data are excellent and the advantages of planform modifications are clearly evident. New insight is gained into the mechanisms of nonlinear propagation and the minimum distance at which a valid comparison of different rotors can be made: approximately two rotor radii from the center of rotation.

  9. Euler solutions to nonlinear acoustics of non-lifting hovering rotor blades

    NASA Technical Reports Server (NTRS)

    Baeder, J. D.

    1991-01-01

    For the first time a computational fluid dynamics (CFD) method is used to calculate directly the high-speed impulsive (HSI) noise of a non-lifting hovering rotor blade out to a distance of over three rotor radii. In order to accurately propagate the acoustic wave in a stable and efficient manner, an implicit upwind-biased Euler method is solved on a grid with points clustered along the line of propagation. A detailed validation of the code is performed for a rectangular rotor blade at tip Mach numbers ranging from 0.88 to 0.92. The agreement with experiment is excellent at both the sonic cylinder and at 2.18 rotor radii. The agreement at 3.09 rotor radii is still very good, showing improvements over the results from the best previous method. Grid sensitivity studies indicate that with special attention to the location of the boundaries a grid with approximately 60,000 points is adequate. This results in a computational time of approximately 40 minutes on a Cray-XMP. The practicality of the method to calculate HSI noise is demonstrated by expanding the scope of the investigation to examine the rectangular blade as well as a highly swept and tapered blade over a tip Mach number range of 0.80 to 0.95. Comparisons with experimental data are excellent and the advantages of planform modifications are clearly evident. New insight is gained into the mechanisms of nonlinear propagation and the minimum distance at which a valid comparison of different rotors can be made: approximately two rotor radii from the center of rotation.

  10. Multigrid Method for Modeling Multi-Dimensional Combustion with Detailed Chemistry

    NASA Technical Reports Server (NTRS)

    Zheng, Xiaoqing; Liu, Chaoqun; Liao, Changming; Liu, Zhining; McCormick, Steve

    1996-01-01

    A highly accurate and efficient numerical method is developed for modeling 3-D reacting flows with detailed chemistry. A contravariant velocity-based governing system is developed for general curvilinear coordinates to maintain simplicity of the continuity equation and compactness of the discretization stencil. A fully-implicit backward Euler technique and a third-order monotone upwind-biased scheme on a staggered grid are used for the respective temporal and spatial terms. An efficient semi-coarsening multigrid method based on line-distributive relaxation is used as the flow solver. The species equations are solved in a fully coupled way and the chemical reaction source terms are treated implicitly. Example results are shown for a 3-D gas turbine combustor with strong swirling inflows.

  11. Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method

    NASA Technical Reports Server (NTRS)

    Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.

    1990-01-01

    A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.
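
    As a worked illustration of the Zel'dovich step mentioned above, the sketch below maps initial (Lagrangian) grid positions to final positions through a displacement field derived from a toy potential. It is a one-dimensional, unit-free example under assumed simplifications and is not the authors' iterative reconstruction pipeline.

    ```python
    # Minimal 1-D sketch of the Zel'dovich mapping between initial and final
    # particle positions, x(q) = q + D * psi(q), with psi = -d(potential)/dq.
    import numpy as np

    def zeldovich_positions(q, potential, growth_factor, cell_size=1.0):
        """Displace initial grid positions q by the scaled gradient of a potential."""
        psi = -np.gradient(potential, cell_size)
        return q + growth_factor * psi

    q = np.linspace(0.0, 10.0, 11)                 # initial grid positions (toy units)
    phi = 0.5 * np.sin(2 * np.pi * q / 10.0)       # toy velocity potential
    print(zeldovich_positions(q, phi, growth_factor=0.8))
    ```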

  12. Spatial scaling of net primary productivity using subpixel landcover information

    NASA Astrophysics Data System (ADS)

    Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.

    2008-10-01

    Gridding the land surface into coarse homogeneous pixels may cause important biases in ecosystem model estimations of carbon budget components at local, regional, and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is done on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP values obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates from a coupled carbon-hydrology model (the BEPS-TerrainLab model) made at a 1-km resolution over a watershed (the Baohe River Basin) located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.
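
    The sketch below captures the idea in its simplest form: a coarse-pixel NPP value is rescaled by a correction factor built from the subpixel land-cover composition. The per-cover productivity factors and array names are illustrative assumptions, not the BEPS-TerrainLab scheme.

    ```python
    # Minimal sketch of a subpixel land-cover correction of a coarse NPP estimate
    # (illustrative cover fractions and productivity factors).
    import numpy as np

    def corrected_npp(npp_coarse, cover_fractions, cover_npp_factors):
        """cover_fractions: subpixel fraction of each land-cover type (sums to 1);
        cover_npp_factors: relative NPP of each type w.r.t. the dominant type."""
        correction = np.sum(cover_fractions * cover_npp_factors)
        return npp_coarse * correction

    fractions = np.array([0.6, 0.3, 0.1])   # e.g. forest, cropland, barren within the pixel
    factors = np.array([1.0, 0.7, 0.1])     # relative productivity of each cover type
    print(corrected_npp(500.0, fractions, factors))   # toy value in g C m-2 yr-1
    ```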

  13. High-voltage spark carbon-fiber sticky-tape data analyzer

    NASA Technical Reports Server (NTRS)

    Yang, L. C.; Hull, G. G.

    1980-01-01

    An efficient method for detecting carbon fibers collected on a sticky-tape monitor was developed. The fibers were released in a simulated crash-fire scenario involving carbon fiber composite material. The method utilized the ability of a fiber to initiate a spark across a set of alternately biased high-voltage electrodes to electronically count the number of fiber fragments collected on the tape. It was found that the spark, which carries high energy and is of very short duration, is capable of partially damaging or consuming the fiber fragments. It also creates a mechanical disturbance that ejects the fiber from the grid. Both characteristics were helpful in establishing a single discharge pulse for each fiber segment.

  14. A critical remark on the applicability of E-OBS European gridded temperature data set for validating control climate simulations

    NASA Astrophysics Data System (ADS)

    Kyselý, Jan; Plavcová, Eva

    2010-12-01

    The study compares daily maximum (Tmax) and minimum (Tmin) temperatures in two data sets interpolated from irregularly spaced meteorological stations to a regular grid: the European gridded data set (E-OBS), produced from a relatively sparse network of stations available in the European Climate Assessment and Dataset (ECA&D) project, and a data set gridded onto the same grid from a high-density network of stations in the Czech Republic (GriSt). We show that large differences exist between the two gridded data sets, particularly for Tmin. The errors tend to be larger in tails of the distributions. In winter, temperatures below the 10% quantile of Tmin, which is still far from the very tail of the distribution, are too warm by almost 2°C in E-OBS on average. A large bias is found also for the diurnal temperature range. Comparison with simple average series from stations in two regions reveals that differences between GriSt and the station averages are minor relative to differences between E-OBS and either of the two data sets. The large deviations between the two gridded data sets affect conclusions concerning validation of temperature characteristics in regional climate model (RCM) simulations. The bias of the E-OBS data set and limitations with respect to its applicability for evaluating RCMs stem primarily from (1) insufficient density of information from station observations used for the interpolation, including the fact that the stations available may not be representative for a wider area, and (2) inconsistency between the radii of the areal average values in high-resolution RCMs and E-OBS. Further increases in the amount and quality of station data available within ECA&D and used in the E-OBS data set are essentially needed for more reliable validation of climate models against recent climate on a continental scale.

  15. Updates to Multi-Dimensional Flux Reconstruction for Hypersonic Simulations on Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2010-01-01

    The quality of simulated hypersonic stagnation region heating with tetrahedral meshes is investigated by using an updated three-dimensional, upwind reconstruction algorithm for the inviscid flux vector. An earlier implementation of this algorithm provided improved symmetry characteristics on tetrahedral grids compared to conventional reconstruction methods. The original formulation however displayed quantitative differences in heating and shear that were as large as 25% compared to a benchmark, structured-grid solution. The primary cause of this discrepancy is found to be an inherent inconsistency in the formulation of the flux limiter. The inconsistency is removed by employing a Green-Gauss formulation of primitive gradients at nodes to replace the previous Gram-Schmidt algorithm. Current results are now in good agreement with benchmark solutions for two challenge problems: (1) hypersonic flow over a three-dimensional cylindrical section with special attention to the uniformity of the solution in the spanwise direction and (2) hypersonic flow over a three-dimensional sphere. The tetrahedral cells used in the simulation are derived from a structured grid where cell faces are bisected across the diagonal resulting in a consistent pattern of diagonals running in a biased direction across the otherwise symmetric domain. This grid is known to accentuate problems in both shock capturing and stagnation region heating encountered with conventional, quasi-one-dimensional inviscid flux reconstruction algorithms. Therefore the test problems provide a sensitive indicator for algorithmic effects on heating. Additional simulations on a sharp, double cone and the shuttle orbiter are then presented to demonstrate the capabilities of the new algorithm on more geometrically complex flows with tetrahedral grids. These results provide the first indication that pure tetrahedral elements utilizing the updated, three-dimensional, upwind reconstruction algorithm may be used for the simulation of heating and shear in hypersonic flows in upwind, finite volume formulations.

  16. Towards a consistent framework to oversample multi-sensors, multi-species satellite data into a common grid

    NASA Astrophysics Data System (ADS)

    Sun, K.; Zhu, L.; Gonzalez Abad, G.; Nowlan, C. R.; Miller, C. E.; Huang, G.; Liu, X.; Chance, K.; Yang, K.

    2017-12-01

    It has been well demonstrated that regridding Level 2 products (satellite observations from individual footprints, or pixels) from multiple sensors/species onto regular spatial and temporal grids makes the data more accessible for scientific studies and can even lead to additional discoveries. However, synergizing multiple species retrieved from multiple satellite sensors faces many challenges, including differences in spatial coverage, viewing geometry, and data filtering criteria. These differences will lead to errors and biases if not treated carefully. Operational gridded products are often provided at 0.25° × 0.25° resolution on a global scale, which is too coarse for locally heterogeneous emission sources (e.g., urban areas), and at fixed temporal intervals (e.g., daily or monthly). We propose a consistent framework to fully use and properly weight the information of all possible individual satellite observations. A key aspect of this work is an accurate knowledge of the spatial response function (SRF) of the satellite Level 2 pixels. We found that the conventional overlap-area-weighting method (tessellation) is accurate only when the SRF is homogeneous within the parameterized pixel boundary and zero outside the boundary. A tessellation error arises if the SRF is a smooth distribution and this distribution is not properly accounted for. On the other hand, discretizing the SRF at the destination grid will also induce errors. By balancing these error sources, we found that the SRF should be used when gridding OMI data to fine (0.2° or finer) resolutions. Case studies merging multiple species and wind data onto a 0.01° grid will be shown in the presentation.
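
    The sketch below shows the basic mechanics of SRF-weighted oversampling: each Level 2 observation contributes to every fine-grid cell with a weight given by its spatial response function evaluated at the cell center. A 2-D Gaussian is used here as a stand-in SRF, and all arrays are illustrative assumptions rather than an actual satellite product.

    ```python
    # Minimal sketch of SRF-weighted oversampling of footprint observations onto
    # a fine grid, using a 2-D Gaussian as a stand-in spatial response function.
    import numpy as np

    def oversample(obs, centers, widths, grid_x, grid_y):
        """obs: (n,) column values; centers: (n, 2) pixel centres (lon, lat);
        widths: (n, 2) SRF e-folding half-widths. Returns the weighted-mean grid."""
        gx, gy = np.meshgrid(grid_x, grid_y)
        num = np.zeros_like(gx)
        den = np.zeros_like(gx)
        for v, (cx, cy), (wx, wy) in zip(obs, centers, widths):
            srf = np.exp(-(((gx - cx) / wx) ** 2 + ((gy - cy) / wy) ** 2))
            num += srf * v
            den += srf
        return num / np.maximum(den, 1e-12)

    grid_lon = np.arange(-0.5, 0.5, 0.01)
    grid_lat = np.arange(-0.5, 0.5, 0.01)
    obs = np.array([1.0, 1.4])                        # two hypothetical column values
    centers = np.array([[-0.1, 0.0], [0.15, 0.05]])   # footprint centres
    widths = np.array([[0.12, 0.1], [0.12, 0.1]])     # footprint SRF widths
    print(oversample(obs, centers, widths, grid_lon, grid_lat).shape)
    ```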

  17. Optical sectioning microscopy using two-frame structured illumination and Hilbert-Huang data processing

    NASA Astrophysics Data System (ADS)

    Trusiak, M.; Patorski, K.; Tkaczyk, T.

    2014-12-01

    We propose a fast, simple and experimentally robust method for reconstructing background-rejected optically sectioned microscopic images using a two-shot structured illumination approach. The data demodulation technique requires two grid-illumination images mutually phase-shifted by π (half a grid period), but the precise phase displacement value is not critical. Upon subtraction of the two frames, an input pattern with increased grid modulation is computed. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced fast empirical mode decomposition (EFEMD) method for object spatial frequency selection (noise reduction and bias term removal), and (2) calculation of a high-contrast optically sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with the results obtained for the same input data using conventional structured-illumination (SIM) and HiLo microscopy methods. The input data were collected for studying highly scattering tissue samples in reflectance mode. In comparison with the conventional three-frame SIM technique, we need one frame fewer, and no stringent requirement on the exact phase shift between recorded frames is imposed. The outcome of the HiLo algorithm is strongly dependent on the set of parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering and the η parameter value for optically sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to demodulate the input pattern makes the proposed method well suited for real-time in-vivo studies. The current implementation completes full processing in 0.25 s on a medium-class PC (Intel i7, 2.1 GHz processor and 8 GB RAM). A simple modification that extracts only the first two BIMFs with a fixed filter window size reduces the computing time to 0.11 s (8 frames/s).
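
    A heavily simplified sketch of the two-shot demodulation idea follows: the two π-shifted grid images are subtracted to boost the grid modulation, and the local fringe amplitude is then estimated with an FFT-based spiral (vortex) Hilbert transform. The EFEMD pre-filtering step is omitted, the toy fringe pattern stands in for real acquisitions, and this is not the authors' implementation.

    ```python
    # Heavily simplified sketch: two-frame subtraction followed by a spiral-phase
    # (vortex) Hilbert transform to estimate the local fringe amplitude.
    import numpy as np

    def spiral_hilbert(img):
        """Apply the 2-D spiral phase filter exp(i*atan2(ky, kx)) in Fourier space."""
        ky, kx = np.meshgrid(np.fft.fftfreq(img.shape[0]), np.fft.fftfreq(img.shape[1]),
                             indexing="ij")
        spiral = np.exp(1j * np.arctan2(ky, kx))
        spiral[0, 0] = 0.0                          # zero the undefined DC term
        return np.fft.ifft2(spiral * np.fft.fft2(img))

    def sectioned_image(frame_0, frame_pi):
        diff = frame_0.astype(float) - frame_pi.astype(float)    # doubles grid modulation
        return np.sqrt(diff ** 2 + np.abs(spiral_hilbert(diff)) ** 2)   # local amplitude

    # Toy fringe pattern standing in for the two phase-shifted acquisitions.
    y, x = np.mgrid[0:128, 0:128]
    obj = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 800.0)
    f0 = obj * (1 + np.cos(2 * np.pi * x / 8))
    fpi = obj * (1 + np.cos(2 * np.pi * x / 8 + np.pi))
    print(sectioned_image(f0, fpi).max())
    ```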

  18. Design and realization of assessment software for DC-bias of transformers

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Liu, Lian-guang; Yuan, Zhong-chen

    2013-03-01

    Under DC bias, a transformer operating at its rated state will become partially saturated, and its magnetizing current will be distorted with various harmonics, increasing reactive power demand and causing other associated phenomena that threaten the safe operation of the power grid. This paper establishes a transformer saturation circuit model of DC bias based on the duality principle and the Jiles-Atherton (J-A) theory, which can reflect the hysteresis characteristics of the iron core, and develops software that can assess the effects of transformer DC bias using hybrid programming in C#.net and MATLAB on the Microsoft .NET platform. The software is able to simulate the magnetizing current of transformers with different structures and to assess their saturation level and the influence of the associated phenomena according to the transformer parameters and the equivalent DC voltage. It provides an effective method to assess the impacts on transformers caused by magnetic storm disasters and by the earthing current of HVDC projects.

  19. Statistical errors and systematic biases in the calibration of the convective core overshooting with eclipsing binaries. A case study: TZ Fornacis

    NASA Astrophysics Data System (ADS)

    Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2017-04-01

    Context. Recently published work has made high-precision fundamental parameters available for the binary system TZ Fornacis, making it an ideal target for the calibration of stellar models. Aims: Relying on these observations, we attempt to constrain the initial helium abundance, the age and the efficiency of the convective core overshooting. Our main aim is to point out the biases in the results caused by not accounting for some sources of uncertainty. Methods: We adopt the SCEPtER pipeline, a maximum likelihood technique based on fine grids of stellar models computed for various values of metallicity, initial helium abundance and overshooting efficiency by means of two independent stellar evolutionary codes, namely FRANEC and MESA. Results: Besides the degeneracy between the estimated age and overshooting efficiency, we found the existence of multiple independent groups of solutions. The best one suggests a system of age 1.10 ± 0.07 Gyr composed of a primary star in the central helium-burning stage and a secondary in the sub-giant branch (SGB). The resulting initial helium abundance is consistent with a helium-to-metal enrichment ratio of ΔY/ΔZ = 1; the core overshooting parameter is β = 0.15 ± 0.01 for FRANEC and fov = 0.013 ± 0.001 for MESA. The second class of solutions, characterised by a worse goodness-of-fit, still suggests a primary star in the central helium-burning stage but a secondary in the overall contraction phase, at the end of the main sequence (MS). In this case, the FRANEC grid provides an age of Gyr and a core overshooting parameter , while the MESA grid gives 1.23 ± 0.03 Gyr and fov = 0.025 ± 0.003. We analyse the impact on the results of a larger, but typical, mass uncertainty and of neglecting the uncertainty in the initial helium content of the system. We show that very precise mass determinations, with uncertainties of a few thousandths of a solar mass, are required to obtain reliable determinations of stellar parameters, as mass errors larger than approximately 1% lead to estimates that are not only less precise but also biased. Moreover, we show that a fit obtained with a grid of models computed at a fixed ΔY/ΔZ - thus neglecting the current uncertainty in the initial helium content of the system - can provide severely biased age and overshooting estimates. The possibility of independent overshooting efficiencies for the two stars of the system is also explored. Conclusions: The present analysis confirms that constraining the core overshooting parameter by means of binary systems is a very difficult task that requires an observational precision still rarely achieved and a robust statistical treatment of the error sources.

  20. Exploring transmembrane transport through alpha-hemolysin with grid-steered molecular dynamics.

    PubMed

    Wells, David B; Abramkina, Volha; Aksimentiev, Aleksei

    2007-09-28

    The transport of biomolecules across cell boundaries is central to cellular function. While structures of many membrane channels are known, the permeation mechanism is known only for a select few. Molecular dynamics (MD) is a computational method that can provide an accurate description of permeation events at the atomic level, which is required for understanding the transport mechanism. However, due to the relatively short time scales accessible to this method, it is of limited utility. Here, we present a method for all-atom simulation of electric field-driven transport of large solutes through membrane channels, which in tens of nanoseconds can provide a realistic account of a permeation event that would require a millisecond simulation using conventional MD. In this method, the average distribution of the electrostatic potential in a membrane channel under a transmembrane bias of interest is determined first from an all-atom MD simulation. This electrostatic potential, defined on a grid, is subsequently applied to a charged solute to steer its permeation through the membrane channel. We apply this method to investigate permeation of DNA strands, DNA hairpins, and alpha-helical peptides through alpha-hemolysin. To test the accuracy of the method, we computed the relative permeation rates of DNA strands having different sequences and global orientations. The results of the grid-steered MD (G-SMD) simulations were found to be in good agreement with experiment.
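
    The sketch below illustrates the core idea in the simplest possible terms: a potential stored on a regular grid is differentiated to give a field, the field components are interpolated at a particle's position, and the product with the particle's charge gives the steering force. The toy potential, grid, and units are assumptions for illustration; this is not the simulation-engine implementation used in the paper.

    ```python
    # Minimal sketch of grid-steered forcing: interpolate the gradient of a
    # gridded potential at a charged particle's position and apply F = q * E.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical 3-D potential map on a regular grid (arbitrary units).
    x = y = z = np.linspace(0.0, 5.0, 51)
    zz = np.meshgrid(x, y, z, indexing="ij")[2]
    phi = 2.0 * (zz / 5.0)                      # toy linear potential drop along z

    dx = x[1] - x[0]
    Ex, Ey, Ez = [-g for g in np.gradient(phi, dx)]          # field = -grad(phi)
    field = [RegularGridInterpolator((x, y, z), comp) for comp in (Ex, Ey, Ez)]

    def steering_force(position, charge):
        """Force on a point charge at `position` from the gridded potential."""
        pt = np.atleast_2d(position)                          # (1, 3) query point
        return charge * np.array([f(pt)[0] for f in field])

    print(steering_force([2.5, 2.5, 1.0], charge=-1.0))
    ```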

  1. Quantification of myocardial fibrosis by digital image analysis and interactive stereology.

    PubMed

    Daunoravicius, Dainius; Besusparis, Justinas; Zurauskas, Edvardas; Laurinaviciene, Aida; Bironaite, Daiva; Pankuweit, Sabine; Plancoulaine, Benoit; Herlin, Paulette; Bogomolovas, Julius; Grabauskiene, Virginija; Laurinavicius, Arvydas

    2014-06-09

    Cardiac fibrosis disrupts the normal myocardial structure and has a direct impact on heart function and survival. Despite already available digital methods, the pathologist's visual score is still widely considered as ground truth and used as a primary method in histomorphometric evaluations. The aim of this study was to compare the accuracy of digital image analysis tools and the pathologist's visual scoring for evaluating fibrosis in human myocardial biopsies, based on reference data obtained by point counting performed on the same images. Endomyocardial biopsy material from 38 patients diagnosed with inflammatory dilated cardiomyopathy was used. The extent of total cardiac fibrosis was assessed by image analysis on Masson's trichrome-stained tissue specimens using automated Colocalization and Genie software, by Stereology grid count and manually by Pathologist's visual score. A total of 116 slides were analyzed. The mean results obtained by the Colocalization software (13.72 ± 12.24%) were closest to the reference value of stereology (RVS), while the Genie software and Pathologist score gave a slight underestimation. RVS values correlated strongly with values obtained using the Colocalization and Genie (r>0.9, p<0.001) software as well as the pathologist visual score. Differences in fibrosis quantification by Colocalization and RVS were statistically insignificant. However, significant bias was found in the results obtained by using Genie versus RVS and pathologist score versus RVS with mean difference values of: -1.61% and 2.24%. Bland-Altman plots showed a bidirectional bias dependent on the magnitude of the measurement: Colocalization software overestimated the area fraction of fibrosis in the lower end, and underestimated in the higher end of the RVS values. Meanwhile, Genie software as well as the pathologist score showed more uniform results throughout the values, with a slight underestimation in the mid-range for both. Both applied digital image analysis methods revealed almost perfect correlation with the criterion standard obtained by stereology grid count and, in terms of accuracy, outperformed the pathologist's visual score. Genie algorithm proved to be the method of choice with the only drawback of a slight underestimation bias, which is considered acceptable for both clinical and research evaluations. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/9857909611227193.

  2. Influence of model grid size on the simulation of PM2.5 and the related excess mortality in Japan

    NASA Astrophysics Data System (ADS)

    Goto, D.; Ueda, K.; Ng, C. F.; Takami, A.; Ariga, T.; Matsuhashi, K.; Nakajima, T.

    2016-12-01

    Aerosols, especially PM2.5, can affect air pollution, climate change, and human health. The estimation of health impacts due to PM2.5 is often performed using global and regional aerosol transport models with various horizontal resolutions. To investigate the dependence of the simulated PM2.5 on model grid sizes, we executed two simulations using a high-resolution model (approximately 10 km; HRM) and a low-resolution model (approximately 100 km; LRM, a typical value for general circulation models). In this study, we used a global-to-regional atmospheric transport model to simulate PM2.5 in Japan with a stretched grid system in the HRM and a uniform grid system in the LRM for the present (the year 2000) and the future (the year 2030, following the Representative Concentration Pathway 4.5, RCP4.5). These calculations were performed by nudging meteorological fields obtained from an atmosphere-ocean coupled model and providing the emission inventories used in the coupled model. After correcting for bias, we calculated the excess mortality due to long-term exposure to PM2.5 for the elderly. Results showed that, compared to the HRM, the LRM underestimated PM2.5 concentrations in 2000 and 2030 by approximately 30%, excess mortality in 2000 by approximately 60%, and excess mortality in 2030 by approximately 90%. The estimation of excess mortality is therefore more reliable with high-resolution grid sizes. In addition, we found that our nesting method can be a useful tool for obtaining better estimates.

  3. Precipitation frequency analysis based on regional climate simulations in Central Alberta

    NASA Astrophysics Data System (ADS)

    Kuo, Chun-Chao; Gan, Thian Yew; Hanrahan, Janel L.

    2014-03-01

    A Regional Climate Model (RCM), MM5 (the Fifth-Generation Pennsylvania State University/National Center for Atmospheric Research mesoscale model), is used to simulate summer precipitation in Central Alberta. MM5 was set up with a one-way, three-domain nested framework, with domain resolutions of 27, 9, and 3 km, respectively, and forced with ERA-Interim reanalysis data of ECMWF (European Centre for Medium-Range Weather Forecasts). The objective is to develop high-resolution, grid-based Intensity-Duration-Frequency (IDF) curves based on the simulated annual maximum precipitation (AMP) data for durations ranging from 15 min to 24 h. The performance of MM5 was assessed in terms of simulated rainfall intensity, precipitable water, and 2-m air temperature. Next, the grid-based IDF curves derived from MM5 were compared to IDF curves derived from six RCMs of the North American Regional Climate Change Assessment Program (NARCCAP), set up with 50-km grids and driven with NCEP-DOE (National Centers for Environmental Prediction-Department of Energy) Reanalysis II data, and to regional IDF curves derived from observed rain gauge data (RG-IDF). The results indicate that the 6-h simulated precipitable water and 2-m temperature agree well with the ERA-Interim reanalysis data. However, compared to RG-IDF curves, IDF curves based on simulated precipitation data of MM5 are overestimated, especially for the 2-year return period. In contrast, IDF curves developed from NARCCAP data suffer from underestimation and differ more from RG-IDF curves than the MM5 IDF curves do. The overestimation of the MM5 IDF curves was corrected by a quantile-based bias correction method. By dynamically downscaling ERA-Interim data and applying bias correction, it is possible to develop IDF curves for regions with limited or no rain gauge data. This estimation process can be further extended to predict future grid-based IDF curves subject to possible climate change impacts, based on climate change projections of GCMs (general circulation models) of the IPCC (Intergovernmental Panel on Climate Change).
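
    The quantile-based correction mentioned above can be sketched as an empirical quantile mapping: each simulated intensity is mapped through the observed quantile that has the same non-exceedance probability in a calibration sample. The synthetic data and function below are illustrative assumptions, not the correction applied to the MM5 output in the study.

    ```python
    # Minimal sketch of empirical quantile mapping for bias correction of
    # simulated rainfall intensities against observations (synthetic data).
    import numpy as np

    def quantile_map(simulated, obs_ref, sim_ref):
        """Map simulated values through the observed quantiles that share the same
        non-exceedance probability in the reference (calibration) sample."""
        probs = np.linspace(0.01, 0.99, 99)
        sim_q = np.quantile(sim_ref, probs)
        obs_q = np.quantile(obs_ref, probs)
        return np.interp(simulated, sim_q, obs_q)

    rng = np.random.default_rng(1)
    obs_ref = rng.gamma(2.0, 5.0, 1000)                   # observed intensities (mm/h)
    sim_ref = obs_ref * 1.3 + rng.normal(0, 1, 1000)      # model overestimates
    print(quantile_map(np.array([10.0, 25.0, 60.0]), obs_ref, sim_ref))
    ```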

  4. Gridded versus point data in the context of validation results from experiments of the COST action VALUE

    NASA Astrophysics Data System (ADS)

    Wibig, Joanna; Kotlarski, Sven; Maraun, Douglas; Soares, Pedro; Jaczewski, Adam; Czernecki, Bartosz; Gutierrez, Jose; Pongracz, Rita; Bartholy, Judit

    2016-04-01

    The aim of the paper is to compare the bias of selected ERA-Interim-driven RCM projections when evaluated against gridded observation data (regridded to the same resolution as the considered RCM output) with the bias obtained when they are evaluated against station data, in order to isolate the representativeness issue from the downscaling performance. The comparison is carried out for experiments of the COST action VALUE, so the same data period (1979-2008) and the same set of 85 stations were used. As gridded observations, the E-OBS data from the grid points closest to the selected stations were used. The comparison was made for daily precipitation totals as well as daily minimum, maximum, and mean temperature. A number of indices were analysed to weigh up representativeness issues for marginal and temporal aspects. Relevant marginal aspects are described by the distributions of average and extreme values, whereas temporal aspects are represented by seasonality and the length of extreme spells. The set of indices used in VALUE experiment 1 is calculated for each dataset (stations, E-OBS, selected RCM outputs), and the biases of the RCM outputs against station and E-OBS data are obtained and compared. Those with the most significant differences are analysed in detail.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.

    Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment/Dynamics of the Madden-Julian Oscillation (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain-rate statistics to parameters and parameterizations of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rates from large and deep convective cores. Sensitivity runs involving variation of parameters that affect the raindrop or ice particle size distribution (e.g., a more aggressive break-up process) generally reduce the bias in rain-rate and boundary-layer temperature statistics, as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while it worsens at 4 km grid spacing as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through the parameterization of turbulent mixing and hydrometeor break-up, may provide a potential avenue for correcting cloud statistics and associated boundary-layer temperature biases in regional and global cloud-permitting model simulations.

  6. Covariance analysis of the airborne laser ranging system

    NASA Technical Reports Server (NTRS)

    Englar, T. S., Jr.; Hammond, C. L.; Gibbs, B. P.

    1981-01-01

    The requirements and limitations of employing an airborne laser ranging system for detecting crustal shifts of the Earth within centimeters over a region of approximately 200 by 400 km are presented. The system consists of an aircraft which flies over a grid of ground deployed retroreflectors, making six passes over the grid at two different altitudes. The retroreflector baseline errors are assumed to result from measurement noise, a priori errors on the aircraft and retroreflector positions, tropospheric refraction, and sensor biases.

  7. Sensitivity of U.S. summer precipitation to model resolution and convective parameterizations across gray zone resolutions

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Leung, L. Ruby; Zhao, Chun; Hagos, Samson

    2017-03-01

    Simulating summer precipitation is a significant challenge for climate models that rely on cumulus parameterizations to represent moist convection processes. Motivated by recent advances in computing that support very high-resolution modeling, this study aims to systematically evaluate the effects of model resolution and convective parameterizations across the gray zone resolutions. Simulations using the Weather Research and Forecasting model were conducted at grid spacings of 36 km, 12 km, and 4 km for two summers over the conterminous U.S. The convection-permitting simulations at 4 km grid spacing are most skillful in reproducing the observed precipitation spatial distributions and diurnal variability. Notable differences are found between simulations with the traditional Kain-Fritsch (KF) and the scale-aware Grell-Freitas (GF) convection schemes, with the latter more skillful in capturing the nocturnal timing in the Great Plains and North American monsoon regions. The GF scheme also simulates a smoother transition from convective to large-scale precipitation as resolution increases, resulting in reduced sensitivity to model resolution compared to the KF scheme. Nonhydrostatic dynamics has a positive impact on precipitation over complex terrain even at 12 km and 36 km grid spacings. With nudging of the winds toward observations, we show that the conspicuous warm biases in the Southern Great Plains are related to precipitation biases induced by large-scale circulation biases, which are insensitive to model resolution. Overall, notable improvements in simulating summer rainfall and its diurnal variability through convection-permitting modeling and scale-aware parameterizations suggest promising venues for improving climate simulations of water cycle processes.

  8. Impacts of uncertainties in European gridded precipitation observations on regional climate analysis

    PubMed Central

    Gobiet, Andreas

    2016-01-01

    Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497

  9. Impacts of uncertainties in European gridded precipitation observations on regional climate analysis.

    PubMed

    Prein, Andreas F; Gobiet, Andreas

    2017-01-01

    Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments.

  10. Online dynamical downscaling of temperature and precipitation within the iLOVECLIM model (version 1.1)

    NASA Astrophysics Data System (ADS)

    Quiquet, Aurélien; Roche, Didier M.; Dumas, Christophe; Paillard, Didier

    2018-02-01

    This paper presents the inclusion of an online dynamical downscaling of temperature and precipitation within the model of intermediate complexity iLOVECLIM v1.1. We describe the following methodology to generate temperature and precipitation fields on a 40 km × 40 km Cartesian grid of the Northern Hemisphere from the T21 native atmospheric model grid. Our scheme is not grid specific and conserves energy and moisture in the same way as the original climate model. We show that we are able to generate a high-resolution field which presents a spatial variability in better agreement with the observations compared to the standard model. Although the large-scale model biases are not corrected, for selected model parameters, the downscaling can induce a better overall performance compared to the standard version on both the high-resolution grid and on the native grid. Foreseen applications of this new model feature include the improvement of ice sheet model coupling and high-resolution land surface models.

  11. Potential biases in evapotranspiration estimates from Earth system models due to spatial heterogeneity and lateral moisture redistribution

    NASA Astrophysics Data System (ADS)

    Rouholahnejad, E.; Kirchner, J. W.

    2016-12-01

    Evapotranspiration (ET) is a key process in land-climate interactions and affects the dynamics of the atmosphere at local and regional scales. In estimating ET, most earth system models average over considerable sub-grid heterogeneity in land surface properties, precipitation (P), and potential evapotranspiration (PET). This spatial averaging could potentially bias ET estimates, due to the nonlinearities in the underlying relationships. In addition, most earth system models ignore lateral redistribution of water within and between grid cells, which could potentially alter both local and regional ET. Here we present a first attempt to quantify the effects of spatial heterogeneity and lateral redistribution on grid-cell-averaged ET as seen from the atmosphere over heterogeneous landscapes. Using a Budyko framework to express ET as a function of P and PET, we quantify how sub-grid heterogeneity affects average ET at the scale of typical earth system model grid cells. We show that averaging over sub-grid heterogeneity in P and PET, as typical earth system models do, leads to overestimates of average ET. We use a similar approach to quantify how lateral redistribution of water could affect average ET, as seen from the atmosphere. We show that where the aridity index P/PET increases with altitude, gravitationally driven lateral redistribution will increase average ET, implying that models that neglect lateral moisture redistribution will underestimate average ET. In contrast, where the aridity index P/PET decreases with altitude, gravitationally driven lateral redistribution will decrease average ET. This approach yields a simple conceptual framework and mathematical expressions for determining whether, and how much, spatial heterogeneity and lateral redistribution can affect regional ET fluxes as seen from the atmosphere. This analysis provides the basis for quantifying heterogeneity and redistribution effects on ET at regional and continental scales, which will be the focus of future work.
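
    The nonlinearity argument above can be illustrated with a short numerical sketch. The Pike/Turc form ET = P / sqrt(1 + (P/PET)^2) is used here only as a generic Budyko-type curve, and the sub-grid P and PET values are invented; the paper's exact functional form and data differ.

```python
import numpy as np

def budyko_et(p, pet):
    """Pike/Turc Budyko-type curve: ET as a function of P and PET (same units)."""
    return p / np.sqrt(1.0 + (p / pet) ** 2)

rng = np.random.default_rng(0)
# Hypothetical sub-grid cells within one coarse grid cell (mm/yr)
p = rng.uniform(400.0, 1600.0, size=1000)      # precipitation
pet = rng.uniform(600.0, 1400.0, size=1000)    # potential evapotranspiration

et_of_means = budyko_et(p.mean(), pet.mean())  # ET from grid-averaged forcing
mean_of_ets = budyko_et(p, pet).mean()         # mean of sub-grid ET values

print(f"ET(mean P, mean PET) = {et_of_means:.1f} mm/yr")
print(f"mean of sub-grid ET  = {mean_of_ets:.1f} mm/yr")
# The first value is typically larger, i.e. applying the curve to averaged
# forcing overestimates grid-mean ET, consistent with the abstract's argument.
```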

  12. Grid Resolution Study over Operability Space for a Mach 1.7 Low Boom External Compression Inlet

    NASA Technical Reports Server (NTRS)

    Anderson, Bernhard H.

    2014-01-01

    This paper presents a statistical methodology whereby the probability limits associated with CFD grid resolution in inlet flow analysis can be determined, providing quantitative information on the distribution of that error over the specified operability range. The objectives of this investigation are to quantify the effects of both random (accuracy) and systemic (biasing) errors associated with grid resolution in the analysis of the Lockheed Martin Company (LMCO) N+2 Low Boom external compression supersonic inlet. The study covers the entire operability space as defined previously by the High Speed Civil Transport (HSCT) High Speed Research (HSR) program goals. The probability limits in terms of a 95.0% confidence interval on the analysis data were evaluated for four ARP1420 inlet metrics, namely (1) total pressure recovery (PFAIP), (2) radial hub distortion (DPH/P), (3) radial tip distortion (DPT/P), and (4) circumferential distortion (DPC/P). In general, the resulting ±0.95ΔY interval was unacceptably large in comparison to the stated goals of the HSCT program. Therefore, the conclusion was reached that the "standard grid" size was insufficient for this type of analysis. However, in examining the statistical data, it was determined that the CFD analysis results at the outer fringes of the operability space were the determining factor in the measure of statistical uncertainty. Adequate grids are grids that are free of biasing (systemic) errors and exhibit low random (precision) errors in comparison to their operability goals. In order to be 100% certain that the operability goals have indeed been achieved for each of the inlet metrics, the Y ± 0.95ΔY limit must fall inside the stated operability goals. For example, if the operability goal for DPC/P circumferential distortion is ≤0.06, then the forecast Y for DPC/P plus the 95% confidence interval on DPC/P, i.e. Y ± 0.95ΔY, must be less than or equal to 0.06.
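
    The acceptance test described in the final sentences reduces to a one-line check. The numeric inputs below are placeholders, not values from the study; only the ≤0.06 circumferential-distortion goal is taken from the abstract.

```python
# Minimal sketch (hypothetical values) of the goal check described above:
# the forecast metric Y plus its 95% confidence half-width must stay within
# the stated operability goal.
def meets_operability_goal(y_forecast, ci95_halfwidth, goal, lower_is_better=True):
    """Return True if Y +/- the 95% CI half-width stays within the goal."""
    if lower_is_better:
        return y_forecast + ci95_halfwidth <= goal
    return y_forecast - ci95_halfwidth >= goal

# Example: circumferential distortion DPC/P with a goal of <= 0.06
print(meets_operability_goal(y_forecast=0.052, ci95_halfwidth=0.011, goal=0.06))  # False
print(meets_operability_goal(y_forecast=0.045, ci95_halfwidth=0.010, goal=0.06))  # True
```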

  13. The Diagnostics of the External Plasma for the Plasma Rocket

    NASA Technical Reports Server (NTRS)

    Karr, Gerald R.

    1997-01-01

    The plasma rocket is located at NASA Johnson Space Center. To produce a thrust in space, an inert gas is ionized into a plasma and heated in the linear section of a tokamak fusion device to 1×10^4 - 1.16×10^6 K (p = 10^10 - 10^14 /cu cm). The magnetic field used to contain the plasma has a magnitude of 2-10 kilogauss. The plasma plume has a variable thrust and specific impulse. A high temperature retarding potential analyzer (RPA) is being developed to characterize the plasma in the plume and at the edge of the magnetically contained plasma. The RPA measures the energy and density of ions or electrons entering into its solid angle of collection. An oscilloscope displays the ion flux versus the collected current. All measurements are made relative to the facility ground. An RPA is being developed in a process which involves the investigation of several prototypes. The first prototype has been tested on a thermal plasma. The knowledge gained from its development and testing was applied to the development of an RPA for collimated plasma. The prototypes consist of four equally spaced grids and an ion collector. The outermost grid is a ground. The second grid acts as a bias to repel electrons. The third is a variable-voltage ion suppressor. Grid four (the inner grid) acts to repel secondary electrons, being biased equal to the first. Knowledge gained during these two stages is being applied to the development of a high temperature RPA. Testing of this device involves the determination of its output parameters, sensitivity, and responses to a wide range of energies and densities. Each grid will be tested individually by changing only its voltage and observing the output from the RPA. To verify that the RPA is providing proper output, it is compared to the output from a Langmuir or Faraday probe.

  14. Influence of Sub-grid-Scale Isentropic Transports on McRAS Evaluations using ARM-CART SCM Datasets

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Walker, G. K.; Tao, W. K.

    2004-01-01

    In GCM-physics evaluations with the currently available ARM-CART SCM datasets, McRAS produced a very similar character of near-surface errors of simulated temperature and humidity, containing typically warm and moist biases near the surface and cold and dry biases aloft. We argued this must have a common cause, presumably rooted in the model physics. Lack of vertical adjustment of horizontal transport was thought to be a plausible source. Clearly, debarring such a freedom would force the incoming air to diffuse into the grid cell, which would naturally bias the surface air to become warm and moist while the upper air becomes cold and dry, a characteristic feature of McRAS biases. Since the errors were significantly larger in the two winter cases that contain potentially more intense episodes of cold and warm advective transports, this further reaffirmed our argument and provided additional motivation to introduce the corrections. When the horizontal advective transports were suitably modified to allow rising and/or sinking following isentropic pathways of subgrid-scale motions, the outcome was to cool and dry (or warm and moisten) the lower (or upper) levels. Even crude approximations invoking such a correction reduced the temperature and humidity biases considerably. The tests were performed on all the available ARM-CART SCM cases with consistent outcomes. With the isentropic corrections implemented through two different numerical approximations, virtually similar benefits were derived, further confirming the robustness of our inferences. These results suggest the need for isentropic advective transport adjustment in a GCM due to subgrid-scale motions.

  15. Downscaling Aerosols and the Impact of Neglected Subgrid Processes on Direct Aerosol Radiative Forcing for a Representative Global Climate Model Grid Spacing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gustafson, William I.; Qian, Yun; Fast, Jerome D.

    2011-07-13

    Recent improvements to many global climate models include detailed, prognostic aerosol calculations intended to better reproduce the observed climate. However, the trace gas and aerosol fields are treated at the grid-cell scale with no attempt to account for sub-grid impacts on the aerosol fields. This paper begins to quantify the error introduced by the neglected sub-grid variability for the shortwave aerosol radiative forcing for a representative climate model grid spacing of 75 km. An analysis of the value added in downscaling aerosol fields is also presented to give context to the WRF-Chem simulations used for the sub-grid analysis. We found that (1) the impact of neglected sub-grid variability on the aerosol radiative forcing is strongest in regions of complex topography and complicated flow patterns, and (2) scale-induced differences in emissions contribute strongly to the impact of neglected sub-grid processes on the aerosol radiative forcing. These two effects together, when simulated at 75 km vs. 3 km in WRF-Chem, result in an average daytime mean bias of over 30% error in top-of-atmosphere shortwave aerosol radiative forcing for a large percentage of central Mexico during the MILAGRO field campaign.

  16. GridMan: A grid manipulation system

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.; Wang, Zhu

    1992-01-01

    GridMan is an interactive grid manipulation system. It operates on grids to produce new grids which conform to user demands. The input grids are not constrained to come from any particular source. They may be generated by algebraic methods, elliptic methods, hyperbolic methods, parabolic methods, or some combination of methods. These methods are included in the various available structured grid generation codes. These codes perform the basic assembly function for the various elements of the initial grid. For block-structured grids, the assembly can be quite complex due to a large number of block corners, edges, and faces for which various connections and orientations must be properly identified. The grid generation codes are distinguished among themselves by their balance between interactive and automatic actions and by their modest variations in control. The basic form of GridMan provides a much more substantial level of grid control and will take its input from any of the structured grid generation codes. The communication link to the outside codes is a data file which contains the grid or a section of the grid.

  17. A global data set of soil hydraulic properties and sub-grid variability of soil water retention and hydraulic conductivity curves

    NASA Astrophysics Data System (ADS)

    Montzka, Carsten; Herbst, Michael; Weihermüller, Lutz; Verhoef, Anne; Vereecken, Harry

    2017-07-01

    Agroecosystem models, regional and global climate models, and numerical weather prediction models require adequate parameterization of soil hydraulic properties. These properties are fundamental for describing and predicting water and energy exchange processes at the transition zone between solid earth and atmosphere, and regulate evapotranspiration, infiltration and runoff generation. Hydraulic parameters describing the soil water retention (WRC) and hydraulic conductivity (HCC) curves are typically derived from soil texture via pedotransfer functions (PTFs). Resampling of those parameters for specific model grids is typically performed by different aggregation approaches such as spatial averaging and the use of dominant textural properties or soil classes. These aggregation approaches introduce uncertainty, bias and parameter inconsistencies throughout spatial scales due to nonlinear relationships between hydraulic parameters and soil texture. Therefore, we present a method to scale hydraulic parameters to individual model grids and provide a global data set that overcomes the mentioned problems. The approach is based on Miller-Miller scaling in the relaxed form by Warrick, which fits the parameters of the WRC through all sub-grid WRCs to provide an effective parameterization for the grid cell at model resolution; at the same time it preserves the information of sub-grid variability of the water retention curve by deriving local scaling parameters. Based on the Mualem-van Genuchten approach we also derive the unsaturated hydraulic conductivity from the water retention functions, thereby assuming that the local parameters are also valid for this function. In addition, via the Warrick scaling parameter λ, information on global sub-grid scaling variance is given that enables modellers to improve dynamical downscaling of (regional) climate models or to perturb hydraulic parameters for model ensemble output generation. The present analysis is based on the ROSETTA PTF of Schaap et al. (2001) applied to the SoilGrids1km data set of Hengl et al. (2014). The example data set is provided at a global resolution of 0.25° at https://doi.org/10.1594/PANGAEA.870605.
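
    For readers unfamiliar with the curves being scaled, the sketch below evaluates the standard Mualem-van Genuchten water retention and hydraulic conductivity functions. The parameter values are illustrative placeholders of the kind a pedotransfer function such as ROSETTA would supply, not values from the data set described above.

```python
import numpy as np

def vg_retention(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve theta(h); h is suction head (positive, cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def mualem_conductivity(h, theta_r, theta_s, alpha, n, k_s, l=0.5):
    """Unsaturated hydraulic conductivity K(h) from the Mualem-van Genuchten model."""
    m = 1.0 - 1.0 / n
    theta = vg_retention(h, theta_r, theta_s, alpha, n)
    se = (theta - theta_r) / (theta_s - theta_r)          # effective saturation
    return k_s * se ** l * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

h = np.logspace(0, 4, 5)                                  # suction heads, cm
print(vg_retention(h, theta_r=0.05, theta_s=0.43, alpha=0.04, n=1.6))
print(mualem_conductivity(h, 0.05, 0.43, 0.04, 1.6, k_s=30.0))   # cm/day
```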

  18. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    The application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems is discussed. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line-, or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be procured through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.

  19. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    This paper presents the application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line- or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to those of several single-grid methods. The results show that much faster convergence can be procured through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.
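
    Since the two records above describe the multi-grid idea only in words, the following sketch shows a linear correction-scheme V-cycle on a 1-D Poisson model problem with Gauss-Seidel smoothing. It is a generic illustration of the multi-grid principle, not the FAS-FMG Navier-Stokes solver evaluated in the paper.

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    """Gauss-Seidel relaxation sweeps for -u'' = f with u = 0 at both ends."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    u = smooth(u, f, h)                       # pre-smoothing
    if len(u) <= 3:
        return smooth(u, f, h, sweeps=20)     # coarsest grid: just relax
    r = residual(u, f, h)
    nc = (len(u) + 1) // 2
    rc = np.zeros(nc)                         # restriction by full weighting
    for j in range(1, nc - 1):
        rc[j] = 0.25 * r[2 * j - 1] + 0.5 * r[2 * j] + 0.25 * r[2 * j + 1]
    ec = v_cycle(np.zeros(nc), rc, 2.0 * h)   # coarse-grid correction
    e = np.zeros_like(u)                      # prolongation by linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)                # post-smoothing

n = 129                                       # 2**k + 1 points keeps the grids nested
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))  # error drops toward the discretization level
```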

  20. Continuously amplified warming in the Alaskan Arctic: Implications for estimating global warming hiatus: SPATIAL COVERAGE AND BIAS IN TREND

    DOE PAGES

    Wang, Kang; Zhang, Tingjun; Zhang, Xiangdong; ...

    2017-09-13

    Historically, in-situ measurements have been notoriously sparse over the Arctic. As a consequence, the existing gridded data of Surface Air Temperature (SAT) may have large biases in estimating the warming trend in this region. Using data from an expanded monitoring network with 31 stations in the Alaskan Arctic, we demonstrate that the SAT has increased by 2.19 °C in this region, or at a rate of 0.23 °C/decade during 1921-2015. Meanwhile, we found that the SAT warmed at 0.71 °C/decade over 1998-2015, which is two to three times faster than the rate established from the gridded datasets. Focusing on the "hiatus" period 1998-2012 as identified by the Intergovernmental Panel on Climate Change (IPCC) report, the SAT has increased at 0.45 °C/decade, which captures more than 90% of the regional trend for 1951-2012. We suggest that sparse in-situ measurements are responsible for underestimation of the SAT change in the gridded datasets. It is likely that enhanced climate warming may also have happened in the other regions of the Arctic since the late 1990s but remained undetected because of incomplete observational coverage.

  1. Continuously amplified warming in the Alaskan Arctic: Implications for estimating global warming hiatus: SPATIAL COVERAGE AND BIAS IN TREND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Kang; Zhang, Tingjun; Zhang, Xiangdong

    Historically, in-situ measurements have been notoriously sparse over the Arctic. As a consequence, the existing gridded data of Surface Air Temperature (SAT) may have large biases in estimating the warming trend in this region. Using data from an expanded monitoring network with 31 stations in the Alaskan Arctic, we demonstrate that the SAT has increased by 2.19 °C in this region, or at a rate of 0.23 °C/decade during 1921-2015. Meanwhile, we found that the SAT warmed at 0.71 °C/decade over 1998-2015, which is two to three times faster than the rate established from the gridded datasets. Focusing on the "hiatus" period 1998-2012 as identified by the Intergovernmental Panel on Climate Change (IPCC) report, the SAT has increased at 0.45 °C/decade, which captures more than 90% of the regional trend for 1951-2012. We suggest that sparse in-situ measurements are responsible for underestimation of the SAT change in the gridded datasets. It is likely that enhanced climate warming may also have happened in the other regions of the Arctic since the late 1990s but remained undetected because of incomplete observational coverage.
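
    As a point of reference for the per-decade rates quoted above, such trend estimates reduce to an ordinary least-squares slope fitted to annual-mean SAT. The sketch below uses synthetic data, not the 31-station network analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1998, 2016)                   # 1998-2015, as in the abstract
true_trend = 0.071                              # deg C per year (0.71 deg C/decade)
sat = -12.0 + true_trend * (years - years[0]) + rng.normal(0.0, 0.8, years.size)

slope_per_year = np.polyfit(years, sat, 1)[0]   # OLS slope of annual-mean SAT vs. year
print(f"estimated trend = {10.0 * slope_per_year:.2f} deg C/decade")
```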

  2. Reassessing biases and other uncertainties in sea surface temperature observations measured in situ since 1850: 2. Biases and homogenization

    NASA Astrophysics Data System (ADS)

    Kennedy, J. J.; Rayner, N. A.; Smith, R. O.; Parker, D. E.; Saunby, M.

    2011-07-01

    Changes in instrumentation and data availability have caused time-varying biases in estimates of global and regional average sea surface temperature. The sizes of the biases arising from these changes are estimated and their uncertainties evaluated. The estimated biases and their associated uncertainties are largest during the period immediately following the Second World War, reflecting the rapid and incompletely documented changes in shipping and data availability at the time. Adjustments have been applied to reduce these effects in gridded data sets of sea surface temperature and the results are presented as a set of interchangeable realizations. Uncertainties of estimated trends in global and regional average sea surface temperature due to bias adjustments since the Second World War are found to be larger than uncertainties arising from the choice of analysis technique, indicating that this is an important source of uncertainty in analyses of historical sea surface temperatures. Despite this, trends over the twentieth century remain qualitatively consistent.

  3. Intercomparison of Downscaling Methods on Hydrological Impact for Earth System Model of NE United States

    NASA Astrophysics Data System (ADS)

    Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.

    2012-12-01

    Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM model outputs through bias correction using novel techniques. For example, the Climate Impact Research at Potsdam Institute developed a series of bias-corrected GCMs towards the next generation climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES Model output (daily at 0.5° × 0.5°) to the 3′ × 3′ (longitude × latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) Bilinear Interpolation, 2) Daily bias-corrected spatial downscaling (D-BCSD) with Gridded Meteorological Datasets (developed by Abatzoglou, 2011), 3) Monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climate Research Unit) and 4) Dynamic Downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables of different downscaling methods against observational datasets was carried out for assessment of the downscaled climate model outputs. The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) for use as inputs to the Water Balance Model (WBMPlus, Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US for a 100-year period in the 21st century were also assessed. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for the daily discharge and monthly stream flow simulation. However, Dynamic Downscaling will provide important complements to the statistical approaches tested.
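
    The bias-corrected spatial downscaling variants listed above (D-BCSD, M-BCSD) rest on a quantile-mapping bias-correction step. The sketch below shows a generic empirical quantile mapping on synthetic data; it is not the specific implementations compared in the study.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Map model values onto the observed distribution via matched quantiles."""
    # Empirical CDF position of each future model value within the
    # historical model distribution...
    quantiles = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    quantiles = np.clip(quantiles, 0.0, 1.0)
    # ...then read off the observed value at the same quantile.
    return np.quantile(obs_hist, quantiles)

rng = np.random.default_rng(2)
obs = rng.gamma(2.0, 3.0, 5000)            # "observed" daily precipitation (mm)
gcm_hist = rng.gamma(2.0, 4.5, 5000)       # biased GCM historical simulation
gcm_future = rng.gamma(2.2, 4.5, 5000)     # GCM projection to be corrected
corrected = quantile_map(gcm_hist, obs, gcm_future)
print(obs.mean(), gcm_future.mean(), corrected.mean())
```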

  4. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.

  5. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements.

    PubMed

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-10-27

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm.

  6. BeiDou Geostationary Satellite Code Bias Modeling Using Fengyun-3C Onboard Measurements

    PubMed Central

    Jiang, Kecai; Li, Min; Zhao, Qile; Li, Wenwen; Guo, Xiang

    2017-01-01

    This study validated and investigated elevation- and frequency-dependent systematic biases observed in ground-based code measurements of the Chinese BeiDou navigation satellite system, using the onboard BeiDou code measurement data from the Chinese meteorological satellite Fengyun-3C. Particularly for geostationary earth orbit satellites, sky-view coverage can be achieved over the entire elevation and azimuth angle ranges with the available onboard tracking data, which is more favorable to modeling code biases. Apart from the BeiDou-satellite-induced biases, the onboard BeiDou code multipath effects also indicate pronounced near-field systematic biases that depend only on signal frequency and the line-of-sight directions. To correct these biases, we developed a proposed code correction model by estimating the BeiDou-satellite-induced biases as linear piece-wise functions in different satellite groups and the near-field systematic biases in a grid approach. To validate the code bias model, we carried out orbit determination using single-frequency BeiDou data with and without code bias corrections applied. Orbit precision statistics indicate that those code biases can seriously degrade single-frequency orbit determination. After the correction model was applied, the orbit position errors, 3D root mean square, were reduced from 150.6 to 56.3 cm. PMID:29076998
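
    To make the correction structure concrete, the sketch below applies an elevation-dependent piecewise-linear satellite-induced bias plus a near-field bias looked up on an azimuth/elevation grid. All node values and grid contents are invented placeholders; the paper estimates these quantities from the Fengyun-3C onboard data.

```python
import numpy as np

# Piecewise-linear bias vs. elevation (deg -> metres) for one satellite group
elev_nodes = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])
bias_nodes = np.array([0.80, 0.55, 0.30, 0.15, 0.05, 0.00, 0.00])   # placeholder values

# Near-field bias stored on a coarse azimuth x elevation grid (placeholder values)
az_grid = np.linspace(0.0, 360.0, 37)       # 10-deg azimuth bins
el_grid = np.linspace(0.0, 90.0, 10)        # 10-deg elevation bins
near_field = 0.02 * np.ones((az_grid.size, el_grid.size))

def corrected_pseudorange(p_obs, azimuth, elevation):
    """Subtract the satellite-induced and near-field code biases (metres)."""
    sat_bias = np.interp(elevation, elev_nodes, bias_nodes)
    i = np.argmin(np.abs(az_grid - azimuth))          # nearest grid node lookup
    j = np.argmin(np.abs(el_grid - elevation))
    return p_obs - sat_bias - near_field[i, j]

print(corrected_pseudorange(p_obs=21_500_000.123, azimuth=134.0, elevation=27.5))
```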

  7. Grid-Sphere Electrodes for Contact with Ionospheric Plasma

    NASA Technical Reports Server (NTRS)

    Stone, Nobie H.; Poe, Garrett D.

    2010-01-01

    Grid-sphere electrodes have been proposed for use on the positively biased end of electrodynamic space tethers. A grid-sphere electrode is fabricated by embedding a wire mesh in a thin film from which a spherical balloon is formed. The grid-sphere electrode would be deployed from compact stowage by inflating the balloon in space. The thin-film material used to inflate the balloon is formulated to vaporize when exposed to the space environment. This would leave the bare metallic spherical grid electrode attached to the tether, which would present a small cross-sectional area (essentially, the geometric wire shadow area only) to incident neutral atoms and molecules. Most of the neutral particles, which produce dynamic drag when they impact a surface, would pass unimpeded through the open grid spaces. However, partly as a result of the buildup of a space charge inside the grid-sphere and partly as a result of magnetic field effects, the electrode would act almost like a solid surface with respect to the flux of electrons. The net result would be that grid-sphere electrodes would introduce minimal aerodynamic drag, yet have effective electrical-contact surface areas large enough to collect multiampere currents from the ionospheric plasma that are needed for operation of electrodynamic tethers. The vaporizable-balloon concept could also be applied to the deployment of large radio antennas in outer space.

  8. Unraveling the Hydrology of the Glacierized Kaidu Basin by Integrating Multisource Data in the Tianshan Mountains, Northwestern China

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Jun; Shen, Yanjun; Fink, Manfred; Kralisch, Sven; Brenning, Alexander

    2018-01-01

    Understanding the water balance, especially as it relates to the distribution of runoff components, is crucial for water resource management and coping with the impacts of climate change. However, hydrological processes are poorly known in mountainous regions due to data scarcity and the complex dynamics of snow and glaciers. This study aims to provide a quantitative comparison of gridded precipitation products in the Tianshan Mountains, located in Central Asia, in order to further understand the mountain hydrology and distribution of runoff components in the glacierized Kaidu Basin. We found that gridded precipitation products are affected by inconsistent biases based on a spatiotemporal comparison with the nearest weather stations and should be evaluated with caution before using them as boundary conditions in hydrological modeling. Although uncertainties remain in this data-scarce basin, driven by field survey data and bias-corrected gridded data sets (ERA-Interim and APHRODITE), the water balance and distribution of runoff components can be plausibly quantified based on the distributed hydrological model (J2000). We further examined parameter sensitivity and uncertainty with respect to both simulated streamflow and different runoff components based on an ensemble of simulations. This study demonstrated the possibility of integrating gridded products in hydrological modeling. The methodology used can be important for model applications and design in other data-scarce mountainous regions. The model-based simulation quantified the water balance and how the water resources are partitioned throughout the year in Tianshan Mountain basins, although the uncertainties present in this study result in important limitations.

  9. Hybrid deterministic-stochastic modeling of x-ray beam bowtie filter scatter on a CT system.

    PubMed

    Liu, Xin; Hsieh, Jiang

    2015-01-01

    Knowledge of the scatter generated by the bowtie filter (i.e. x-ray beam compensator) is crucial for providing artifact-free images on CT scanners. Our approach is to use a hybrid deterministic-stochastic simulation to estimate the scatter level generated by a bowtie filter made of a material with low atomic number. First, major components of the CT system, such as the source, flat filter, bowtie filter, and body phantom, are built into a 3D model. The scattered photon fluence and the primary transmitted photon fluence are simulated by MCNP - a Monte Carlo simulation toolkit. The rejection of scattered photons by the post-patient collimator (anti-scatter grid) is simulated with an analytical formula. The biased sinogram is created by superimposing the scatter signal generated by the simulation onto the primary x-ray beam signal. Finally, images with artifacts are reconstructed with the biased signal. The effect of anti-scatter grid height on scatter rejection is also discussed and demonstrated.

  10. Spatiotemporal evaluation of EMEP4UK-WRF v4.3 atmospheric chemistry transport simulations of health-related metrics for NO2, O3, PM10, and PM2.5 for 2001-2010

    NASA Astrophysics Data System (ADS)

    Lin, Chun; Heal, Mathew R.; Vieno, Massimo; MacKenzie, Ian A.; Armstrong, Ben G.; Butland, Barbara K.; Milojevic, Ai; Chalabi, Zaid; Atkinson, Richard W.; Stevenson, David S.; Doherty, Ruth M.; Wilkinson, Paul

    2017-04-01

    This study was motivated by the use in air pollution epidemiology and health burden assessment of data simulated at 5 km × 5 km horizontal resolution by the EMEP4UK-WRF v4.3 atmospheric chemistry transport model. Thus the focus of the model-measurement comparison statistics presented here was on the health-relevant metrics of annual and daily means of NO2, O3, PM2.5, and PM10 (daily maximum 8 h running mean for O3). The comparison was temporally and spatially comprehensive, covering a 10-year period (2 years for PM2.5) and all non-roadside measurement data from the UK national reference monitor network, which applies consistent operational and QA/QC procedures for each pollutant (44, 47, 24, and 30 sites for NO2, O3, PM2.5, and PM10, respectively). Two important statistics highlighted in the literature for evaluation of air quality model output against policy (and hence health)-relevant standards - correlation and bias - together with root mean square error, were evaluated by site type, year, month, and day-of-week. Model-measurement statistics were generally better than, or comparable to, values that allow for realistic magnitudes of measurement uncertainties. Temporal correlations of daily concentrations were good for O3, NO2, and PM2.5 at both rural and urban background sites (median values of r across sites in the range 0.70-0.76 for O3 and NO2, and 0.65-0.69 for PM2.5), but poorer for PM10 (0.47-0.50). Bias differed between environments, with generally less bias at rural background sites (median normalized mean bias (NMB) values for daily O3 and NO2 of 8 and 11 %, respectively). At urban background sites there was a negative model bias for NO2 (median NMB = -29 %) and PM2.5 (-26 %) and a positive model bias for O3 (26 %). The directions of these biases are consistent with expectations of the effects of averaging primary emissions across the 5 km × 5 km model grid in urban areas, compared with monitor locations that are more influenced by these emissions (e.g. closer to traffic sources) than the grid average. The biases are also indicative of potential underestimations of primary NOx and PM emissions in the model, and, for PM, with known omissions in the model of some PM components, e.g. some components of wind-blown dust. There were instances of monthly and weekday/weekend variations in the extent of model-measurement bias. Overall, the greater uniformity in temporal correlation than in bias is strongly indicative that the main driver of model-measurement differences (aside from grid versus monitor spatial representativity) was inaccuracy of model emissions - both in annual totals and in the monthly and day-of-week temporal factors applied in the model to the totals - rather than simulation of atmospheric chemistry and transport processes. Since, in general for epidemiology, capturing correlation is more important than bias, the detailed analyses presented here support the use of data from this model framework in air pollution epidemiology.
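
    The three statistics highlighted above are straightforward to compute; a generic sketch on synthetic daily concentrations (not the EMEP4UK-WRF output or the monitoring network data) follows.

```python
import numpy as np

def evaluation_stats(model, obs):
    """Pearson correlation r, RMSE, and normalised mean bias (as a fraction)."""
    r = np.corrcoef(model, obs)[0, 1]
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    nmb = np.sum(model - obs) / np.sum(obs)
    return r, rmse, nmb

rng = np.random.default_rng(3)
obs = rng.gamma(4.0, 5.0, 365)                      # e.g. synthetic daily NO2 (ug/m3)
model = 0.7 * obs + rng.normal(0.0, 5.0, 365)       # synthetic low-biased model series
r, rmse, nmb = evaluation_stats(model, obs)
print(f"r = {r:.2f}, RMSE = {rmse:.1f} ug/m3, NMB = {100 * nmb:.0f}%")
```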

  11. Material Development of Faraday Cup Grids for the Solar Probe Plus Mission

    NASA Technical Reports Server (NTRS)

    Volz, M. P.; Mazuruk, K.; Wright, K. H.; Cirtain, J. W.; Lee, R.; Kasper, J. C.

    2011-01-01

    The Solar Probe Plus mission will launch a spacecraft to the Sun to study its outer atmosphere. One of the instruments on board will be a Faraday Cup (FC) sensor. The FC will determine solar wind properties by measuring the current produced by ions striking a metal collector plate. It will be directly exposed to the Sun and will be subject to the temperature and radiation environment that exists within 10 solar radii. Conducting grids within the FC are biased up to 10 kV and are used to selectively transmit particles based on their energy to charge ratio. We report on the development of SiC grids. Tests were done on nitrogen-doped SiC starting disks obtained from several vendors, including annealing under vacuum at 1400 °C and measurement of their electrical properties. SiC grids were manufactured using a photolithographic and plasma-etching process. The grids were incorporated into a prototype FC and tested in a simulated solar wind chamber. The energy cutoffs were measured for both proton and electron fluxes and met the anticipated sensor requirements.

  12. Structured background grids for generation of unstructured grids by advancing front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1991-01-01

    A new method of background grid construction is introduced for generation of unstructured tetrahedral grids using the advancing-front technique. Unlike the conventional triangular/tetrahedral background grids which are difficult to construct and usually inadequate in performance, the new method exploits the simplicity of uniform Cartesian meshes and provides grids of better quality. The approach is analogous to solving a steady-state heat conduction problem with discrete heat sources. The spacing parameters of grid points are distributed over the nodes of a Cartesian background grid by interpolating from a few prescribed sources and solving a Poisson equation. To increase the control over the grid point distribution, a directional clustering approach is used. The new method is convenient to use and provides better grid quality and flexibility. Sample results are presented to demonstrate the power of the method.
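
    A schematic of the heat-conduction analogy can be written in a few lines: prescribed spacing sources are held fixed while a Laplace/Poisson problem is relaxed over a uniform Cartesian background grid. The grid size, source locations, and spacing values below are hypothetical, and plain Jacobi relaxation is used purely for illustration rather than whatever solver the referenced method employs.

```python
import numpy as np

nx, ny = 41, 41
spacing = np.full((nx, ny), 1.0)               # initial (background) spacing field
# Prescribed sources: (i, j, desired local spacing) -- hypothetical values
sources = [(10, 10, 0.05), (30, 25, 0.20)]

for _ in range(5000):
    # Jacobi sweep of the interior (discrete Laplace operator, zero right-hand side)
    spacing[1:-1, 1:-1] = 0.25 * (spacing[:-2, 1:-1] + spacing[2:, 1:-1] +
                                  spacing[1:-1, :-2] + spacing[1:-1, 2:])
    for i, j, s in sources:                    # re-impose the discrete "heat sources"
        spacing[i, j] = s

# Spacing smoothly interpolates between the sources and the far-field value
print(spacing[10, 10], spacing[20, 18], spacing[30, 25])
```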

  13. A numerical study of hypersonic stagnation heat transfer predictions at a coordinate singularity

    NASA Technical Reports Server (NTRS)

    Grasso, Francesco; Gnoffo, Peter A.

    1990-01-01

    The problem of grid induced errors associated with a coordinate singularity on heating predictions in the stagnation region of a three-dimensional body in hypersonic flow is examined. The test problem is for Mach 10 flow over an Aeroassist Flight Experiment configuration. This configuration is composed of an elliptic nose, a raked elliptic cone, and a circular shoulder. Irregularities in the heating predictions in the vicinity of the coordinate singularity, located at the axis of the elliptic nose near the stagnation point, are examined with respect to grid refinement and grid restructuring. The algorithm is derived using a finite-volume formulation. An upwind-biased total-variation diminishing scheme is employed for the inviscid flux contribution, and central differences are used for the viscous terms.

  14. Small-mammal density estimation: A field comparison of grid-based vs. web-based density estimators

    USGS Publications Warehouse

    Parmenter, R.R.; Yates, Terry L.; Anderson, D.R.; Burnham, K.P.; Dunnum, J.L.; Franklin, A.B.; Friggens, M.T.; Lubow, B.C.; Miller, M.; Olson, G.S.; Parmenter, Cheryl A.; Pollard, J.; Rexstad, E.; Shenk, T.M.; Stanley, T.R.; White, Gary C.

    2003-01-01

    Statistical models for estimating absolute densities of field populations of animals have been widely used over the last century in both scientific studies and wildlife management programs. To date, two general classes of density estimation models have been developed: models that use data sets from capture–recapture or removal sampling techniques (often derived from trapping grids) from which separate estimates of population size (N̂) and effective sampling area (Â) are used to calculate density (D̂ = N̂/Â); and models applicable to sampling regimes using distance-sampling theory (typically transect lines or trapping webs) to estimate detection functions and densities directly from the distance data. However, few studies have evaluated these respective models for accuracy, precision, and bias on known field populations, and no studies have been conducted that compare the two approaches under controlled field conditions. In this study, we evaluated both classes of density estimators on known densities of enclosed rodent populations. Test data sets (n = 11) were developed using nine rodent species from capture–recapture live-trapping on both trapping grids and trapping webs in four replicate 4.2-ha enclosures on the Sevilleta National Wildlife Refuge in central New Mexico, USA. Additional “saturation” trapping efforts resulted in an enumeration of the rodent populations in each enclosure, allowing the computation of true densities. Density estimates (D̂) were calculated using program CAPTURE for the grid data sets and program DISTANCE for the web data sets, and these results were compared to the known true densities (D) to evaluate each model's relative mean square error, accuracy, precision, and bias. In addition, we evaluated a variety of approaches to each data set's analysis by having a group of independent expert analysts calculate their best density estimates without a priori knowledge of the true densities; this “blind” test allowed us to evaluate the influence of expertise and experience in calculating density estimates in comparison to simply using default values in programs CAPTURE and DISTANCE. While the rodent sample sizes were considerably smaller than the recommended minimum for good model results, we found that several models performed well empirically, including the web-based uniform and half-normal models in program DISTANCE, and the grid-based models Mb and Mbh in program CAPTURE (with Â adjusted by species-specific full mean maximum distance moved (MMDM) values). These models produced accurate D̂ values (with 95% confidence intervals that included the true D values) and exhibited acceptable bias but poor precision. However, in linear regression analyses comparing each model's D̂ values to the true D values over the range of observed test densities, only the web-based uniform model exhibited a regression slope near 1.0; all other models showed substantial slope deviations, indicating biased estimates at higher or lower density values. In addition, the grid-based D̂ analyses using full MMDM values for Ŵ area adjustments required a number of theoretical assumptions of uncertain validity, and we therefore viewed their empirical successes with caution. Finally, density estimates from the independent analysts were highly variable, but estimates from web-based approaches had smaller mean square errors and better achieved confidence-interval coverage of D than did grid-based approaches.
Our results support the contention that web-based approaches for density estimation of small-mammal populations are both theoretically and empirically superior to grid-based approaches, even when sample size is far less than often recommended. In view of the increasing need for standardized environmental measures for comparisons among ecosystems and through time, analytical models based on distance sampling appear to offer accurate density estimation approaches for research studies involving small-mammal abundances.
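
    For the grid-based estimator referred to above, density follows from D̂ = N̂/Â once the effective area Â is buffered. The sketch below uses hypothetical numbers and takes the buffer strip width as the full MMDM, following the abstract.

```python
def grid_density(n_hat, grid_side_m, mmdm_m):
    """Density (animals/ha) from abundance and an MMDM-buffered trapping-grid area."""
    effective_side = grid_side_m + 2.0 * mmdm_m          # buffer strip on each side
    area_ha = effective_side ** 2 / 10_000.0             # m^2 -> hectares
    return n_hat / area_ha

# e.g. 42 animals estimated on a 100 m x 100 m grid with MMDM = 30 m (hypothetical)
print(grid_density(n_hat=42.0, grid_side_m=100.0, mmdm_m=30.0))   # ~16.4 animals/ha
```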

  15. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    NASA Astrophysics Data System (ADS)

    Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.

    2013-04-01

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.

  16. A gating grid driver for time projection chambers

    NASA Astrophysics Data System (ADS)

    Tangwancharoen, S.; Lynch, W. G.; Barney, J.; Estee, J.; Shane, R.; Tsang, M. B.; Zhang, Y.; Isobe, T.; Kurata-Nishimura, M.; Murakami, T.; Xiao, Z. G.; Zhang, Y. F.; SπRIT Collaboration

    2017-05-01

    A simple but novel driver system has been developed to operate the wire gating grid of a Time Projection Chamber (TPC). This system connects the wires of the gating grid to its driver via low impedance transmission lines. When the gating grid is open, all wires have the same voltage allowing drift electrons, produced by the ionization of the detector gas molecules, to pass through to the anode wires. When the grid is closed, the wires have alternating higher and lower voltages causing the drift electrons to terminate at the more positive wires. Rapid opening of the gating grid with low pickup noise is achieved by quickly shorting the positive and negative wires to attain the average bias potential with N-type and P-type MOSFET switches. The circuit analysis and simulation software SPICE shows that the driver restores the gating grid voltage to 90% of the opening voltage in less than 0.20 μs, for small values of the termination resistors. When tested in the experimental environment of a time projection chamber larger termination resistors were chosen so that the driver opens the gating grid in 0.35 μs. In each case, opening time is basically characterized by the RC constant given by the resistance of the switches and terminating resistors and the capacitance of the gating grid and its transmission line. By adding a second pair of N-type and P-type MOSFET switches, the gating grid is closed by restoring 99% of the original charges to the wires within 3 μs.
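
    The quoted RC characterization can be turned into a back-of-the-envelope estimate. The component values below are illustrative placeholders, not the SPICE model parameters from the paper.

```python
import math

r_switch, r_term = 5.0, 10.0       # ohms (placeholder values)
c_grid, c_line = 5e-9, 3e-9        # farads (placeholder values)

tau = (r_switch + r_term) * (c_grid + c_line)   # RC time constant
t_90 = -tau * math.log(1.0 - 0.90)              # time to reach 90% of the opening voltage
print(f"tau = {tau * 1e9:.0f} ns, 90% recovery in {t_90 * 1e6:.2f} us")
```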

  17. Well-tempered metadynamics converges asymptotically.

    PubMed

    Dama, James F; Parrinello, Michele; Voth, Gregory A

    2014-06-20

    Metadynamics is a versatile and capable enhanced sampling method for the computational study of soft matter materials and biomolecular systems. However, over a decade of application and several attempts to give this adaptive umbrella sampling method a firm theoretical grounding prove that a rigorous convergence analysis is elusive. This Letter describes such an analysis, demonstrating that well-tempered metadynamics converges to the final state it was designed to reach and, therefore, that the simple formulas currently used to interpret the final converged state of tempered metadynamics are correct and exact. The results do not rely on any assumption that the collective variable dynamics are effectively Brownian or any idealizations of the hill deposition function; instead, they suggest new, more permissive criteria for the method to be well behaved. The results apply to tempered metadynamics with or without adaptive Gaussians or boundary corrections and whether the bias is stored approximately on a grid or exactly.

  18. Well-Tempered Metadynamics Converges Asymptotically

    NASA Astrophysics Data System (ADS)

    Dama, James F.; Parrinello, Michele; Voth, Gregory A.

    2014-06-01

    Metadynamics is a versatile and capable enhanced sampling method for the computational study of soft matter materials and biomolecular systems. However, over a decade of application and several attempts to give this adaptive umbrella sampling method a firm theoretical grounding prove that a rigorous convergence analysis is elusive. This Letter describes such an analysis, demonstrating that well-tempered metadynamics converges to the final state it was designed to reach and, therefore, that the simple formulas currently used to interpret the final converged state of tempered metadynamics are correct and exact. The results do not rely on any assumption that the collective variable dynamics are effectively Brownian or any idealizations of the hill deposition function; instead, they suggest new, more permissive criteria for the method to be well behaved. The results apply to tempered metadynamics with or without adaptive Gaussians or boundary corrections and whether the bias is stored approximately on a grid or exactly.
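
    The tempered hill-deposition rule discussed in the Letter can be sketched for a 1-D toy system with the bias stored on a grid. Everything below (double-well potential, overdamped Langevin dynamics, parameter values) is an illustrative setup of well-tempered metadynamics, not the Letter's analysis, which is analytical.

```python
import numpy as np

rng = np.random.default_rng(4)
kT, delta_T = 1.0, 9.0                 # thermal energy and bias "Delta T" (energy units)
w0, sigma = 0.2, 0.1                   # initial hill height and hill width
grid = np.linspace(-2.0, 2.0, 401)     # grid on which the bias is stored
bias = np.zeros_like(grid)
grad_bias = np.zeros_like(grid)

s, dt = -1.0, 1e-3                     # collective variable and time step
for step in range(200_000):
    # Overdamped Langevin step on the double-well potential (s^2 - 1)^2 plus the bias
    force = -(4.0 * s * (s ** 2 - 1.0) + np.interp(s, grid, grad_bias))
    s += force * dt + np.sqrt(2.0 * kT * dt) * rng.normal()
    if step % 500 == 0:                # well-tempered deposition: height decays with bias
        height = w0 * np.exp(-np.interp(s, grid, bias) / delta_T)
        bias += height * np.exp(-0.5 * ((grid - s) / sigma) ** 2)
        grad_bias = np.gradient(bias, grid)

# In the long-time limit the bias tends to -delta_T / (delta_T + kT) times the free
# energy, so F(s) can be read off (up to an additive constant) as:
free_energy = -(1.0 + kT / delta_T) * bias
barrier = free_energy[np.argmin(np.abs(grid))] - free_energy[np.argmin(np.abs(grid + 1.0))]
print(f"estimated barrier ~ {barrier:.2f} (true value 1.0 for this toy potential)")
```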

  19. On coupling fluid plasma and kinetic neutral physics models

    DOE PAGES

    Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...

    2017-03-01

    The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step and methods that better precondition the coupled system are under investigation.

  20. Method of grid generation

    DOEpatents

    Barnette, Daniel W.

    2002-01-01

    The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

  1. Identifying the values and preferences of prosthetic users: a case study series using the repertory grid technique.

    PubMed

    Schaffalitzky, Elisabeth; NiMhurchadha, Sinead; Gallagher, Pamela; Hofkamp, Susan; MacLachlan, Malcolm; Wegener, Stephen T

    2009-06-01

    The matching of prosthetic devices to the needs of the individual is a challenge for providers and patients. The aims of this study are to explore the values and preferences that prosthetic users have of their prosthetic devices; to investigate users' perceptions of alternative prosthetic options and to demonstrate a novel method for exploring the values and preferences of prosthetic users. This study describes four case studies of upper limb and lower limb high tech and conventional prosthetic users. Participants were interviewed using the repertory grid technique (RGT), a qualitative technique to explore individual values and preferences regarding specific choices and events. The participants generated distinctive patterns of personal constructs and ratings regarding prosthetic use and different prosthetic options available. The RGT produced a unique profile of preferences regarding prosthetic technologies for each participant. User choice is an important factor when matching prosthetic technology to the user. The consumer's values regarding different prosthetic options are likely to be a critical factor in prosthetic acceptance and ultimate quality of life. The RGT offers a structured method of exploring these attitudes and values without imposing researcher or practitioner bias and identifies personalized dimensions for providers and users to evaluate the individuals' preferences in prosthetic technology.

  2. A variational assimilation method for satellite and conventional data: Development of basic model for diagnosis of cyclone systems

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.; Ochs, H. T., III; Kidder, S. Q.; Scott, R. W.; Chen, J.; Isard, D.; Chance, B.

    1986-01-01

    A three-dimensional diagnostic model for the assimilation of satellite and conventional meteorological data is developed with the variational method of undetermined multipliers. Gridded fields of data of different types, quality, locations, and measurement sources are weighted according to measurement accuracy and merged using least squares criteria so that the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation are satisfied. The model is used to compare multivariate variational objective analyses with and without satellite data with initial analyses and the observations through criteria that were determined by the dynamical constraints, the observations, and pattern recognition. It is also shown that the diagnoses of local tendencies of the horizontal velocity components are in good agreement with the observed patterns and tendencies calculated with unadjusted data. In addition, it is found that the day-night differences in TOVS biases are statistically significant (95% confidence) at most levels. Also developed is a hybrid nonlinear sigma vertical coordinate that eliminates hydrostatic truncation error in the middle and upper troposphere and reduces truncation error in the lower troposphere. Finally, it is found that the technique used to grid the initial data causes boundary effects to intrude into the interior of the analysis a distance equal to the average separation between observations.

  3. Ion collection from a plasma by a pinhole

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Herr, Joel L.

    1992-01-01

    Ion focusing by a biased pinhole is studied numerically. Laplace's equation is solved in 3-D for cylindrical symmetry on a constant grid to determine the potential field produced by a biased pinhole in a dielectric material. Focusing factors are studied for ions of uniform incident velocity with a 3-D Maxwellian distribution superimposed. Ion currents to the pinhole are found by particle tracking. The focusing factor of positive ions as a function of initial velocity, temperature, injection radius, and hole size is reported. For a typical Space Station Freedom environment (oxygen ions having a 4.5 eV ram energy, 0.1 eV temperature, and a -140 V biased pinhole), a focusing factor of 13.35 is found for a 1.5 mm radius pinhole.

  4. Simultaneous statistical bias correction of multiple PM2.5 species from a regional photochemical grid model

    EPA Science Inventory

    In recent years environmental epidemiologists have begun utilizing regional-scale air quality computer models to predict ambient air pollution concentrations in health studies instead of, or in addition to, monitoring data from central sites. The advantages of using such models i...

  5. Introducing GFWED: The Global Fire Weather Database

    NASA Technical Reports Server (NTRS)

    Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.

    2015-01-01

    The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA) and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Agreement between the gridded and station-based calculations was poorest at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: the MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.

  6. Are ion acoustic waves supported by high-density plasmas in the Large Plasma Device (LaPD)?

    NASA Astrophysics Data System (ADS)

    Roycroft, Rebecca; Dorfman, Seth; Carter, Troy A.; Gekelman, Walter; Tripathi, Shreekrishna

    2012-10-01

    Ion acoustic waves are a type of longitudinal wave in a plasma, propagating through the motion of the ions. The wave plays a key role in a parametric decay process thought to be responsible for the spectrum of turbulence observed in the solar wind. In recent LaPD experiments aimed at studying this process, modes thought to be ion acoustic waves are strongly damped when the pump Alfven waves are turned off. This observation motivates an experiment focused on directly launching ion acoustic waves under similar conditions. Our first attempt to launch ion acoustic waves using a metal grid in the plasma was unsuccessful at high magnetic fields and densities due to electrons shorting out the bias applied between the grid and the wall. Results from a new device based on [1] to launch ion acoustic waves will be presented; this device will consist of a small chamber with a plasma source separated from the main chamber by two biased grids. The plasma created inside the small device will be held at a different potential from the main plasma; modulation of this difference should affect the ions, allowing ion acoustic waves to be launched and their properties compared to the prior LaPD experiments. [1] W. Gekelman and R. L. Stenzel, Phys. Fluids 21, 2014 (1978).

  7. Bias correction of satellite precipitation products for flood forecasting application at the Upper Mahanadi River Basin in Eastern India

    NASA Astrophysics Data System (ADS)

    Beria, H.; Nanda, T., Sr.; Chatterjee, C.

    2015-12-01

    High-resolution satellite precipitation products, such as those from the Tropical Rainfall Measuring Mission (TRMM), the Climate Forecast System Reanalysis (CFSR), and the European Centre for Medium-Range Weather Forecasts (ECMWF), offer a promising alternative for flood forecasting in data-scarce regions. At the current state of the art, these products cannot be used in raw form for flood forecasting, even at short lead times. In the current study, these precipitation products are bias corrected using statistical techniques, such as additive and multiplicative bias corrections and wavelet multi-resolution analysis (MRA), against the India Meteorological Department (IMD) gridded precipitation product, obtained from gauge-based rainfall estimates. Neural-network-based rainfall-runoff modeling using these bias-corrected products provides encouraging results for flood forecasting up to 48 hours lead time. We will present various statistical and graphical interpretations of catchment response to high rainfall events using both the raw and bias-corrected precipitation products at different lead times.
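    A minimal sketch of the additive and multiplicative bias corrections mentioned above, applied to a synthetic satellite precipitation series against a gauge-based reference; the data, variable names, and calibration period are hypothetical and do not reproduce the study's datasets.

```python
import numpy as np

def additive_bias_correction(sat, ref):
    """Shift the satellite series so its mean matches the reference mean."""
    return sat + (ref.mean() - sat.mean())

def multiplicative_bias_correction(sat, ref):
    """Scale the satellite series so its mean matches the reference mean
    (common for precipitation, which must remain non-negative)."""
    scale = ref.mean() / sat.mean() if sat.mean() > 0 else 1.0
    return sat * scale

# Hypothetical daily rainfall (mm) over a calibration period.
rng = np.random.default_rng(0)
gauge = rng.gamma(shape=0.8, scale=6.0, size=365)                  # gauge-based reference
sat = 0.7 * gauge + rng.normal(0.0, 1.0, size=365).clip(min=0)     # biased satellite product

corrected = multiplicative_bias_correction(sat, gauge)
print("raw mean bias:", sat.mean() - gauge.mean())
print("corrected mean bias:", corrected.mean() - gauge.mean())
```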

  8. Unstructured viscous grid generation by advancing-front method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    1993-01-01

    A new method of generating unstructured triangular/tetrahedral grids with high-aspect-ratio cells is proposed. The method is based on a new grid-marching strategy, referred to as 'advancing layers', for construction of highly stretched cells in the boundary layer, and on the conventional advancing-front technique for generation of regular, equilateral cells in the inviscid-flow region. Unlike existing semi-structured viscous grid generation techniques, the new procedure relies on a totally unstructured advancing-front grid strategy, resulting in substantially enhanced grid flexibility and efficiency. The method is conceptually simple but powerful, capable of producing high-quality viscous grids for complex configurations with ease. A number of two-dimensional triangular grids are presented to demonstrate the methodology. The basic elements of the method, however, have been primarily designed with three-dimensional problems in mind, making it extendable to tetrahedral viscous grid generation.

  9. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

    Large point sources in the Chungnam area received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevailing summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during the KORUS-AQ 2016 campaign, including its aircraft measurements. In general, the horizontal grid resolution of Eulerian photochemical models has profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models: emissions in a grid cell are assumed to be well mixed below the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with extensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.

  10. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.

    2011-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
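    The abstract concerns agglomerated multigrid for unstructured grids; as a much simpler illustration of the underlying coarse-grid correction idea, the sketch below runs a two-grid cycle (weighted-Jacobi smoothing, residual restriction, direct coarse solve, linear prolongation) for a 1D Poisson problem. It is a generic textbook example, not the node-centered formulation studied in the paper, and all names are illustrative.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2/3):
    """Weighted-Jacobi smoothing for -u'' = f on a uniform grid (Dirichlet BCs)."""
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """One two-grid cycle: smooth, restrict residual, solve coarse problem, prolong, smooth."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # fine-grid residual
    rc = r[::2].copy()                                             # injection to coarse grid
    n_c = rc.size
    # Direct coarse solve of -e'' = r with the same 3-point stencil (spacing 2h).
    A = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
         - np.diag(np.ones(n_c - 3), -1)) / (2 * h) ** 2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # linear prolongation
    return jacobi(u + e, f, h)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```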

  11. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  12. Intercomparison and validation of the mixed layer depth fields of global ocean syntheses

    NASA Astrophysics Data System (ADS)

    Toyoda, Takahiro; Fujii, Yosuke; Kuragano, Tsurane; Kamachi, Masafumi; Ishikawa, Yoichi; Masuda, Shuhei; Sato, Kanako; Awaji, Toshiyuki; Hernandez, Fabrice; Ferry, Nicolas; Guinehut, Stéphanie; Martin, Matthew J.; Peterson, K. Andrew; Good, Simon A.; Valdivieso, Maria; Haines, Keith; Storto, Andrea; Masina, Simona; Köhl, Armin; Zuo, Hao; Balmaseda, Magdalena; Yin, Yonghong; Shi, Li; Alves, Oscar; Smith, Gregory; Chang, You-Soon; Vernieres, Guillaume; Wang, Xiaochun; Forget, Gael; Heimbach, Patrick; Wang, Ou; Fukumori, Ichiro; Lee, Tong

    2017-08-01

    Intercomparison and evaluation of the global ocean surface mixed layer depth (MLD) fields estimated from a suite of major ocean syntheses are conducted. Compared with the reference MLDs calculated from individual profiles, MLDs calculated from monthly mean and gridded profiles show negative biases of 10-20 m in early spring related to the re-stratification process of relatively deep mixed layers. Vertical resolution of profiles also influences the MLD estimation. MLDs are underestimated by approximately 5-7 (14-16) m with the vertical resolution of 25 (50) m when the criterion of potential density exceeding the 10-m value by 0.03 kg m-3 is used for the MLD estimation. Using the larger criterion (0.125 kg m-3) generally reduces the underestimations. In addition, positive biases greater than 100 m are found in wintertime subpolar regions when MLD criteria based on temperature are used. Biases of the reanalyses are due to both model errors and errors related to differences between the assimilation methods. The result shows that these errors are partially cancelled out through the ensemble averaging. Moreover, the bias in the ensemble mean field of the reanalyses is smaller than in the observation-only analyses. This is largely attributed to comparably higher resolutions of the reanalyses. The robust reproduction of both the seasonal cycle and interannual variability by the ensemble mean of the reanalyses indicates a great potential of the ensemble mean MLD field for investigating and monitoring upper ocean processes.
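    A minimal sketch of the density-threshold MLD criterion described above (potential density exceeding its 10-m value by 0.03 or 0.125 kg m-3), applied to a single hypothetical profile; the profile shape and vertical resolution are invented for illustration.

```python
import numpy as np

def mixed_layer_depth(depth, sigma, ref_depth=10.0, criterion=0.03):
    """Shallowest depth where potential density exceeds its 10-m value by
    `criterion` (kg m^-3); returns the deepest level if never exceeded."""
    sigma_ref = np.interp(ref_depth, depth, sigma)
    exceed = np.where(sigma >= sigma_ref + criterion)[0]
    return depth[exceed[0]] if exceed.size else depth[-1]

# Hypothetical profile: a well-mixed upper layer over a pycnocline near 60 m.
depth = np.arange(0.0, 300.0, 5.0)                          # 5-m vertical resolution
sigma = 25.0 + 0.5 / (1.0 + np.exp(-(depth - 60.0) / 5.0))  # potential density anomaly

print("MLD, 0.03 kg m-3 criterion :", mixed_layer_depth(depth, sigma))
print("MLD, 0.125 kg m-3 criterion:", mixed_layer_depth(depth, sigma, criterion=0.125))
```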

  13. Comparison of Models for Spacer Grid Pressure Loss in Nuclear Fuel Bundles for One and Two-Phase Flows

    NASA Astrophysics Data System (ADS)

    Maskal, Alan B.

    Spacer grids maintain the structural integrity of the fuel rods within fuel bundles of nuclear power plants. They can also improve flow characteristics within the nuclear reactor core. However, spacer grids add reactor coolant pressure losses, which require estimation and engineering into the design. Several mathematical models and computer codes were developed over decades to predict spacer grid pressure loss. Most models use generalized characteristics, measured by older, less precise equipment. The study of OECD/US-NRC BWR Full-Size Fine Mesh Bundle Tests (BFBT) provides updated and detailed experimental single and two-phase results, using technically advanced flow measurements for a wide range of boundary conditions. This thesis compares the predictions from the mathematical models to the BFBT experimental data by utilizing statistical formulae for accuracy and precision. This thesis also analyzes the effects of BFBT flow characteristics on spacer grids. No single model has been identified as valid for all flow conditions. However, some models' predictions perform better than others within a range of flow conditions, based on the accuracy and precision of the models' predictions. This study also demonstrates that pressure and flow quality have a significant effect on two-phase flow spacer grid models' biases.

  14. Grid-Based Surface Generalized Born Model for Calculation of Electrostatic Binding Free Energies.

    PubMed

    Forouzesh, Negin; Izadi, Saeed; Onufriev, Alexey V

    2017-10-23

    Fast and accurate calculation of solvation free energies is central to many applications, such as rational drug design. In this study, we present a grid-based molecular surface implementation of the "R6" flavor of the generalized Born (GB) implicit solvent model, named GBNSR6. The speed, accuracy relative to the numerical Poisson-Boltzmann treatment, and sensitivity to grid surface parameters are tested on a set of 15 small protein-ligand complexes and a set of biomolecules in the range of 268 to 25099 atoms. Our results demonstrate that the proposed model provides a relatively successful compromise between the speed and accuracy of computing polar components of the solvation free energies (ΔGpol) and binding free energies (ΔΔGpol). The model tolerates a relatively coarse grid size h = 0.5 Å, where the grid artifact error in computing ΔΔGpol remains in the range of kBT ∼ 0.6 kcal/mol. The estimated ΔΔGpol values are well correlated (r² = 0.97) with the numerical Poisson-Boltzmann reference, while showing virtually no systematic bias and an RMSE of 1.43 kcal/mol. The grid-based GBNSR6 model is available in the Amber (AmberTools) package of molecular simulation programs.

  15. Enhanced Conformational Sampling in Molecular Dynamics Simulations of Solvated Peptides: Fragment-Based Local Elevation Umbrella Sampling.

    PubMed

    Hansen, Halvor S; Daura, Xavier; Hünenberger, Philippe H

    2010-09-14

    A new method, fragment-based local elevation umbrella sampling (FB-LEUS), is proposed to enhance the conformational sampling in explicit-solvent molecular dynamics (MD) simulations of solvated polymers. The method is derived from the local elevation umbrella sampling (LEUS) method [ Hansen and Hünenberger , J. Comput. Chem. 2010 , 31 , 1 - 23 ], which combines the local elevation (LE) conformational searching and the umbrella sampling (US) conformational sampling approaches into a single scheme. In LEUS, an initial (relatively short) LE build-up (searching) phase is used to construct an optimized (grid-based) biasing potential within a subspace of conformationally relevant degrees of freedom, which is then frozen and used in a (comparatively longer) US sampling phase. This combination dramatically enhances the sampling power of MD simulations but, due to computational and memory costs, is only applicable to relevant subspaces of low dimensionalities. As an attempt to expand the scope of the LEUS approach to solvated polymers with more than a few relevant degrees of freedom, the FB-LEUS scheme involves an US sampling phase that relies on a superposition of low-dimensionality biasing potentials optimized using LEUS at the fragment level. The feasibility of this approach is tested using polyalanine (poly-Ala) and polyvaline (poly-Val) oligopeptides. Two-dimensional biasing potentials are preoptimized at the monopeptide level, and subsequently applied to all dihedral-angle pairs within oligopeptides of 4,  6,  8, or 10 residues. Two types of fragment-based biasing potentials are distinguished: (i) the basin-filling (BF) potentials act so as to "fill" free-energy basins up to a prescribed free-energy level above the global minimum; (ii) the valley-digging (VD) potentials act so as to "dig" valleys between the (four) free-energy minima of the two-dimensional maps, preserving barriers (relative to linearly interpolated free-energy changes) of a prescribed magnitude. The application of these biasing potentials may lead to an impressive enhancement of the searching power (volume of conformational space visited in a given amount of simulation time). However, this increase is largely offset by a deterioration of the statistical efficiency (representativeness of the biased ensemble in terms of the conformational distribution appropriate for the physical ensemble). As a result, it appears difficult to engineer FB-LEUS schemes representing a significant improvement over plain MD, at least for the systems considered here.

  16. Electron-less negative ion extraction from ion-ion plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rafalskyi, Dmytro; Aanesland, Ane

    2015-03-09

    This paper presents experimental results showing that continuous negative ion extraction, without co-extracted electrons, is possible from highly electronegative SF6 ion-ion plasma at low gas pressure (1 mTorr). The ratio between the negative ion and electron densities is more than 3000 in the vicinity of the two-grid extraction and acceleration system. The measurements are conducted by both magnetized and non-magnetized energy analyzers attached to the external grid. With these two analyzers, we show that the extracted negative ion flux is almost electron-free and has the same magnitude as the positive ion flux extracted and accelerated when the grids are biased oppositely. The results presented here can be used for validation of numerical and analytical models of ion extraction from ion-ion plasma.

  17. A methodology to ensure and improve accuracy of Ki67 labelling index estimation by automated digital image analysis in breast cancer tissue

    PubMed Central

    2014-01-01

    Introduction Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Methods Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. Criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. A bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA by adjusting the algorithm settings were performed: first, by subjective DIA quality assessment (DIA-1), and second, to compensate the bias established (DIA-2). Visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. Results ANOVA revealed significant underestimation bias (P < 0.05) for DIA-0, DIA-1 and two pathologists’ VE, while DIA-2, VE-median and three other VEs were within the same range. Regression analyses revealed best accuracy for the DIA-2 (R-square = 0.90) exceeding that of VE-median, individual VEs and other DIA settings. Bidirectional bias for the DIA-2 with overestimation at low, and underestimation at high ends of the scale was detected. Measurement error correction by inverse regression was applied to improve DIA-2-based prediction of the Ki67-Count, in particular for the clinically relevant interval of Ki67-Count < 40%. Potential clinical impact of the prediction was tested by dichotomising the cases at the cut-off values of 10, 15, and 20%. Misclassification rate of 5-7% was achieved, compared to that of 11-18% for the VE-median-based prediction. Conclusions Our experiments provide methodology to achieve accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers. PMID:24708745
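    A minimal sketch of the calibration-by-regression and inverse-regression correction idea described above, using synthetic reference counts and DIA readings; the bias model, noise level, and cut-off handling are assumptions for illustration, not the study's data or algorithm settings.

```python
import numpy as np

rng = np.random.default_rng(1)
ki67_count = rng.uniform(2.0, 60.0, size=164)       # reference: stereology grid count (%)
# Hypothetical DIA readings with a systematic bias plus measurement noise:
dia = 5.0 + 0.8 * ki67_count + rng.normal(0.0, 3.0, size=164)

# Calibration: regress DIA on the reference count, DIA = a + b * Count.
b, a = np.polyfit(ki67_count, dia, 1)

def corrected_count(dia_value):
    """Inverse regression: predict the reference count from a DIA reading."""
    return (dia_value - a) / b

print("corrected Ki67 LI for a DIA reading of 20: %.1f%%" % corrected_count(20.0))

# Effect on dichotomisation at a clinical cut-off (e.g. 15%).
cutoff = 15.0
pred = corrected_count(dia)
misclassified = np.mean((pred >= cutoff) != (ki67_count >= cutoff))
print("misclassification rate at the 15%% cut-off: %.1f%%" % (100 * misclassified))
```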

  18. Spectral methods on arbitrary grids

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Gottlieb, David

    1995-01-01

    Stable and spectrally accurate numerical methods are constructed on arbitrary grids for partial differential equations. These new methods are equivalent to conventional spectral methods but do not rely on specific grid distributions. Specifically, we show how to implement Legendre Galerkin, Legendre collocation, and Laguerre Galerkin methodology on arbitrary grids.

  19. The effects of spatial heterogeneity and subsurface lateral transfer on evapotranspiration estimates in large scale Earth system models

    NASA Astrophysics Data System (ADS)

    Rouholahnejad, E.; Fan, Y.; Kirchner, J. W.; Miralles, D. G.

    2017-12-01

    Most Earth system models (ESM) average over considerable sub-grid heterogeneity in land surface properties, and overlook subsurface lateral flow. This could potentially bias evapotranspiration (ET) estimates and has implications for future temperature predictions, since overestimations in ET imply greater latent heat fluxes and potential underestimation of dry and warm conditions in the context of climate change. Here we quantify the bias in evaporation estimates that may arise from the fact that ESMs average over considerable heterogeneity in surface properties, and also neglect lateral transfer of water across the heterogeneous landscapes at global scale. We use a Budyko framework to express ET as a function of P and PET to derive simple sub-grid closure relations that quantify how spatial heterogeneity and lateral transfer could affect average ET as seen from the atmosphere. We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. In addition, we use the Total Water Storage (TWS) anomaly estimates from the Gravity Recovery and Climate Experiment (GRACE) remote sensing product and assimilate it into the Global Land Evaporation Amsterdam Model (GLEAM) to correct for existing free drainage lower boundary condition in GLEAM and quantify whether, and how much, accounting for changes in terrestrial storage can improve the simulation of soil moisture and regional ET fluxes at global scale.
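    A toy illustration of the sub-grid averaging bias described above: applying a Budyko-type curve to grid-mean P and PET gives a larger ET than averaging the sub-grid ET values when P and PET are inversely correlated across the cell. The specific Budyko form (Turc-Pike) and the two sub-grid values are assumptions for illustration, not the paper's closure relations.

```python
import numpy as np

def budyko_et(p, pet):
    """One common Budyko-type form (Turc-Pike): ET = P / sqrt(1 + (P/PET)^2)."""
    return p / np.sqrt(1.0 + (p / pet) ** 2)

# Hypothetical mountainous grid cell split into two sub-grid halves where P and
# PET are inversely correlated (wet/cool highlands vs dry/warm lowlands), mm/yr.
p_sub = np.array([1800.0, 400.0])
pet_sub = np.array([600.0, 1400.0])

et_of_means = budyko_et(p_sub.mean(), pet_sub.mean())   # ESM-style: average forcing first
mean_of_ets = budyko_et(p_sub, pet_sub).mean()          # sub-grid ET, then average
print("ET from grid-mean forcing: %.0f mm/yr" % et_of_means)
print("mean of sub-grid ET      : %.0f mm/yr" % mean_of_ets)
print("overestimation           : %.0f mm/yr" % (et_of_means - mean_of_ets))
```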

  20. CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes

    NASA Technical Reports Server (NTRS)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-01-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  1. The impact of the resolution of meteorological datasets on catchment-scale drought studies

    NASA Astrophysics Data System (ADS)

    Hellwig, Jost; Stahl, Kerstin

    2017-04-01

    Gridded meteorological datasets provide the basis for studying drought at a range of scales, including catchment-scale drought studies in hydrology. They are readily available for studying past weather conditions and often serve real-time monitoring as well. As these datasets differ in spatial/temporal coverage and spatial/temporal resolution, for most studies there is a tradeoff between these features. Our investigation examines whether biases occur when studying drought at the catchment scale with low-resolution input data. For that, a comparison among the datasets HYRAS (covering Central Europe, 1x1 km grid, daily data, 1951-2005), E-OBS (Europe, 0.25° grid, daily data, 1950-2015) and GPCC (whole world, 0.5° grid, monthly data, 1901-2013) is carried out. Generally, biases in precipitation increase with decreasing resolution. The largest differences are found during summer. In the low mountain ranges of Central Europe, the coarser-resolution datasets (E-OBS, GPCC) overestimate dry days and underestimate total precipitation, since they are not able to resolve the high spatial variability. However, relative measures like the correlation coefficient reveal good consistency of dry and wet periods, both for absolute precipitation values and for standardized indices like the Standardized Precipitation Index (SPI) or Standardized Precipitation Evaporation Index (SPEI). In particular, the most severe droughts derived from the different datasets match very well. These results indicate that absolute values from coarse-resolution datasets applied at the catchment scale should be used with caution for assessing hydrological drought, whereas relative measures for identifying drought periods are more trustworthy. Therefore, drought studies that downscale meteorological data should carefully consider their data needs and focus on relative measures for dry periods where these are sufficient for the task.

  2. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. In contrast to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings that map areas with user-prescribed, irregular boundaries to those with regular boundaries (e.g., disks, slits, etc.). The first method aims at improving existing dipolar grids. Compared with existing grids, the sample grid achieves a better trade-off between the enlargement of the latitudinal-longitudinal portion and the overall smooth grid cell size transition. The second method addresses more modern and advanced grid design requirements arising from high-resolution and multi-scale ocean modeling. The generated grids could potentially achieve the alignment of grid lines to the large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the grids are orthogonal curvilinear, they can be easily utilized by the majority of ocean general circulation models that are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling where a complex land-sea distribution is present.

  3. Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations

    NASA Technical Reports Server (NTRS)

    Moon, Young J.; Liou, Meng-Sing

    1989-01-01

    Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.

  4. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Thomas, James L.; Nishikawa, Hiroaki; Diskin, Boris

    2009-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and highly stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Actual cycle results are verified using quantitative analysis methods in which parts of the cycle are replaced by their idealized counterparts.

  5. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size - A coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.

  6. The Effect of Elevation Bias in Interpolated Air Temperature Data Sets on Surface Warming in China During 1951-2015

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Sun, Fubao; Ge, Quansheng; Kleidon, Axel; Liu, Wenbin

    2018-02-01

    Although gridded air temperature data sets share much of the same observations, different rates of warming can be detected due to the different approaches employed for considering elevation signatures in the interpolation processes. Here we examine the influence of the varying spatiotemporal distribution of sites on surface warming in the long-term trend and over the recent warming hiatus period in China during 1951-2015. A suspicious cooling trend in the raw interpolated air temperature time series is found in the 1950s, 91% of which can be explained by the artificial elevation changes introduced by the interpolation process. We define the regression slope relating temperature difference to elevation difference as the bulk lapse rate of -5.6°C/km, which tends to be steeper (-8.7°C/km) in dry regions but shallower (-2.4°C/km) in wet regions. Compared to independent experimental observations, we find that the estimated monthly bulk lapse rates capture the elevation bias well. Significant improvement is achieved by adjusting the original interpolated temperature time series using the bulk lapse rate. The results highlight that the developed bulk lapse rate is useful for accounting for the elevation signature in the interpolation of site-based surface air temperature to gridded data sets and is necessary for avoiding elevation bias in climate change studies.
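    A minimal sketch of applying a bulk lapse rate to adjust an interpolated temperature from the mean elevation of the contributing stations to the target grid-cell elevation; the station and cell elevations are hypothetical, and the -5.6°C/km value is the paper's reported mean (drier regions closer to -8.7°C/km, wetter regions closer to -2.4°C/km).

```python
def adjust_to_target_elevation(t_interp, z_source_mean, z_target, bulk_lapse_rate=-5.6):
    """Adjust an interpolated temperature (degC) from the mean elevation of the
    contributing stations to the target grid-cell elevation, using a bulk lapse
    rate in degC per km (negative: temperature decreases with height)."""
    dz_km = (z_target - z_source_mean) / 1000.0
    return t_interp + bulk_lapse_rate * dz_km

# Hypothetical grid cell at 2500 m whose contributing stations average 1300 m:
t_raw = 8.4  # degC, raw interpolated value
print("elevation-adjusted temperature:", adjust_to_target_elevation(t_raw, 1300.0, 2500.0))
```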

  7. Hyperbolic Prismatic Grid Generation and Solution of Euler Equations on Prismatic Grids

    NASA Technical Reports Server (NTRS)

    Pandya, S. A.; Chattot, JJ; Hafez, M. M.; Kutler, Paul (Technical Monitor)

    1994-01-01

    A hyperbolic grid generation method is used to generate prismatic grids, and an approach using prismatic grids to solve the Euler equations is presented. The theory of the stability and feasibility of the hyperbolic grid generation method is presented. The hyperbolic grid generation method of Steger et al. for structured grids is applied to a three-dimensional triangulated surface definition to generate a grid that is unstructured on each successive layer. The grid, however, retains structure in the body-normal direction and has computational cells shaped like triangular prisms. To take advantage of the structure in the normal direction, a finite-volume scheme that treats the unknowns along the normal direction implicitly is introduced, and the flow over a sphere is simulated.

  8. A robust, efficient equidistribution 2D grid generation method

    NASA Astrophysics Data System (ADS)

    Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni

    2007-11-01

    We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be effectively used. We demonstrate a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).

  9. PEAK LIMITING AMPLIFIER

    DOEpatents

    Goldsworthy, W.W.; Robinson, J.B.

    1959-03-31

    A peak voltage amplitude limiting system adapted for use with a cascade-type amplifier is described. In its detailed aspects, the invention includes an amplifier having at least a first triode tube and a second triode tube, the cathode of the second tube being connected to the anode of the first tube. A peak limiter triode tube has its control grid coupled to the anode of the second tube and its anode connected to the cathode of the second tube. The operation of the limiter is controlled by a bias voltage source connected to the control grid of the limiter tube, and the output of the system is taken from the anode of the second tube.

  10. Evaluation of Greenland near surface air temperature datasets

    DOE PAGES

    Reeves Eyre, J. E. Jack; Zeng, Xubin

    2017-07-05

    Near-surface air temperature (SAT) over Greenland has important effects on mass balance of the ice sheet, but it is unclear which SAT datasets are reliable in the region. Here extensive in situ SAT measurements (∼1400 station-years) are used to assess monthly mean SAT from seven global reanalysis datasets, five gridded SAT analyses, one satellite retrieval and three dynamically downscaled reanalyses. Strengths and weaknesses of these products are identified, and their biases are found to vary by season and glaciological regime. MERRA2 reanalysis overall performs best with mean absolute error less than 2 °C in all months. Ice sheet-average annual mean SAT from different datasets are highly correlated in recent decades, but their 1901–2000 trends differ even in sign. Compared with the MERRA2 climatology combined with gridded SAT analysis anomalies, thirty-one earth system model historical runs from the CMIP5 archive reach ∼5 °C for the 1901–2000 average bias and have opposite trends for a number of sub-periods.

  11. Evaluation of Greenland near surface air temperature datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeves Eyre, J. E. Jack; Zeng, Xubin

    Near-surface air temperature (SAT) over Greenland has important effects on mass balance of the ice sheet, but it is unclear which SAT datasets are reliable in the region. Here extensive in situ SAT measurements (∼1400 station-years) are used to assess monthly mean SAT from seven global reanalysis datasets, five gridded SAT analyses, one satellite retrieval and three dynamically downscaled reanalyses. Strengths and weaknesses of these products are identified, and their biases are found to vary by season and glaciological regime. MERRA2 reanalysis overall performs best with mean absolute error less than 2 °C in all months. Ice sheet-average annual mean SAT from different datasets are highly correlated in recent decades, but their 1901–2000 trends differ even in sign. Compared with the MERRA2 climatology combined with gridded SAT analysis anomalies, thirty-one earth system model historical runs from the CMIP5 archive reach ∼5 °C for the 1901–2000 average bias and have opposite trends for a number of sub-periods.

  12. On the improvement for charging large-scale flexible electrostatic actuators

    NASA Astrophysics Data System (ADS)

    Liao, Hsu-Ching; Chen, Han-Long; Su, Yu-Hao; Chen, Yu-Chi; Ko, Wen-Ching; Liou, Chang-Ho; Wu, Wen-Jong; Lee, Chih-Kung

    2011-04-01

    Recently, the development of flexible electret-based electrostatic actuators has been widely discussed. These devices were shown to have high sound quality, low energy consumption, and a flexible structure, and they can be cut to any shape. However, achieving a uniform charge on the electret diaphragm is one of the most critical processes needed to have the speaker ready for large-scale production. In this paper, corona discharge equipment containing multiple corona probes and a grid bias was set up to inject space charges into the electret diaphragm. The multi-corona-probe system was optimized to achieve a uniform charge distribution on the electret diaphragm. The processing conditions include the distance between the corona probes, the voltages of the corona probes and the grid bias, etc. We assembled the flexible electret loudspeakers first and then measured their sound pressure and beam pattern. The uniform charge distribution within the electret diaphragm allows the flexible electret loudspeaker to be shaped arbitrarily and its sound distribution to be tailored to specification. Potential future applications for this device, such as sound posters, smart clothes, and sound wallpaper, are discussed as well.

  13. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.

  14. Manual Optical Attitude Re-initialization of a Crew Vehicle in Space Using Bias Corrected Gyro Data

    NASA Astrophysics Data System (ADS)

    Gioia, Christopher J.

    NASA and other space agencies have shown interest in sending humans on missions beyond low Earth orbit. Proposed is an algorithm that estimates the attitude of a manned spacecraft using measured line-of-sight (LOS) vectors to stars and gyroscope measurements. The Manual Optical Attitude Reinitialization (MOAR) algorithm and corresponding device draw inspiration from existing technology from the Gemini, Apollo and Space Shuttle programs. The improvement over these devices is the capability of estimating gyro bias completely independently of re-initializing attitude. It may be applied to the lost-in-space problem, where the spacecraft's attitude is unknown. In this work, a model was constructed that simulated gyro data using the Farrenkopf gyro model, and LOS measurements from a spotting scope were then computed from it. Using these simulated measurements, gyro bias was estimated by comparing measured interior star angles to those derived from a star catalog and then minimizing the difference using an optimization technique. Several optimization techniques were analyzed, and it was determined that the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm performed the best when combined with a grid search technique. Once estimated, the gyro bias was removed and attitude was determined by solving the Wahba Problem via the Singular Value Decomposition (SVD) approach. Several Monte Carlo simulations were performed that examined different operating conditions for the MOAR algorithm. These included the effects of bias instability, using different constellations for data collection, sampling star measurements in different orders, and varying the time between measurements. A common method of estimating gyro bias and attitude in a Multiplicative Extended Kalman Filter (MEKF) was also explored and shown to be unsuitable for the MOAR algorithm. A prototype was also constructed to validate the proposed concepts. It was built using a simple spotting scope, a MEMS-grade IMU, and a Raspberry Pi computer. It was mounted on a tripod and used to target stars with the scope and measure the rotation between them with the IMU. The raw measurements were then post-processed using the MOAR algorithm, and attitude estimates were determined. Two different constellations, the Big Dipper and Orion, were used for experimental data collection. The results suggest that the novel method of estimating gyro bias independently of attitude presented in this document is credible for use onboard a spacecraft.
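    A minimal sketch of the SVD solution to the Wahba problem mentioned above, checked against a known rotation with slightly noisy line-of-sight vectors; the star geometry, noise level, and function names are invented for illustration, and the gyro-bias estimation step is not included.

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights=None):
    """Solve Wahba's problem: find the rotation R minimizing
    sum_i w_i ||b_i - R r_i||^2, via the SVD of the attitude profile matrix."""
    b = np.asarray(body_vecs, float)
    r = np.asarray(ref_vecs, float)
    w = np.ones(len(b)) if weights is None else np.asarray(weights, float)
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
    U, _, Vt = np.linalg.svd(B)
    M = np.diag([1.0, 1.0, np.linalg.det(U) * np.linalg.det(Vt)])  # enforce det(R) = +1
    return U @ M @ Vt

# Hypothetical check: unit LOS vectors to three stars in the reference frame,
# rotated into the body frame by a known attitude plus small measurement noise.
rng = np.random.default_rng(2)
r_vecs = rng.normal(size=(3, 3))
r_vecs /= np.linalg.norm(r_vecs, axis=1, keepdims=True)
ang = np.deg2rad(30.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
b_vecs = (R_true @ r_vecs.T).T + 1e-4 * rng.normal(size=(3, 3))

R_est = wahba_svd(b_vecs, r_vecs)
cos_err = np.clip((np.trace(R_est @ R_true.T) - 1.0) / 2.0, -1.0, 1.0)
print("attitude error (deg):", np.rad2deg(np.arccos(cos_err)))
```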

  15. Broadband and high modulation-depth THz modulator using low bias controlled VO2-integrated metasurface.

    PubMed

    Zhou, Gaochao; Dai, Penghui; Wu, Jingbo; Jin, Biaobing; Wen, Qiye; Zhu, Guanghao; Shen, Ze; Zhang, Caihong; Kang, Lin; Xu, Weiwei; Chen, Jian; Wu, Peiheng

    2017-07-24

    An active vanadium-dioxide-integrated metasurface offering broadband transmitted terahertz wave modulation with large modulation depth under electrical control is demonstrated. The device consists of metal bias lines arranged with a grid-structure patterned vanadium dioxide (VO2) film on a sapphire substrate. Amplitude transmission is continuously tuned from more than 78% to 28% or lower in the frequency range from 0.3 THz to 1.0 THz by means of electrical bias at a temperature of 68 °C. The physical mechanism underlying the device's electrical tunability is investigated and attributed to ohmic heating. The developed device, possessing over 87% modulation depth across a 0.7 THz frequency band, is expected to have many potential applications in the THz regime, such as a tunable THz attenuator.

  16. A modified adjoint-based grid adaptation and error correction method for unstructured grid

    NASA Astrophysics Data System (ADS)

    Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi

    2018-05-01

    Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the sensitive grids to output functions are detected and refined after grid adaptation, and the accuracy of output functions is obviously improved after error correction. The proposed grid adaptation and error correction method is shown to compare very favorably in terms of output accuracy and computational efficiency relative to the traditional featured-based grid adaptation.

  17. Reservoir property grids improve with geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vogt, J.

    1993-09-01

    Visualization software, reservoir simulators and many other E and P software applications need reservoir property grids as input. Using geostatistics, as compared to other gridding methods, to produce these grids leads to the best output from the software programs. For the purpose stated herein, geostatistics is simply two types of gridding methods. Mathematically, these methods are based on minimizing or duplicating certain statistical properties of the input data. One geostatistical method, called kriging, is used when the highest possible point-by-point accuracy is desired. The other method, called conditional simulation, is used when one wants the statistics and texture of the resulting grid to be the same as for the input data. In the following discussion, each method is explained, compared to other gridding methods, and illustrated through example applications. Proper use of geostatistical data in flow simulations, use of geostatistical data for history matching, and situations where geostatistics has no significant advantage over other methods, also will be covered.
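    A minimal sketch of ordinary kriging with a spherical variogram, the kind of point-by-point estimator described above; the variogram parameters, well locations, and property values are hypothetical, and conditional simulation is not shown.

```python
import numpy as np

def ordinary_krige(xy_obs, z_obs, xy_grid, sill=1.0, rng_a=500.0, nugget=0.0):
    """Ordinary kriging with a spherical semivariogram model.
    Returns best-linear-unbiased estimates at the requested grid points."""
    def gamma(h):  # spherical semivariogram, flat beyond the range rng_a
        h = np.minimum(h, rng_a)
        return nugget + (sill - nugget) * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)

    n = len(z_obs)
    d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d_oo)
    A[n, n] = 0.0                            # Lagrange-multiplier row/column
    est = np.empty(len(xy_grid))
    for k, p in enumerate(xy_grid):
        d_op = np.linalg.norm(xy_obs - p, axis=1)
        rhs = np.append(gamma(d_op), 1.0)
        lam = np.linalg.solve(A, rhs)[:n]    # kriging weights
        est[k] = lam @ z_obs
    return est

# Hypothetical well picks of a reservoir property (e.g. porosity, %):
xy_obs = np.array([[0.0, 0.0], [400.0, 100.0], [150.0, 350.0], [300.0, 300.0]])
z_obs = np.array([12.0, 18.0, 15.0, 16.5])
xy_grid = np.array([[200.0, 200.0], [50.0, 50.0]])
print(ordinary_krige(xy_obs, z_obs, xy_grid))
```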

  18. Field induced transient current in one-dimensional nanostructure

    NASA Astrophysics Data System (ADS)

    Sako, Tokuei; Ishida, Hiroshi

    2018-07-01

    Field-induced transient current in one-dimensional nanostructures has been studied by a model of an electron confined in a 1D attractive Gaussian potential subjected both to electrodes at the terminals and to an ultrashort pulsed oscillatory electric field with the central frequency ω and the FWHM pulse width Γ. The time-propagation of the electron wave packet has been simulated by integrating the time-dependent Schrödinger equation directly relying on the second-order symplectic integrator method. The transient current has been calculated as the flux of the probability density of the escaping wave packet emitted from the downstream side of the confining potential. When a static bias-field E0 is suddenly applied, the resultant transient current shows an oscillatory decay behavior with time followed by a minimum structure before converging to a nearly constant value. The ω-dependence of the integrated transient current induced by the pulsed electric field has shown an asymmetric resonance line-shape for large Γ while it shows a fringe pattern on the spectral line profile for small Γ. These observations have been rationalized on the basis of the energy-level structure and lifetime of the quasibound states in the bias-field modified confining potential obtained by the complex-scaling Fourier grid Hamiltonian method.

  19. Characterization of scatter in digital mammography from physical measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.

    2014-06-15

    Purpose: That scattered radiation negatively impacts the quality of medical radiographic imaging is well known. In mammography, even slight amounts of scatter reduce the high contrast required for subtle soft-tissue imaging. In current clinical mammography, image contrast is partially improved by use of an antiscatter grid. This form of scatter rejection comes with a sizeable dose penalty related to the concomitant elimination of valuable primary radiation. Digital mammography allows the use of image processing as a method of scatter correction that might avoid effects that negatively impact primary radiation, while potentially providing more contrast improvement than is currently possible with a grid. For this approach to be feasible, a detailed characterization of the scatter is needed. Previous research has modeled scatter as a constant background that serves as a DC bias across the imaging surface. The goal of this study was to provide a more substantive data set for characterizing the spatially-variant features of scatter radiation at the image detector of modern mammography units. Methods: This data set was acquired from a model of the radiation beam as a matrix of very narrow rays or pencil beams. As each pencil beam penetrates tissue, the pencil widens in a predictable manner due to the production of scatter. The resultant spreading of the pencil beam at the detector surface can be characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were calculated from measurements obtained using the beam stop method. Two digital mammography units were utilized, and the SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and presence or absence of a grid. These values were then used to generate general equations allowing the SF and MRE to be calculated for any combination of the above parameters. Results: With a grid, the SF ranged from a minimum of about 0.05 to a maximum of about 0.16, and the MRE ranged from about 3 to 13 mm. Without a grid, the SF ranged from a minimum of 0.25 to a maximum of 0.52, and the MRE ranged from about 20 to 45 mm. The SF with a grid demonstrated a mild dependence on target/filter combination and kV, whereas the SF without a grid was independent of these factors. The MRE demonstrated a complex relationship as a function of kV, with notable differences among target/filter combinations. The primary source of change in both the SF and MRE was phantom thickness. Conclusions: Because breast tissue varies spatially in physical density and elemental content, the effective thickness of breast tissue varies spatially across the imaging field, resulting in a spatially-variant scatter distribution in the imaging field. The data generated in this study can be used to characterize the scatter contribution on a point-by-point basis, for a variety of different techniques.

  20. Multigrid method based on the transformation-free HOC scheme on nonuniform grids for 2D convection diffusion problems

    NASA Astrophysics Data System (ADS)

    Ge, Yongbin; Cao, Fujun

    2011-05-01

    In this paper, a multigrid method based on the high-order compact (HOC) difference scheme on nonuniform grids proposed by Kalita et al. [J.C. Kalita, A.K. Dass, D.C. Dalal, A transformation-free HOC scheme for steady convection-diffusion on non-uniform grids, Int. J. Numer. Methods Fluids 44 (2004) 33-53] is developed to solve the two-dimensional (2D) convection diffusion equation. The HOC scheme does not involve any grid transformation to map the nonuniform grids to uniform ones; consequently, the multigrid method is new for solving the discrete system arising from the difference equation on nonuniform grids. The corresponding multigrid projection and interpolation operators are constructed by the area ratio. Some boundary layer and local singularity problems are used to demonstrate the superiority of the present method. Numerical results show that the multigrid method with the HOC scheme on nonuniform grids achieves nearly the same convergence rate as on uniform grids, and the computed solution on nonuniform grids retains fourth-order accuracy, whereas uniform grids give very poor solutions for very steep boundary layer or strong local singularity problems. The present method is also applied to solve the 2D incompressible Navier-Stokes equations using the stream function-vorticity formulation, and the numerical solutions of the lid-driven cavity flow problem are obtained and compared with solutions available in the literature.

  1. Abstract: Inference and Interval Estimation for Indirect Effects With Latent Variable Models.

    PubMed

    Falk, Carl F; Biesanz, Jeremy C

    2011-11-30

    Models specifying indirect effects (or mediation) and structural equation modeling are both popular in the social sciences. Yet relatively little research has compared methods that test for indirect effects among latent variables and provided precise estimates of the effectiveness of different methods. This simulation study provides an extensive comparison of methods for constructing confidence intervals and for making inferences about indirect effects with latent variables. We compared the percentile (PC) bootstrap, bias-corrected (BC) bootstrap, bias-corrected accelerated (BC a ) bootstrap, likelihood-based confidence intervals (Neale & Miller, 1997), partial posterior predictive (Biesanz, Falk, and Savalei, 2010), and joint significance tests based on Wald tests or likelihood ratio tests. All models included three reflective latent variables representing the independent, dependent, and mediating variables. The design included the following fully crossed conditions: (a) sample size: 100, 200, and 500; (b) number of indicators per latent variable: 3 versus 5; (c) reliability per set of indicators: .7 versus .9; (d) and 16 different path combinations for the indirect effect (α = 0, .14, .39, or .59; and β = 0, .14, .39, or .59). Simulations were performed using a WestGrid cluster of 1680 3.06GHz Intel Xeon processors running R and OpenMx. Results based on 1,000 replications per cell and 2,000 resamples per bootstrap method indicated that the BC and BC a bootstrap methods have inflated Type I error rates. Likelihood-based confidence intervals and the PC bootstrap emerged as methods that adequately control Type I error and have good coverage rates.
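
    As a minimal illustration of one of the better-performing methods above, the sketch below computes a percentile bootstrap confidence interval for an indirect effect a·b, using observed (rather than latent) variables and ordinary least squares for simplicity. The simulated data, sample size, and variable names are illustrative; the study itself fitted latent-variable models in OpenMx.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a true indirect effect X -> M -> Y (illustrative only).
n = 200
x = rng.normal(size=n)
m = 0.39 * x + rng.normal(size=n)
y = 0.39 * m + rng.normal(size=n)

def indirect(x, m, y):
    """Product-of-coefficients estimate of the indirect effect a*b."""
    a = np.polyfit(x, m, 1)[0]                              # slope of M on X
    design = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(design, y, rcond=None)[0][0]        # slope of Y on M, given X
    return a * b

boot = []
for _ in range(2000):                                       # 2,000 resamples, as in the study
    idx = rng.integers(0, n, n)
    boot.append(indirect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, "
      f"95% percentile CI = [{lo:.3f}, {hi:.3f}]")
```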

  2. Evaluation of Aquarius Version-5 Sea Surface Salinity on various spatial and temporal scales

    NASA Astrophysics Data System (ADS)

    Lee, T.

    2017-12-01

    Sea surface salinity (SSS) products from Aquarius have had three public releases with progressive improvement in data quality: Versions 2, 3, and 4, with the last one being released in October 2015. A systematic assessment of the Version-4, Level-3 Aquarius SSS product was performed on various spatial and temporal scales by comparing it with gridded Argo products (Lee 2016, Geophys. Res. Lett.). The comparison showed that the consistency of Aquarius Version-4 SSS with gridded Argo products is comparable to that between two different gridded Argo products. However, significant seasonal biases remain in high-latitude oceans. Further improvements are being made by the Aquarius team. Aquarius Version 5.0 SSS is scheduled to be released in October 2017 as the final version of the Aquarius Project. This presentation provides an evaluation of Version-5 SSS similar to that reported by Lee (2016) and contrasts it with the current Version-4 SSS.

  3. Size scaling of negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Fantz, U.; Franzen, P.; Kraus, W.; Schiesko, L.; Wimmer, C.; Wünderlich, D.

    2015-04-01

    The RF-driven negative hydrogen ion source (H-, D-) for the international fusion experiment ITER has a width of 0.9 m and a height of 1.9 m and is based on a ⅛ scale prototype source that has been in operation at the IPP test facilities BATMAN and MANITU for many years. Among the challenges in meeting the required parameters in a caesiated source at a source pressure of 0.3 Pa or less is the size scaling by a factor of eight. As an intermediate step, a ½ scale ITER source went into operation at the IPP test facility ELISE, with the first plasma in February 2013. The experience and results gained so far at ELISE allowed a size scaling study from the prototype source towards the ITER-relevant size at ELISE, in which operational issues, physical aspects and the source performance are addressed, highlighting differences as well as similarities. The most ITER-relevant results are: low pressure operation down to 0.2 Pa is possible without problems; the magnetic filter field created by a current in the plasma grid is sufficient to reduce the electron temperature below the target value of 1 eV and, together with the bias applied between the differently shaped bias plate and the plasma grid, to reduce the amount of co-extracted electrons. An asymmetry of the co-extracted electron currents in the two grid segments is measured, varying strongly with filter field and bias. Contrary to the prototype source, a pronounced plasma drift in the vertical direction is not observed. As in the prototype source, the performance in deuterium is limited by the amount of co-extracted electrons in short as well as in long pulse operation. Caesium conditioning is much harder in deuterium than in hydrogen, for which fast and reproducible conditioning is achieved. First estimates reveal a caesium consumption comparable to that of the prototype source despite the larger size.

  4. Cloud Tolerance of Remote-Sensing Technologies to Measure Land Surface Temperature

    NASA Technical Reports Server (NTRS)

    Holmes, Thomas R. H.; Hain, Christopher R.; Anderson, Martha C.; Crow, Wade T.

    2016-01-01

    Conventional methods to estimate land surface temperature (LST) from space rely on the thermal infrared (TIR) spectral window and are limited to cloud-free scenes. To also provide LST estimates during periods with clouds, a new method was developed to estimate LST based on passive microwave (MW) observations. The MW-LST product is informed by six polar-orbiting satellites to create a global record with up to eight observations per day for each 0.25° resolution grid box. For days with sufficient observations, a continuous diurnal temperature cycle (DTC) was fitted. The main characteristics of the DTC were scaled to match those of a geostationary TIR-LST product. This paper tests the cloud tolerance of the MW-LST product. In particular, we demonstrate its stable performance with respect to flux tower observation sites (four in Europe and nine in the United States), over a range of cloudiness conditions up to heavily overcast skies. The results show that TIR-based LST has slightly better performance than MW-LST for clear-sky observations but suffers an increasing negative bias as cloud cover increases. This negative bias is caused by incomplete masking of cloud-covered areas within the TIR scene that affects many applications of TIR-LST. In contrast, for MW-LST we find no direct impact of clouds on its accuracy and bias. MW-LST can therefore be used to improve TIR cloud screening. Moreover, the ability to provide LST estimates for cloud-covered surfaces can help expand current clear-sky-only satellite retrieval products to all-weather applications.

  5. Climate projections and extremes in dynamically downscaled CMIP5 model outputs over the Bengal delta: a quartile based bias-correction approach with new gridded data

    NASA Astrophysics Data System (ADS)

    Hasan, M. Alfi; Islam, A. K. M. Saiful; Akanda, Ali Shafqat

    2017-11-01

    In the era of global warming, insight into the future climate and its changing extremes is critical for climate-vulnerable regions of the world. In this study, we have conducted a robust assessment of Regional Climate Model (RCM) results in a monsoon-dominated region within the new Coupled Model Intercomparison Project Phase 5 (CMIP5) and the latest Representative Concentration Pathways (RCP) scenarios. We have applied an advanced bias correction approach to five RCM simulations in order to project future climate and associated extremes over Bangladesh, a critically climate-vulnerable country with a complex monsoon system. We have also generated a new gridded product that performed better in capturing observed climatic extremes than existing products. The bias-correction approach provided a notable improvement in capturing the precipitation extremes as well as mean climate. The majority of the RCM projections indicate an increase of rainfall, whereas one model shows contrary results during the 2080s (2071-2100) era. The multi-model mean shows that nighttime temperatures will increase much faster than daytime temperatures and the average annual temperatures are projected to be as hot as present-day summer temperatures. The expected increases of precipitation and temperature over the hilly areas are higher than in other parts of the country. Overall, the projected extremities of future rainfall are more variable than those of temperature. According to the majority of the models, the number of heavy rain days will increase in future years. The severity of summer-day temperatures will be alarming, especially over hilly regions, where winters are relatively warm. The projected rise of both precipitation and temperature extremes over the intense rainfall-prone northeastern region of the country creates a possibility of devastating flash floods with harmful impacts on agriculture. Moreover, the effect of bias correction, as presented in the probable changes of both bias-corrected and uncorrected extremes, can be considered in future policy making.
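
    The abstract does not spell out the bias-correction algorithm, but empirical quantile mapping is a standard building block for this kind of quantile/quartile-based correction of RCM rainfall against gridded observations. The sketch below shows that generic idea under assumed gamma-distributed daily rainfall; it is not the specific approach used in the paper.

```python
import numpy as np

def quantile_map(obs_hist, mod_hist, mod_future, n_quantiles=100):
    """Empirical quantile mapping: map each model value through the model's
    historical CDF and back out through the observed CDF."""
    q = np.linspace(0, 1, n_quantiles)
    obs_q = np.quantile(obs_hist, q)
    mod_q = np.quantile(mod_hist, q)
    # Position of each future value within the historical model distribution ...
    cdf_vals = np.interp(mod_future, mod_q, q)
    # ... mapped onto the observed distribution.
    return np.interp(cdf_vals, q, obs_q)

# Illustrative daily rainfall series (mm/day) with a wet bias in the model.
rng = np.random.default_rng(1)
obs_hist = rng.gamma(shape=0.8, scale=8.0, size=10_000)
mod_hist = rng.gamma(shape=0.8, scale=11.0, size=10_000)
mod_future = rng.gamma(shape=0.8, scale=12.0, size=10_000)

corrected = quantile_map(obs_hist, mod_hist, mod_future)
print("raw future mean: %.2f, corrected future mean: %.2f"
      % (mod_future.mean(), corrected.mean()))
```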

  6. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi-type smoother is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaptation cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.

  7. Specialized CFD Grid Generation Methods for Near-Field Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Campbell, Richard L.; Elmiligui, Alaa; Cliff, Susan E.; Nayani, Sudheer N.

    2014-01-01

    Ongoing interest in analysis and design of low sonic boom supersonic transports requires accurate and efficient Computational Fluid Dynamics (CFD) tools. Specialized grid generation techniques are employed to predict near-field acoustic signatures of these configurations. A fundamental examination of grid properties is performed including grid alignment with flow characteristics and element type. The issues affecting the robustness of cylindrical surface extrusion are illustrated. This study will compare three methods in the extrusion family of grid generation methods that produce grids aligned with the freestream Mach angle. These methods are applied to configurations from the First AIAA Sonic Boom Prediction Workshop.

  8. Fast and accurate grid representations for atom-based docking with partner flexibility.

    PubMed

    de Vries, Sjoerd J; Zacharias, Martin

    2017-06-30

    Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
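
    The grid half of the hybrid scheme relies on looking up a precalculated potential at arbitrary atom positions. The sketch below shows plain trilinear interpolation on a regular 3-D potential grid, which is the standard way such lookups are done; the array layout, spacing, and analytic test potential are assumptions for illustration, and this is not the ATTRACT implementation.

```python
import numpy as np

def trilinear(grid, origin, spacing, coords):
    """Trilinear interpolation of a precalculated potential grid.

    grid    : (nx, ny, nz) array of potential values
    origin  : (3,) coordinates of grid[0, 0, 0]
    spacing : scalar grid spacing
    coords  : (n_atoms, 3) positions assumed to lie strictly inside the grid
    """
    f = (np.asarray(coords) - origin) / spacing     # fractional grid coordinates
    i0 = np.floor(f).astype(int)
    t = f - i0                                      # weights within each cell
    vals = np.zeros(len(f))
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[:, 0] if dx else 1 - t[:, 0])
                     * (t[:, 1] if dy else 1 - t[:, 1])
                     * (t[:, 2] if dz else 1 - t[:, 2]))
                vals += w * grid[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    return vals

# Illustrative use: a smooth analytic "potential" sampled on a coarse grid.
ax = np.linspace(0.0, 10.0, 21)                     # 21 points, spacing 0.5
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
potential = np.sin(X) * np.cos(Y) + 0.1 * Z
atoms = np.array([[1.3, 4.7, 2.2], [8.1, 0.5, 9.0]])
print(trilinear(potential, origin=np.zeros(3), spacing=0.5, coords=atoms))
```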

  9. Reducing the Impact of Sampling Bias in NASA MODIS and VIIRS Level 3 Satellite Derived IR SST Observations over the Arctic

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Liu, Y.; Kilpatrick, K. A.

    2016-12-01

    Sea-surface temperature (SST) measurements by satellites in the northern hemisphere high latitudes confront several difficulties. Year-round prevalent clouds, effects near ice edges, and the relatively small difference between SST and low-level cloud temperatures lead to a significant loss of infrared observations despite the more frequent polar satellite overpasses. Recent research (Liu and Minnett, 2016) identified sampling issues in the Level 3 NASA MODIS SST products when 4 km observations are aggregated into global grids at different time and space scales, particularly in the Arctic, where a binary decision cloud mask designed for global data is often overly conservative at high latitudes and results in many gaps and missing data. This undersampling of some Arctic regions results in a warm bias in Level 3 products, likely because warmer surface temperatures, more distant from the ice edge, are identified more frequently as cloud free. Here we present an improved method for cloud detection in the Arctic using a majority vote from an ensemble of four classifiers trained with an Alternating Decision Tree (ADT) algorithm (Freund and Mason 1999, Pfahringer et al. 2001). This new cloud classifier increases sampling of clear pixels by 50% in several regions and generally produces cooler monthly average SST fields in the ice-free Arctic, while still retaining the same error characteristics at 1 km resolution relative to in situ observations. SST time series of 12 years of MODIS (Aqua and Terra) and, more recently, VIIRS sensors are compared, and the improvements in errors and uncertainties resulting from better cloud screening for Level 3 gridded products are assessed and summarized.

  10. Evaluation of climatic changes in South-Asia

    NASA Astrophysics Data System (ADS)

    Kjellstrom, Erik; Rana, Arun; Grigory, Nikulin; Renate, Wilcke; Hansson, Ulf; Kolax, Michael

    2016-04-01

    The literature provides ample evidence of climate change impacts all over the world and across various sectors. In light of new advancements in climate modeling, the availability of several climate downscaling approaches, and more robust bias-correction methods with varying complexities and strengths, the present study performs a systematic evaluation of climate change impacts over the South-Asia region. We have used different Regional Climate Models (RCMs) (from the CORDEX domain), Global Climate Models (GCMs), and gridded observations for the study area to evaluate the models in the historical/control period (1980-2010) and the changes in the future period (2010-2099). Firstly, GCMs and RCMs are evaluated against the gridded observational datasets in the area using precipitation and temperature as indicative variables. The observational datasets are also evaluated against a reliable reference observational dataset, as pointed out in the literature. Bias, correlation, and changes (among other statistical measures) are calculated for the entire region and both variables. The region was then sub-divided into smaller domains based on homogeneous precipitation zones to evaluate the average changes over the time period. Spatial and temporal changes for the region are finally calculated to evaluate the future changes in the region. Future changes are calculated for two Representative Concentration Pathways (RCPs), the middle emission (RCP4.5) and high emission (RCP8.5) scenarios, and for both climatic variables, precipitation and temperature. Lastly, an evaluation of extremes is performed based on precipitation- and temperature-based indices for the whole region in the future dataset. Results indicate that the whole study region is under extreme stress in future climate scenarios for both climatic variables, i.e. precipitation and temperature. Precipitation variability depends on location within the area, leading to droughts and floods in various regions in the future. Temperature shows a consistent increase throughout the region regardless of location.

  11. The utility of satellite precipitation products for hydrologic prediction in topographically complex regions: The Chehalis River Basin, WA as a case study

    NASA Astrophysics Data System (ADS)

    Cao, Q.; Mehran, A.; Lettenmaier, D. P.; Mass, C.; Johnson, N.

    2015-12-01

    Accurate measurements of precipitation are of great importance in hydrologic predictions, especially for floods, which are a pervasive natural hazard. One of the primary objectives of the Global Precipitation Measurement (GPM) mission is to provide a basis for hydrologic predictions using satellite sensors. A major advance in GPM relative to the Tropical Rainfall Measuring Mission (TRMM) is that it observes atmospheric river (AR) events, most of which make landfall too far north to be tracked by TRMM. These events are responsible for most major floods along the U.S. West Coast. We address the question of whether, for hydrologic modeling purposes, it is better to use precipitation products derived directly from GPM and/or other precipitation fields from weather models that have assimilated satellite data. Our overall strategy is to compare different methods for prediction of flood and/or high flow events using different forcings on the hydrologic model. We examine four different configurations of the Distributed Hydrology Soil Vegetation Model (DHSVM) over the Chehalis River Basin that use (a) precipitation forcings based on gridded station data; (b) precipitation forcings based on NWS WSR-88D radar data; (c) forcings based on short-term precipitation forecasts from the Weather Research and Forecasting (WRF) mesoscale atmospheric model; and (d) satellite-based precipitation estimates (TMPA and IMERG). We find that, in general, biases in the radar and satellite products result in much larger errors than with either gridded station data or WRF forcings, but if these biases are removed, comparable performance in flood predictions can be achieved by satellite-based precipitation estimates (TMPA and IMERG).

  12. Assessment of terrestrial water contributions to polar motion from GRACE and hydrological models

    NASA Astrophysics Data System (ADS)

    Jin, S. G.; Hassan, A. A.; Feng, G. P.

    2012-12-01

    The hydrological contribution to polar motion is a major challenge in explaining the observed geodetic residual of non-atmospheric and non-oceanic excitations since hydrological models have limited input of comprehensive global direct observations. Although global terrestrial water storage (TWS) estimated from the Gravity Recovery and Climate Experiment (GRACE) provides a new opportunity to study the hydrological excitation of polar motion, the GRACE gridded data are subject to the post-processing de-striping algorithm, spatial gridded mapping and filter smoothing effects as well as aliasing errors. In this paper, the hydrological contributions to polar motion are investigated and evaluated at seasonal and intra-seasonal time scales using the recovered degree-2 harmonic coefficients from all GRACE spherical harmonic coefficients and hydrological models data with the same filter smoothing and recovering methods, including the Global Land Data Assimilation Systems (GLDAS) model, Climate Prediction Center (CPC) model, the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis products and European Center for Medium-Range Weather Forecasts (ECMWF) operational model (opECMWF). It is shown that GRACE is better in explaining the geodetic residual of non-atmospheric and non-oceanic polar motion excitations at the annual period, while the models give worse estimates with a larger phase shift or amplitude bias. At the semi-annual period, the GRACE estimates are also generally closer to the geodetic residual, but with some biases in phase or amplitude due mainly to some aliasing errors at near semi-annual period from geophysical models. For periods less than 1-year, the hydrological models and GRACE are generally worse in explaining the intraseasonal polar motion excitations.

  13. Cathode-less gridded ion thrusters for small satellites

    NASA Astrophysics Data System (ADS)

    Aanesland, Ane

    2016-10-01

    Electric space propulsion is now a mature technology for commercial satellites and space missions that require thrust on the order of hundreds of mN, with available electric power on the order of kW. Developing electric propulsion for SmallSats (1 to 500 kg satellites) is challenging due to the small space and limited available electric power (in the worst case close to 10 W). One of the challenges in downscaling ion and Hall thrusters is the need to neutralize the positive ion beam to prevent beam stalling. This neutralization is achieved by feeding electrons into the downstream space. In most cases hollow cathodes are used for this purpose, but they are fragile and difficult to implement, and in particular for small systems they are difficult to downscale, both in size and electron current. We describe here a new alternative ion thruster that can provide thrust and specific impulse suitable for mission control of satellites as small as 3 kg. The originality of our thruster lies in the acceleration principles and propellant handling. Continuous ion acceleration is achieved by biasing a set of grids with radio frequency (RF) voltages via a blocking capacitor. Due to the different mobility of ions and electrons, the blocking capacitor charges up and rectifies the RF voltage. Thus, the ions are accelerated by the self-bias DC voltage. Moreover, due to the RF oscillations, the electrons escape the thruster across the grids during brief instants in the RF period, ensuring a full space charge neutralization of the positive ion beam. Due to the RF nature of this system, the space charge limited current increases by almost a factor of 2 compared to classical DC biased grids, which translates into a specific thrust two times higher than for a similar DC system. This new thruster is called Neptune and operates with only one RF power supply for plasma generation, ion acceleration and electron neutralization. We will present the downscaling of this thruster to a 3 cm diameter unit well adapted for a CubeSat or SmallSat mission. This work was supported by Agence Nationale de la Recherche under contract ANR-11-IDEX-0004-02 (Plas@Par) and by SATT Paris-Saclay.

  14. Mercury ion thruster research, 1978

    NASA Technical Reports Server (NTRS)

    Wilbur, P. J.

    1978-01-01

    The effects of 8 cm thruster main and neutralizer cathode operating conditions on cathode orifice plate temperatures were studied. The effects of cathode operating conditions on insert temperature profiles and keeper voltages are presented for three different types of inserts. The bulk of the emission current is generally observed to come from the downstream end of the insert rather than from the cathode orifice plate. Results of a test in which the screen grid plasma sheath of a thruster was probed as the beam current was varied are shown. Grid performance obtained with a grid machined from glass ceramic is discussed. The effects of copper and nitrogen impurities on the sputtering rates of thruster materials are measured experimentally and a model describing the rate of nitrogen chemisorption on materials in either the beam or the discharge chamber is presented. The results of optimization of a radial field thruster design are presented. Performance of this device is shown to be comparable to that of a divergent field thruster and efficient operation with the screen grid biased to floating potential, where its susceptibility to sputter erosion damage is reduced, is demonstrated.

  15. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
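
    The two-phase decomposition described in the abstract can be illustrated with a small sketch: each worker first classifies its own subset of objects by the grid portion(s) that bound them, and each worker then populates its own grid portion. The 1-D interval "objects", the equal-width portions, and the use of Python's ProcessPoolExecutor are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Hypothetical set-up: the "grid" is a 1-D range split into N_PROC equal
# portions and each "object" is an interval; an object belongs to every
# portion it overlaps. Names and shapes are illustrative only.
N_PROC = 4
GRID_MIN, GRID_MAX = 0.0, 100.0
EDGES = np.linspace(GRID_MIN, GRID_MAX, N_PROC + 1)

def classify(objects):
    """Phase 1: for one processor's object set, find which grid portion(s)
    at least partially bound each object."""
    pairs = []
    for lo, hi in objects:
        first = int(np.searchsorted(EDGES, lo, side="right")) - 1
        last = int(np.searchsorted(EDGES, hi, side="left")) - 1
        for p in range(max(first, 0), min(last, N_PROC - 1) + 1):
            pairs.append((p, (lo, hi)))
    return pairs

def populate(args):
    """Phase 2: one processor populates its own grid portion."""
    portion_id, objects = args
    return portion_id, sorted(objects)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    starts = rng.uniform(GRID_MIN, GRID_MAX - 5.0, size=40)
    objects = list(zip(starts, starts + rng.uniform(0.1, 5.0, size=40)))
    object_sets = [objects[i::N_PROC] for i in range(N_PROC)]   # n distinct sets

    with ProcessPoolExecutor(max_workers=N_PROC) as pool:
        # Phase 1: each processor classifies its distinct set of objects.
        classified = [p for chunk in pool.map(classify, object_sets) for p in chunk]
        # Regroup by portion, then phase 2: each processor fills its own portion.
        per_portion = [(i, [o for p, o in classified if p == i]) for i in range(N_PROC)]
        populated = dict(pool.map(populate, per_portion))

    print({portion: len(objs) for portion, objs in populated.items()})
```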

  16. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.

  17. Development of a Global Fire Weather Database

    NASA Technical Reports Server (NTRS)

    Field, R. D.; Spessa, A. C.; Aziz, N. A.; Camia, A.; Cantin, A.; Carr, R.; de Groot, W. J.; Dowdy, A. J.; Flannigan, M. D.; Manomaiphiboon, K.; hide

    2015-01-01

    The Canadian Forest Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern Era Retrospective-Analysis for Research and Applications (MERRA), and two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code calculations from the gridded data sets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia and Australia. Gridded and station-based calculations tended to differ most at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA DC over the Mato Grosso in Brazil reached unrealistically high values exceeding 1500 during the dry season but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation, and they reinforce the need to consider alternative sources of precipitation data. GFWED can be used for analyzing historical relationships between fire weather and fire activity at continental and global scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models.

  18. Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.

  19. The Sensitivity of Atlantic Meridional Overturning Circulation to Dynamical Framework in an Ocean General Circulation Model

    NASA Astrophysics Data System (ADS)

    Li, X.; Yu, Y.

    2016-12-01

    The horizontal coordinate systems commonly used in global ocean models are the spherical latitude-longitude grid and displaced-pole grids such as the tripolar grid. The effect of the horizontal coordinate system on the Atlantic Meridional Overturning Circulation (AMOC) is evaluated using an oceanic general circulation model (OGCM). Two experiments are conducted with the model using a latitude-longitude grid (Lat_1) and a tripolar grid (Tri). Results show that Tri simulates a stronger NADW than Lat_1, as more saline water masses enter the GIN Seas in Tri. Two reasons can account for the stronger NADW. One is the removal of the zonal filter in Tri, which increases the zonal gradients of temperature and salinity and thus strengthens the northward geostrophic flow. In turn, this decreases the positive subsurface temperature and salinity biases in the subtropical regions. The other may be associated with topography at the North Pole, because realistic topography is applied in the tripolar grid, whereas the latitude-longitude grid employs an artificial island around the North Pole. In order to evaluate the effect of the filter on the AMOC, three enhanced-filter experiments are carried out. Compared to Lat_1, an enhanced filter can also increase the NADW, because more saline water is prevented from moving north and accumulates in the Labrador Sea, especially in the experiment with an enhanced filter on salinity (Lat_2_S).

  20. A comparison of consumptive-use estimates derived from the simplified surface energy balance approach and indirect reporting methods

    USGS Publications Warehouse

    Maupin, Molly A.; Senay, Gabriel B.; Kenny, Joan F.; Savoca, Mark E.

    2012-01-01

    Recent advances in remote-sensing technology and Simplified Surface Energy Balance (SSEB) methods can provide accurate and repeatable estimates of evapotranspiration (ET) when used with satellite observations of irrigated lands. Estimates of ET are generally considered equivalent to consumptive use (CU) because they represent the part of applied irrigation water that is evaporated, transpired, or otherwise not available for immediate reuse. The U.S. Geological Survey compared ET estimates from SSEB methods to CU data collected for 1995 using indirect methods as part of the National Water Use Information Program (NWUIP). Ten-year (2000-2009) average ET estimates from SSEB methods were derived using Moderate Resolution Imaging Spectroradiometer (MODIS) 1-kilometer satellite land surface temperature and gridded weather datasets from the Global Data Assimilation System (GDAS). County-level CU estimates for 1995 were assembled and referenced to 1-kilometer grid cells to synchronize with the SSEB ET estimates. Both datasets were seasonally and spatially weighted to represent the irrigation season (June-September) and those lands that were identified in the county as irrigated. A strong relation (R2 greater than 0.7) was determined between NWUIP CU and SSEB ET data. Regionally, the relation is stronger in arid western states than in humid eastern states, and positive and negative biases are both present at state-level comparisons. SSEB ET estimates can play a major role in monitoring and updating county-based CU estimates by providing a quick and cost-effective method to detect major year-to-year changes at county levels, as well as providing a means to disaggregate county-based ET estimates to sub-county levels. More research is needed to identify the causes for differences in state-based relations.

  1. Unstructured Grids for Sonic Boom Analysis and Design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Nayani, Sudheer N.

    2015-01-01

    An evaluation of two methods for improving the process for generating unstructured CFD grids for sonic boom analysis and design has been conducted. The process involves two steps: the generation of an inner core grid using a conventional unstructured grid generator such as VGRID, followed by the extrusion of a sheared and stretched collar grid through the outer boundary of the core grid. The first method evaluated, known as COB, automatically creates a cylindrical outer boundary definition for use in VGRID that makes the extrusion process more robust. The second method, BG, generates the collar grid by extrusion in a very efficient manner. Parametric studies have been carried out and new options evaluated for each of these codes with the goal of establishing guidelines for best practices for maintaining boom signature accuracy with as small a grid as possible. In addition, a preliminary investigation examining the use of the CDISC design method for reducing sonic boom utilizing these grids was conducted, with initial results confirming the feasibility of a new remote design approach.

  2. Multiprocessor computer overset grid method and apparatus

    DOEpatents

    Barnette, Daniel W.; Ober, Curtis C.

    2003-01-01

    A multiprocessor computer overset grid method and apparatus comprises associating points in each overset grid with processors and using mapped interpolation transformations to communicate intermediate values between processors assigned base and target points of the interpolation transformations. The method allows a multiprocessor computer to operate with effective load balance on overset grid applications.

  3. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

    This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will be valid in a future climate. Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.

  4. An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Lessard, Victor R.

    1990-01-01

    The objective is to develop a domain decomposition method via overlapping/embedding the component grids, which is to be used by upwind, multi-grid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by the finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Time fluxes are transferred between mesh interfaces using a trilinear interpolation procedure. Conservation losses are minimal at the interfaces using this method. The multi-grid solution algorithm, using the coarser grid connections, improves the convergence time history as compared to the solution on the composite mesh without multi-gridding.

  5. On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets

    NASA Astrophysics Data System (ADS)

    Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.

    2001-05-01

    There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relative limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetrical compilations that are based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations and utilizing data sets not only with a heterogeneous cover but also with a wide range of accuracies. In combining these data to regularly spaced grids of bathymetric values, which the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach the problem of assessing the confidence via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available meta-data, and when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final products of the estimation are a collection of standard error grids, which we combine with the source data density in order to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
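
    The direct-simulation Monte Carlo procedure sketched in the abstract can be illustrated compactly: perturb the soundings with normally distributed errors scaled by their a priori sigmas, re-grid each synthetic dataset with the same interpolator, and take the per-cell spread of the realizations as the standard-error grid. The toy soundings, the error magnitudes, and the use of scipy's griddata in place of the IBCAO gridding methodology are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Toy "soundings": sparse depths with a priori standard errors (illustrative).
n_obs = 300
xy = rng.uniform(0.0, 100.0, size=(n_obs, 2))
depth = 500 + 50 * np.sin(xy[:, 0] / 15) + 30 * np.cos(xy[:, 1] / 20)
sigma = rng.uniform(2.0, 25.0, size=n_obs)      # larger sigmas stand in for data without metadata

# Target grid on which the compilation is built.
gx, gy = np.meshgrid(np.linspace(5, 95, 90), np.linspace(5, 95, 90))

realizations = []
for _ in range(50):                              # number of synthetic datasets
    perturbed = depth + rng.normal(scale=sigma)  # perturb by the predicted error model
    realizations.append(griddata(xy, perturbed, (gx, gy), method="linear"))

std_grid = np.nanstd(np.array(realizations), axis=0)   # standard-error grid
print("median predicted standard error:", float(np.nanmedian(std_grid)))
```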

  6. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 1: Theory and method

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Bailey, R. T.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coon's interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. This technical memorandum describes the theory and method used in GRID2D/3D.
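
    The core algebraic step in this family of grid generators is transfinite interpolation from boundary curves. The sketch below builds a 2-D structured grid from four parametrized boundary curves using the standard boolean-sum form; the boundary definitions are illustrative, and the stretching functions and spline boundary descriptions of GRID2D/3D are not reproduced.

```python
import numpy as np

def tfi_2d(bottom, top, left, right, ni, nj):
    """2-D transfinite interpolation from four parametrized boundary curves.

    bottom(s), top(s), left(t), right(t) each return an (x, y) point for a
    parameter in [0, 1]; the curves must agree at the four corners."""
    s = np.linspace(0, 1, ni)[:, None, None]     # shape (ni, 1, 1)
    t = np.linspace(0, 1, nj)[None, :, None]     # shape (1, nj, 1)
    B = np.array([bottom(v) for v in s.ravel()])[:, None, :]
    T = np.array([top(v) for v in s.ravel()])[:, None, :]
    L = np.array([left(v) for v in t.ravel()])[None, :, :]
    R = np.array([right(v) for v in t.ravel()])[None, :, :]
    c00, c10 = np.asarray(bottom(0.0)), np.asarray(bottom(1.0))
    c01, c11 = np.asarray(top(0.0)), np.asarray(top(1.0))
    # Boolean sum: edge interpolants minus the doubly counted corner term.
    return ((1 - t) * B + t * T + (1 - s) * L + s * R
            - ((1 - s) * (1 - t) * c00 + s * (1 - t) * c10
               + (1 - s) * t * c01 + s * t * c11))      # shape (ni, nj, 2)

# Illustrative domain: straight sides with a gently bumped lower boundary.
grid = tfi_2d(bottom=lambda u: (u, 0.1 * np.sin(np.pi * u)),
              top=lambda u: (u, 1.0),
              left=lambda v: (0.0, v),
              right=lambda v: (1.0, v),
              ni=21, nj=11)
print(grid.shape, grid[10, 0], grid[10, 10])     # midpoints of bottom and top edges
```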

  7. Validation of the continuous glucose monitoring sensor in preterm infants.

    PubMed

    Beardsall, K; Vanhaesebrouck, S; Ogilvy-Stuart, A L; Vanhole, C; VanWeissenbruch, M; Midgley, P; Thio, M; Cornette, L; Ossuetta, I; Palmer, C R; Iglesias, I; de Jong, M; Gill, B; de Zegher, F; Dunger, D B

    2013-03-01

    Recent studies have highlighted the need for improved methods of monitoring glucose control in intensive care to reduce hyperglycaemia, without increasing the risk of hypoglycaemia. Continuous glucose monitoring is increasingly used in children with diabetes, but there are few data regarding its use in the preterm infant, particularly at extremes of glucose levels and over prolonged periods. This study aimed to assess the accuracy of the continuous glucose monitoring sensor (CGMS) across the glucose profile, and to determine whether there was any deterioration over a 7 day period. Prospectively collected CGMS data from the NIRTURE Trial were compared with the data obtained simultaneously using point of care glucose monitors. An international multicentre randomised controlled trial. One hundred and eighty-eight very low birth weight control infants. Optimal accuracy, performance goals (American Diabetes Association consensus), Bland-Altman, Error Grid analyses and accuracy. The mean (SD) duration of CGMS recordings was 156.18 (29) h (6.5 days), with a total of 5207 paired glucose levels. CGMS data correlated well with point of care devices (r=0.94), with minimal bias. It met the Clarke Error Grid and Consensus Grid criteria for clinical significance. Accuracy of single readings to detect set thresholds of hypoglycaemia or hyperglycaemia was poor. There was no deterioration over time from insertion. CGMS can provide information on trends in glucose control, and guidance on the need for blood glucose assessment. This highlights the potential use of CGMS in optimising glucose control in preterm infants.

  8. BOND: A quantum of solace for nebular abundance determinations

    NASA Astrophysics Data System (ADS)

    Vale Asari, N.; Stasińska, G.; Morisset, C.; Cid Fernandes, R.

    2017-11-01

    The abundances of chemical elements other than hydrogen and helium in a galaxy are the fossil record of its star formation history. Empirical relations such as mass-metallicity relation are thus seen as guides for studies on the history and chemical evolution of galaxies. Those relations usually rely on nebular metallicities measured with strong-line methods, which assume that H II regions are a one- (or at most two-) parameter family where the oxygen abundance is the driving quantity. Nature is however much more complex than that, and metallicities from strong lines may be strongly biased. We have developed the method BOND (Bayesian Oxygen and Nitrogen abundance Determinations) to simultaneously derive oxygen and nitrogen abundances in giant H II regions by comparing strong and semi-strong observed emission lines to a carefully-defined, finely-meshed grid of photoionization models. Our code and results are public and available at http://bond.ufsc.br.

  9. A Linear Bicharacteristic FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2001-01-01

    The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics [1]-[7]. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility by the ability to adopt a characteristic based method. The use of characteristic variables allows the LBS to treat the outer computational boundaries naturally using the exact compatibility equations. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much easier to nonuniform grids. It has previously been applied to two and three-dimensional freespace electromagnetic propagation and scattering problems [3], [6], [7]. This paper extends the LBS to model lossy dielectric and magnetic materials. Results are presented for several one-dimensional model problems, and the FDTD algorithm is chosen as a convenient reference for comparison.

  10. LES study of microphysical variability bias in shallow cumulus

    NASA Astrophysics Data System (ADS)

    Kogan, Yefim

    2017-05-01

    Subgrid-scale (SGS) variability of cloud microphysical variables at the grid scale of a mesoscale numerical weather prediction (NWP) model has been evaluated by means of joint probability distribution functions (JPDFs). The latter were obtained using a dynamically balanced Large Eddy Simulation (LES) model dataset from a case of marine trade cumulus initialized with soundings from the Rain in Cumulus Over the Ocean (RICO) field project. Bias in autoconversion and accretion rates from different formulations of the JPDFs was analyzed. Approximating the 2-D PDF using a generic (fixed-in-time) but variable-in-height JPDF gives an acceptable level of accuracy, whereas neglecting the SGS variability altogether results in a substantial underestimate of the grid-mean total conversion rate and produces a negative bias in rain water. Nevertheless, the total effect on rain formation may be uncertain in the long run, because the negative bias in rain water may be counterbalanced by a positive bias in cloud water. Consequently, the overall effect of neglecting SGS variability needs to be investigated in direct simulations with an NWP model.
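
    The kind of bias analyzed here arises because autoconversion is a nonlinear function of cloud water and droplet number, so the rate evaluated at the grid-box means differs from the mean of the local rates. The short demonstration below makes that point with a Khairoutdinov-Kolmogorov-style power-law rate and an assumed lognormal subgrid distribution; the distribution parameters are illustrative and are not taken from the RICO LES dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Autoconversion rate of Khairoutdinov-Kolmogorov form, A = c * qc**a * Nc**b
# (exponents and prefactor quoted from that scheme, used here only to
# illustrate the nonlinearity; they are not fitted to the RICO case).
a, b, c = 2.47, -1.79, 1350.0

# Assumed lognormal subgrid distributions of cloud water qc (kg/kg) and
# droplet number Nc (1/cm3) within a single NWP grid box.
qc = rng.lognormal(mean=np.log(3e-4), sigma=0.8, size=100_000)
nc = rng.lognormal(mean=np.log(70.0), sigma=0.3, size=100_000)

rate_resolved = np.mean(c * qc ** a * nc ** b)          # mean of the local rates
rate_gridmean = c * qc.mean() ** a * nc.mean() ** b     # rate of the grid-box means

print("mean of local rates :", rate_resolved)
print("rate of grid means  :", rate_gridmean)
print("relative bias from neglecting SGS variability: %.0f%%"
      % (100.0 * (rate_gridmean / rate_resolved - 1.0)))
```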

  11. Probabilistic regional climate projection in Japan using a regression model with CMIP5 multi-model ensemble experiments

    NASA Astrophysics Data System (ADS)

    Ishizaki, N. N.; Dairaku, K.; Ueno, G.

    2016-12-01

    We have developed a statistical downscaling method for estimating probabilistic climate projections using multiple CMIP5 general circulation models (GCMs). A regression model was established so that the combination of GCM weights reflects the characteristics of the variation of observations at each grid point. Cross validations were conducted to select GCMs and to evaluate the regression model while avoiding multicollinearity. Using a spatially high-resolution observation system, we produced statistically downscaled probabilistic climate projections with 20-km horizontal grid spacing. Root mean squared errors for monthly mean surface air temperature and precipitation estimated by the regression method were the smallest compared with the results derived from a simple ensemble mean of GCMs and a cumulative distribution function based bias-correction method. Projected changes in mean temperature and precipitation were basically similar to those of the simple ensemble mean of GCMs. Mean precipitation is generally projected to increase, associated with increased temperature and the consequent increased moisture content of the air. Weakening of the winter monsoon may contribute to precipitation decreases in some areas. A temperature increase in excess of 4 K is expected in most areas of Japan by the end of the 21st century under the RCP8.5 scenario. The estimated probability of monthly precipitation exceeding 300 mm would increase on the Pacific side during the summer and on the Japan Sea side during the winter season. This probabilistic climate projection based on the statistical method can be expected to provide useful information for impact studies and risk assessments.

  12. Implementation of perfectly matched layers in an arbitrary geometrical boundary for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Gao, Hongwei; Zhang, Jianfeng

    2008-09-01

    The perfectly matched layer (PML) absorbing boundary condition is incorporated into an irregular-grid elastic-wave modelling scheme, thus resulting in an irregular-grid PML method. We develop the irregular-grid PML method using local-coordinate-system-based PML splitting equations and an integral formulation of the PML equations. The irregular-grid PML method is implemented under a discretization of triangular grid cells, which has the ability to absorb incident waves in arbitrary directions. This allows the PML absorbing layer to be imposed along arbitrary geometrical boundaries. As a result, the computational domain can be constructed with fewer nodes, for instance, by representing the 2-D half-space by a semi-circle rather than a rectangle. By using a smooth artificial boundary, the irregular-grid PML method can also avoid the special treatments of the corners, which lead to complex computer implementations in the conventional PML method. We implement the irregular-grid PML method in both 2-D elastic isotropic and anisotropic media. The numerical simulations of a VTI Lamb's problem, wave propagation in an isotropic elastic medium with a curved surface, and wave propagation in a TTI medium demonstrate the good behaviour of the irregular-grid PML method.

  13. A parametric approach for simultaneous bias correction and high-resolution downscaling of climate model rainfall

    NASA Astrophysics Data System (ADS)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto; Marrocu, Marino

    2017-03-01

    Distribution mapping has been identified as the most efficient approach to bias-correct climate model rainfall, while reproducing its statistics at spatial and temporal resolutions suitable to run hydrologic models. Yet its implementation based on empirical distributions derived from control samples (referred to as nonparametric distribution mapping) makes the method's performance sensitive to sample length variations, the presence of outliers, and the spatial resolution of climate model results, and may lead to biases, especially in extreme rainfall estimation. To address these shortcomings, we propose a methodology for simultaneous bias correction and high-resolution downscaling of climate model rainfall products that uses: (a) a two-component theoretical distribution model (i.e., a generalized Pareto (GP) model for rainfall intensities above a specified threshold u*, and an exponential model for lower rain rates), and (b) proper interpolation of the corresponding distribution parameters on a user-defined high-resolution grid, using kriging for uncertain data. We assess the performance of the suggested parametric approach relative to the nonparametric one, using daily raingauge measurements from a dense network on the island of Sardinia (Italy), and rainfall data from four GCM/RCM model chains of the ENSEMBLES project. The obtained results shed light on the competitive advantages of the parametric approach, which proves more accurate and considerably less sensitive to the characteristics of the calibration period, independent of the GCM/RCM combination used. This is especially the case for extreme rainfall estimation, where the GP assumption allows for more accurate and robust estimates, also beyond the range of the available data.
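
    A minimal sketch of the two-component model at a single grid cell is given below: an exponential body fitted to intensities at or below a threshold u* and a generalized Pareto tail fitted to the excesses above it, combined into one CDF. The threshold choice, the synthetic rainfall sample, and the simple moment fit of the body are assumptions for illustration; the paper's parameter interpolation by kriging is not shown.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Illustrative wet-day rainfall intensities at one grid cell (mm/day).
rain = rng.gamma(shape=0.7, scale=12.0, size=5000)
u_star = np.quantile(rain, 0.95)                 # threshold choice is an assumption

body = rain[rain <= u_star]
excess = rain[rain > u_star] - u_star
p_exc = excess.size / rain.size                  # exceedance probability

# Exponential body (simple moment fit) and generalized Pareto tail.
scale_exp = body.mean()
xi, _, beta = stats.genpareto.fit(excess, floc=0.0)

def model_cdf(x):
    """Two-component CDF: exponential below u*, GP excesses above it."""
    x = np.asarray(x, dtype=float)
    below = (1 - p_exc) * (stats.expon.cdf(x, scale=scale_exp)
                           / stats.expon.cdf(u_star, scale=scale_exp))
    above = (1 - p_exc) + p_exc * stats.genpareto.cdf(x - u_star, xi, loc=0.0, scale=beta)
    return np.where(x <= u_star, below, above)

for q in (10.0, 50.0, 150.0):
    print(f"P(X <= {q:5.1f}): empirical {np.mean(rain <= q):.3f}, "
          f"model {float(model_cdf(q)):.3f}")
```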

  14. Electromagnetic radiation detector

    DOEpatents

    Benson, Jay L.; Hansen, Gordon J.

    1976-01-01

    An electromagnetic radiation detector including a collimating window, a cathode member having a photoelectric emissive material surface angularly disposed to said window whereby radiation impinges thereon at acute angles, an anode, separated from the cathode member by an evacuated space, for collecting photoelectrons emitted from the emissive cathode surface, and a negatively biased, highly transmissive grid disposed between the cathode member and anode.

  15. Regional Data Assimilation of AIRS Profiles and Radiances at the SPoRT Center

    NASA Technical Reports Server (NTRS)

    Zavodsky, Brad; Chou, Shih-hung; Jedlovec, Gary

    2009-01-01

    This slide presentation reviews the Short Term Prediction Research and Transition (SPoRT) Center's mission to improve short-term weather prediction at the regional and local scale. It includes information on the cold bias in the Weather Research and Forecasting (WRF) model, tropospheric soundings from the Atmospheric Infrared Sounder (AIRS), and the vertical resolution of the analysis grid.

  16. Addressing extreme precipitation change under future climates in the Upper Yangtze River Basin

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Yuan, Z.; Gao, X.

    2017-12-01

    Accurately investigating the impact of climate change on extreme precipitation is important for application purposes such as flood mitigation and urban drainage system design. In this paper, a systematic analysis framework for assessing the impact of climate change on extreme precipitation events is developed and applied to the Upper Yangtze River Basin (UYRB) in China. Firstly, the UYRB is gridded and five extreme precipitation indices (annual maximum 3-, 5-, 7-, 15-, and 30-day precipitation) are selected. Secondly, using observed precipitation from China's Ground Precipitation 0.5°×0.5° Gridded Dataset (V2.0) and simulated daily precipitation from ten general circulation models (GCMs) of CMIP5, a regionally efficient GCM is selected for each grid cell by the skill score (SS) method, which maximizes the overlapping area of the probability density functions of the extreme precipitation indices between observations and simulations during the historical period. Then, simulations from the assembled efficient GCMs are bias corrected with the equidistant cumulative distribution function method. Finally, the impact of climate change on extreme precipitation is analyzed. The results show that: (1) MRI-CGCM3 and MIROC-ESM perform best in the UYRB; they are selected as the regionally efficient GCM in 19.8-20.9% and 14.2-18.7% of all grid cells for the five indices, respectively, and the regionally efficient GCMs vary spatially. (2) The assembled GCM performs much better than any single GCM, with SS > 0.8 and SS > 0.6 in more than 65% and 85% of grid cells, respectively. (3) Under the RCP4.5 scenario, extreme precipitation with 50-year and 100-year return periods is projected to increase in most areas of the UYRB in the future period, with 55.0-61.3% of the UYRB showing increases larger than 10% for the five indices. The changes vary in space and time: the upstream region of the UYRB shows a relatively larger increase than the downstream basin, and the increases in annual maximum 5- and 7-day precipitation are more pronounced than those of the other indices. The results demonstrate the impact of climate change on extreme precipitation in the UYRB and provide support for water resources management in this area.
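    A minimal Python sketch of the PDF-overlap skill score used for the per-grid-cell GCM selection described above; the bin count and variable names are illustrative assumptions.

      import numpy as np

      def skill_score(obs, sim, bins=20):
          """Overlap area of the empirical PDFs of an extreme-precipitation index.

          obs, sim : annual-maximum N-day precipitation at one grid cell over the
          historical period; SS = 1 for identical distributions, 0 for disjoint ones.
          """
          edges = np.histogram_bin_edges(np.concatenate([obs, sim]), bins=bins)
          p_obs, _ = np.histogram(obs, bins=edges, density=True)
          p_sim, _ = np.histogram(sim, bins=edges, density=True)
          return np.sum(np.minimum(p_obs, p_sim) * np.diff(edges))

      # the GCM with the largest SS at a grid cell would be retained as that
      # cell's regionally efficient model (illustrative selection loop not shown)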

  17. A kinetic Monte Carlo model with improved charge injection model for the photocurrent characteristics of organic solar cells

    NASA Astrophysics Data System (ADS)

    Kipp, Dylan; Ganesan, Venkat

    2013-06-01

    We develop a kinetic Monte Carlo model for photocurrent generation in organic solar cells that demonstrates improved agreement with experimental illuminated and dark current-voltage curves. In our model, we introduce a charge injection rate prefactor to correct for the electrode grid-size and electrode charge density biases apparent in the coarse-grained approximation of the electrode as a grid of single occupancy, charge-injecting reservoirs. We use the charge injection rate prefactor to control the portion of dark current attributed to each of four kinds of charge injection. By shifting the dark current between electrode-polymer pairs, we align the injection timescales and expand the applicability of the method to accommodate ohmic energy barriers. We consider the device characteristics of the ITO/PEDOT/PSS:PPDI:PBTT:Al system and demonstrate the manner in which our model captures the device charge densities unique to systems with small injection energy barriers. To elucidate the defining characteristics of our model, we first demonstrate the manner in which charge accumulation and band bending affect the shape and placement of the various current-voltage regimes. We then discuss the influence of various model parameters upon the current-voltage characteristics.

  18. Practical estimate of gradient nonlinearity for implementation of apparent diffusion coefficient bias correction.

    PubMed

    Malkyarenko, Dariya I; Chenevert, Thomas L

    2014-12-01

    To describe an efficient procedure to empirically characterize gradient nonlinearity and correct for the corresponding apparent diffusion coefficient (ADC) bias on a clinical magnetic resonance imaging (MRI) scanner. Spatial nonlinearity scalars for individual gradient coils along the superior and right directions were estimated via diffusion measurements of an isotropic ice-water phantom. A digital nonlinearity model from an independent scanner, described in the literature, was rescaled by system-specific scalars to approximate 3D bias correction maps. Correction efficacy was assessed by comparison to unbiased ADC values measured at isocenter. Empirically estimated nonlinearity scalars were confirmed by geometric distortion measurements of a regular grid phantom. The applied nonlinearity correction for arbitrarily oriented diffusion gradients reduced ADC bias from 20% down to 2% at clinically relevant offsets, both for isotropic and anisotropic media. Identical performance was achieved using either corrected diffusion-weighted imaging (DWI) intensities or corrected b-values for each direction in brain and ice water. Direction-averaged trace image correction was adequate only for isotropic media. Empirical scalar adjustment of an independent gradient nonlinearity model adequately described the DWI bias for a clinical scanner. The observed efficiency of the implemented ADC bias correction quantitatively agreed with previous theoretical predictions and numerical simulations. The described procedure provides an independent benchmark for nonlinearity bias correction of clinical MRI scanners.
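    A minimal sketch of how a spatial nonlinearity scalar can enter the two-point ADC calculation through a corrected b-value; the variable names and the convention that the scalar multiplies the b-value directly (rather than entering squared through the gradient amplitude) are assumptions made here for illustration, not the published procedure.

      import numpy as np

      def adc_corrected(s_b0, s_dwi, b_nominal, c_map):
          """Two-point ADC with a gradient-nonlinearity-corrected b-value.

          s_b0, s_dwi : images at b = 0 and at the diffusion weighting
          b_nominal   : prescribed b-value (s/mm^2)
          c_map       : spatial nonlinearity scalar for the applied gradient
                        direction (1.0 at isocenter); hypothetical input map
          """
          b_eff = b_nominal * c_map                 # spatially corrected b-value
          return -np.log(s_dwi / s_b0) / b_eff      # ADC in mm^2/s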

  19. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar

    2010-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over the distribution of grid points in the field. All types of sources support anisotropic grid stretching, which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain classes of problems or provide high-quality initial grids that enhance the performance of many adaptation methods.

  20. Fast and precise dense grid size measurement method based on coaxial dual optical imaging system

    NASA Astrophysics Data System (ADS)

    Guo, Jiping; Peng, Xiang; Yu, Jiping; Hao, Jian; Diao, Yan; Song, Tao; Li, Ameng; Lu, Xiaowei

    2015-10-01

    Test sieves with a dense grid structure are widely used in many fields, and accurate grid size calibration is critical for the success of grading analysis and test sieving. However, traditional calibration methods suffer from low measurement efficiency and an insufficient number of sampled grids, which can lead to quality-judgment risk. Here, a fast and precise test sieve inspection method is presented. Firstly, a coaxial imaging system with low- and high-magnification optical probes is designed to capture grid images of the test sieve. Then, a scaling ratio between the low- and high-magnification probes is obtained from the corresponding grids in the captured images. With this ratio, all grid dimensions in the low-magnification image can be obtained accurately by measuring a few corresponding grids in the high-magnification image. Finally, by scanning the stage of the tri-axis platform of the measuring apparatus, the whole surface of the test sieve can be inspected quickly. Experimental results show that the proposed method measures test sieves more efficiently than traditional methods: it can measure 0.15 million grids (grid size 0.1 mm) within 60 seconds, and it can precisely measure grid sizes ranging from 20 μm to 5 mm. In summary, the presented method calibrates the grid size of a test sieve automatically with high efficiency and accuracy, enabling surface evaluation based on statistical methods and more reliable quality judgment.

  1. Improving and Understanding Climate Models: Scale-Aware Parameterization of Cloud Water Inhomogeneity and Sensitivity of MJO Simulation to Physical Parameters in a Convection Scheme

    NASA Astrophysics Data System (ADS)

    Xie, Xin

    Microphysics and convection parameterizations are two key components in a climate model for simulating realistic climatology and variability of cloud distribution and the cycles of energy and water. When a model has a varying grid size or simulations have to be run at different resolutions, a scale-aware parameterization is desirable so that model parameters do not have to be tuned to a particular grid size. The subgrid variability of cloud hydrometeors is known to impact microphysics processes in climate models and is found to depend strongly on spatial scale. A scale-aware liquid cloud subgrid variability parameterization is derived and implemented in the Community Earth System Model (CESM) in this study using long-term radar-based ground measurements from the Atmospheric Radiation Measurement (ARM) program. When used in the default CESM1 with the finite-volume dynamical core, where a constant liquid inhomogeneity parameter was assumed, the newly developed parameterization reduces the cloud inhomogeneity at high latitudes and increases it at low latitudes. This is due both to the smaller grid size at high latitudes and larger grid size at low latitudes in the longitude-latitude grid of CESM, and to variations in atmospheric stability. Single column model and general circulation model (GCM) sensitivity experiments show that the new parameterization increases the cloud liquid water path in polar regions and decreases it at low latitudes. The current CESM1 simulation suffers from both a Pacific double-ITCZ precipitation bias and a weak Madden-Julian oscillation (MJO). Previous studies show that convective parameterization with multiple plumes may be able to alleviate such biases in a more uniform and physical way. A multiple-plume mass-flux convective parameterization is used in the Community Atmosphere Model (CAM) to investigate the sensitivity of MJO simulations. We show that the MJO simulation is sensitive to the entrainment rate specification and find that shallow plumes can generate and sustain MJO propagation in the model.

  2. Allocating emissions to 4 km and 1 km horizontal spatial resolutions and its impact on simulated NOx and O3 in Houston, TX

    NASA Astrophysics Data System (ADS)

    Pan, Shuai; Choi, Yunsoo; Roy, Anirban; Jeon, Wonbae

    2017-09-01

    A WRF-SMOKE-CMAQ air quality modeling system was used to investigate the impact of horizontal spatial resolution on simulated nitrogen oxides (NOx) and ozone (O3) in the Greater Houston area (a non-attainment area for O3). We employed an approach recommended by the United States Environmental Protection Agency to allocate county-based emissions to model grid cells at 1 km and 4 km horizontal grid resolutions. The CMAQ Integrated Process Rate analyses showed a substantial difference in emissions contributions between the 1 and 4 km grids but similar NOx and O3 concentrations over urban and industrial locations. For example, the peak NOx emissions at an industrial and urban site differed by a factor of 20 for the 1 km grid and 8 for the 4 km grid, but simulated NOx concentrations changed only by a factor of 1.2 in both cases. Hence, due to the interplay of atmospheric processes, we cannot expect a reduction of gas-phase air pollutants similar to the reduction of emissions. Both simulations reproduced the variability of NASA P-3B aircraft measurements of NOy and O3 in the lower atmosphere (from 90 m to 4.5 km). Both simulations provided similarly reasonable predictions at the surface, while the 1 km case depicted more detailed features of emissions and concentrations in heavily polluted areas, such as highways, airports, and industrial regions, which are useful for understanding the major causes of O3 pollution in such regions and for quantifying transport of O3 to populated communities in urban areas. The Integrated Reaction Rate analyses indicated a distinct difference in chemistry processes between the model surface layer and upper layers, implying that correcting the meteorological conditions at the surface may not help to enhance the O3 predictions. The model-observation O3 biases in our studies (e.g., large over-prediction during the nighttime or along the Gulf of Mexico coastline) were due to uncertainties in meteorology, chemistry, or other processes. Horizontal grid resolution is unlikely to be the major contributor to these biases.

  3. Improvement of Systematic Bias of mean state and the intraseasonal variability of CFSv2 through superparameterization and revised cloud-convection-radiation parameterization

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, P.; Phani Murali Krishna, R.; Goswami, Bidyut B.; Abhik, S.; Ganai, Malay; Mahakur, M.; Khairoutdinov, Marat; Dudhia, Jimmy

    2016-05-01

    In spite of significant improvements in numerical model physics, resolution, and numerics, general circulation models (GCMs) find it difficult to simulate realistic seasonal and intraseasonal variability over the global tropics and particularly over the Indian summer monsoon (ISM) region. The bias is mainly attributed to the improper representation of physical processes. Among these, cloud and convective processes appear to play a major role in modulating model bias. Recently, the NCEP CFSv2 model has been adopted under the Monsoon Mission for dynamical monsoon forecasting over the Indian region. Analyses of climate free runs of CFSv2 at two resolutions, T126 and T382, show largely similar biases in simulating seasonal rainfall, in capturing intraseasonal variability at different scales over the global tropics, and in capturing tropical waves. Thus, the biases of CFSv2 indicate a deficiency in the model's parameterization of cloud and convective processes. With this background, and to improve the model's fidelity, two approaches have been adopted. Firstly, in the superparameterization approach, 32 cloud-resolving models, each with a horizontal resolution of 4 km, are embedded in each GCM (CFSv2) grid column and the conventional subgrid-scale convective parameterization is deactivated. This is done to demonstrate the role of resolving cloud processes that otherwise remain unresolved. The superparameterized CFSv2 (SP-CFS) is developed on a coarser T62 version. The model is integrated for six and a half years in climate free-run mode, initialized from 16 May 2008. The analyses reveal that SP-CFS simulates a significantly improved mean state compared with the default CFS. The systematic biases of too little rainfall over the Indian land mass and a colder troposphere have been substantially reduced. Most importantly, the convectively coupled equatorial waves and the eastward-propagating MJO are simulated with greater fidelity in SP-CFS. This improvement in the model mean state is attributed to systematic improvements in the moisture field, temperature profile, and moist instability. The model also better simulates the relation between cloud and rainfall. This initiative demonstrates the role of cloud processes in the mean state of a coupled GCM. Because the superparameterization approach is computationally expensive, in a second approach the conventional Simplified Arakawa-Schubert (SAS) scheme is replaced by a revised SAS scheme (RSAS), and the older, simplified Zhao-Carr (1997) cloud scheme is replaced by WSM6 in CFSv2 (hereafter CFS-CR). The primary objective of these modifications is to improve the distribution of convective rain in the model through RSAS and of grid-scale (large-scale, nonconvective) rain through WSM6. WSM6 computes the tendencies of six classes of hydrometeors (water vapour, cloud water, ice, snow, graupel, and rain water) at each model grid point and contributes to the low, middle, and high cloud fractions. By incorporating WSM6, for the first time in a global climate model, we are able to show a reasonable simulation of the vertical and spatial distributions of cloud ice and cloud liquid water compared with CloudSat observations. CFS-CR also shows improvement in simulating the annual rainfall cycle and intraseasonal variability over the ISM region. These improvements in CFS-CR are likely associated with an improved distribution of convective and stratiform rainfall in the model. These initiatives address the long-standing issue of resolving cloud processes in climate models and demonstrate that improved cloud and convective process parameterizations can reduce systematic biases and improve model fidelity.

  4. Multi-Dimensional, Inviscid Flux Reconstruction for Simulation of Hypersonic Heating on Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2009-01-01

    The quality of simulated hypersonic stagnation region heating on tetrahedral meshes is investigated by using a three-dimensional, upwind reconstruction algorithm for the inviscid flux vector. Two test problems are investigated: hypersonic flow over a three-dimensional cylinder with special attention to the uniformity of the solution in the spanwise direction and hypersonic flow over a three-dimensional sphere. The tetrahedral cells used in the simulation are derived from a structured grid where cell faces are bisected across the diagonal resulting in a consistent pattern of diagonals running in a biased direction across the otherwise symmetric domain. This grid is known to accentuate problems in both shock capturing and stagnation region heating encountered with conventional, quasi-one-dimensional inviscid flux reconstruction algorithms. Therefore the test problem provides a sensitive test for algorithmic effects on heating. This investigation is believed to be unique in its focus on three-dimensional, rotated upwind schemes for the simulation of hypersonic heating on tetrahedral grids. This study attempts to fill the void left by the inability of conventional (quasi-one-dimensional) approaches to accurately simulate heating in a tetrahedral grid system. Results show significant improvement in spanwise uniformity of heating with some penalty of ringing at the captured shock. Issues with accuracy near the peak shear location are identified and require further study.

  5. An algebraic homotopy method for generating quasi-three-dimensional grids for high-speed configurations

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1989-01-01

    A fast and versatile procedure for algebraically generating boundary-conforming computational grids for use with finite-volume Euler flow solvers is presented. A semi-analytic homotopic procedure is used to generate the grids. Grids generated in two-dimensional planes are stacked to produce quasi-three-dimensional grid systems. The body surface and outer boundary are described in terms of surface parameters. An interpolation scheme is used to blend between the body surface and the outer boundary in order to determine the field points. The method, albeit developed for analytically generated body geometries, is equally applicable to other classes of geometries. The method can be used for both internal and external flow configurations, the only constraint being that the body geometries be specified in two-dimensional cross-sections stationed along the longitudinal axis of the configuration. Techniques for controlling various grid parameters, e.g., clustering and orthogonality, are described. Techniques for treating problems arising in algebraic grid generation for geometries with sharp corners are addressed. A set of representative grid systems generated by this method is included. Results of flow computations using these grids are presented to validate the effectiveness of the method.
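    A minimal Python sketch of the blending idea in one cross-sectional plane: matching point distributions on the body and outer boundary are connected by a one-parameter (homotopy-like) interpolation with simple radial clustering. The stretching function and array layout are assumptions made here, not the paper's formulation.

      import numpy as np

      def algebraic_plane_grid(body, outer, n_radial=33, beta=1.2):
          """Blend a 2-D body contour into the outer boundary to fill the field.

          body, outer : (n_pts, 2) arrays of matching boundary points in one plane.
          A one-sided tanh stretching (hypothetical form) clusters radial points
          near the body surface.
          """
          s = np.linspace(0.0, 1.0, n_radial)
          t = 1.0 - np.tanh(beta * (1.0 - s)) / np.tanh(beta)   # 0 at body, 1 at outer
          return (body[None, :, :] * (1.0 - t)[:, None, None]
                  + outer[None, :, :] * t[:, None, None])

      # stacking such planes along the longitudinal axis yields the
      # quasi-three-dimensional grid system described above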

  6. Polymer space-charge-limited transistor as a solid-state vacuum tube triode

    NASA Astrophysics Data System (ADS)

    Chao, Yu-Chiang; Ku, Ming-Che; Tsai, Wu-Wei; Zan, Hsiao-Wen; Meng, Hsin-Fei; Tsai, Hung-Kuo; Horng, Sheng-Fu

    2010-11-01

    We report the construction of a polymer space-charge-limited transistor (SCLT), a solid-state version of the vacuum tube triode. The SCLT achieves a high on/off ratio of 3 × 10⁵ at a low operation voltage of 1.5 V by using high-quality insulators both above and below the grid base electrode. Applying a greater bias to the base increases the barrier potential and turns off the channel current without introducing a large parasitic leakage current. Simulation results verify the influence of the base bias on the channel potential distribution. The output current density is 1.7 mA/cm² with a current gain greater than 1000.

  7. Chimera Grid Tools

    NASA Technical Reports Server (NTRS)

    Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert

    2005-01-01

    Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling flows with viscosity about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve equations of flow on the grids, and post-processing of flow-solution data. The tools in CGT include grid editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and tools for post-processing (including generation of animated images of flows and calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and a flow-solver program.

  8. High extinction ratio terahertz wire-grid polarizers with connecting bridges on quartz substrates.

    PubMed

    Cetnar, John S; Vangala, Shivashankar; Zhang, Weidong; Pfeiffer, Carl; Brown, Elliott R; Guo, Junpeng

    2017-03-01

    A terahertz (THz) wire-grid polarizer with metallic bridges on a quartz substrate was simulated, fabricated, and tested. The device functions as a wide-band polarizer to incident THz radiation. In addition, the metallic bridges permit the device to function as a transparent electrode when a DC bias is applied to it. Three design variations of the polarizer with bridges and a polarizer without bridges were studied. Results show the devices with bridges have average s-polarization transmittance of less than -3  dB and average extinction ratios of approximately 40 dB across a frequency range of 220-990 GHz and thus are comparable to a polarizer without bridges.

  9. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching of the micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of a micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectories of the inverters.

  10. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  11. Elliptic surface grid generation on minimal and parametrized surfaces

    NASA Technical Reports Server (NTRS)

    Spekreijse, S. P.; Nijhuis, G. H.; Boerstoel, J. W.

    1995-01-01

    An elliptic grid generation method is presented which generates excellent boundary-conforming grids in domains in 2D physical space. The method is based on the composition of an algebraic and an elliptic transformation. The composite mapping obeys the familiar Poisson grid generation system with control functions specified by the algebraic transformation. New expressions are given for the control functions. Grid orthogonality at the boundary is achieved by modification of the algebraic transformation. It is shown that grid generation on a minimal surface in 3D physical space is in fact equivalent to grid generation in a domain in 2D physical space. A second elliptic grid generation method is presented which generates excellent boundary-conforming grids on smooth surfaces. It is assumed that the surfaces are parametrized and that the grid depends only on the shape of the surface and is independent of the parametrization. Concerning surface modeling, it is shown that bicubic Hermite interpolation is an excellent method for generating a smooth surface that passes through a given discrete set of control points. In contrast to bicubic spline interpolation, there is extra freedom to model the tangent and twist vectors such that spurious oscillations are prevented.
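    As a pointer to the kind of system referred to above, the sketch below relaxes the homogeneous (zero control function) Winslow/Poisson grid equations on an initial algebraic grid with boundary points held fixed; the discretization and relaxation strategy are generic textbook choices, not the specific composite mapping of this paper.

      import numpy as np

      def winslow_relax(x, y, iters=200):
          """Jacobi-style relaxation of the homogeneous Winslow grid equations.

          x, y : (ni, nj) arrays of grid-point coordinates; interior points move,
          boundary points stay fixed.
          """
          for _ in range(iters):
              x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1])
              y_xi  = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
              x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2])
              y_eta = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
              a = x_eta**2 + y_eta**2                 # alpha
              g = x_xi**2 + y_xi**2                   # gamma
              b = x_xi * x_eta + y_xi * y_eta         # beta
              for f in (x, y):
                  f_xx = f[2:, 1:-1] + f[:-2, 1:-1]
                  f_ee = f[1:-1, 2:] + f[1:-1, :-2]
                  f_xe = 0.25 * (f[2:, 2:] - f[:-2, 2:] - f[2:, :-2] + f[:-2, :-2])
                  f[1:-1, 1:-1] = (a * f_xx + g * f_ee - 2.0 * b * f_xe) / (2.0 * (a + g))
          return x, y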

  12. The spectral element method (SEM) on variable-resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guba, O.; Taylor, M. A.; Ullrich, P. A.

    2014-11-27

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower valence nodes.

  13. Streamline integration as a method for two-dimensional elliptic grid generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiesenberger, M., E-mail: Matthias.Wiesenberger@uibk.ac.at; Held, M.; Einkemmer, L.

    We propose a new numerical algorithm to construct a structured numerical elliptic grid of a doubly connected domain. Our method is applicable to domains with boundaries defined by two contour lines of a two-dimensional function. Furthermore, we can adapt any analytically given boundary-aligned structured grid, which specifically includes polar and Cartesian grids. The resulting coordinate lines are orthogonal to the boundary. Grid points as well as the elements of the Jacobian matrix can be computed efficiently and up to machine precision. In the simplest case we construct conformal grids, yet with the help of weight functions and monitor metrics we can control the distribution of cells across the domain. Our algorithm is parallelizable and easy to implement with elementary numerical methods. We assess the quality of grids by considering both the distribution of cell sizes and the accuracy of the solution to elliptic problems. Among the tested grids these key properties are best fulfilled by the grid constructed with the monitor metric approach. Highlights: construct structured, elliptic numerical grids with elementary numerical methods; align coordinate lines with, or make them orthogonal to, the domain boundary; compute grid points and metric elements up to machine precision; control cell distribution by adaption functions or monitor metrics.

  14. The spectral element method on variable resolution grids: evaluating grid sensitivity and resolution-aware numerical viscosity

    DOE PAGES

    Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...

    2014-06-25

    We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower valence nodes.

  15. Multiple-block grid adaption for an airplane geometry

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid Samareh; Smith, Robert E.

    1988-01-01

    Grid-adaption methods are developed with the capability of moving grid points in accordance with several variables for a three-dimensional multiple-block grid system. These methods are algebraic, and they are implemented for the computation of high-speed flow over an airplane configuration.

  16. Three-dimensional self-adaptive grid method for complex flows

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Deiwert, George S.

    1988-01-01

    A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
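    A one-dimensional Python sketch of the spring-system idea: each interior point is pulled toward the stiffness-weighted average of its neighbours, so segments with large stiffness (for example, where a weight function based on the solution gradient is large) end up short. The weight function, relaxation factor, and the reduction to one dimension are illustrative assumptions, not the paper's multidirectional procedure.

      import numpy as np

      def spring_adapt(x, weight, iters=100, relax=0.5):
          """1-D spring-analogy grid adaptation with fixed endpoints.

          x      : (n,) current grid-point coordinates
          weight : function giving the spring stiffness at a location
          """
          x = x.copy()
          for _ in range(iters):
              k = weight(0.5 * (x[1:] + x[:-1]))      # stiffness of each segment
              x_eq = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
              x[1:-1] += relax * (x_eq - x[1:-1])     # move toward spring equilibrium
          return x

      # example: cluster points around a feature at x = 0.5
      x0 = np.linspace(0.0, 1.0, 41)
      adapted = spring_adapt(x0, lambda s: 1.0 + 20.0 * np.exp(-200.0 * (s - 0.5) ** 2))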

  17. Resilience Metrics for the Electric Power System: A Performance-Based Approach.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, Eric D.; Castillo, Andrea R; Silva-Monroy, Cesar Augusto

    Grid resilience is a concept related to a power system's ability to continue operating and delivering power even in the event that low probability, high-consequence disruptions such as hurricanes, earthquakes, and cyber-attacks occur. Grid resilience objectives focus on managing and, ideally, minimizing potential consequences that occur as a result of these disruptions. Currently, no formal grid resilience definitions, metrics, or analysis methods have been universally accepted. This document describes an effort to develop and describe grid resilience metrics and analysis methods. The metrics and methods described herein extend upon the Resilience Analysis Process (RAP) developed by Watson et al. for the 2015 Quadrennial Energy Review. The extension allows for both outputs from system models and for historical data to serve as the basis for creating grid resilience metrics and informing grid resilience planning and response decision-making. This document describes the grid resilience metrics and analysis methods. Demonstration of the metrics and methods is shown through a set of illustrative use cases.

  18. Nonuniform grid implicit spatial finite difference method for acoustic wave modeling in tilted transversely isotropic media

    NASA Astrophysics Data System (ADS)

    Chu, Chunlei; Stoffa, Paul L.

    2012-01-01

    Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations.
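    For orientation, the sketch below derives the familiar explicit 3-point first- and second-derivative weights on a nonuniform grid; the implicit operators of the paper additionally couple neighbouring derivative values through a banded system and are not reproduced here. The helper name and the verification example are our own assumptions.

      import numpy as np

      def nonuniform_weights(h_m, h_p):
          """Explicit 3-point derivative weights with left spacing h_m, right h_p.

          Returns (w_left, w_center, w_right) for the first and second derivatives.
          """
          d1 = np.array([-h_p / (h_m * (h_m + h_p)),
                         (h_p - h_m) / (h_m * h_p),
                         h_m / (h_p * (h_m + h_p))])
          d2 = np.array([2.0 / (h_m * (h_m + h_p)),
                         -2.0 / (h_m * h_p),
                         2.0 / (h_p * (h_m + h_p))])
          return d1, d2

      # quick check against an analytic function on an uneven stencil
      h_m, h_p, x0 = 0.08, 0.13, 0.4
      d1, d2 = nonuniform_weights(h_m, h_p)
      f = np.sin(np.array([x0 - h_m, x0, x0 + h_p]))
      print(d1 @ f, np.cos(x0))    # first derivative vs. exact
      print(d2 @ f, -np.sin(x0))   # second derivative vs. exact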

  19. Automated analysis of lightning leader speed, local flash rates and electric charge structure in thunderstorms

    NASA Astrophysics Data System (ADS)

    Van Der Velde, O. A.; Montanya, J.; López, J. A.

    2017-12-01

    A Lightning Mapping Array (LMA) maps radio pulses emitted by lightning leaders, displaying lightning flash development in the cloud in three dimensions. Over the last 10 years, about a dozen of these advanced systems have become operational in the United States and Europe, often for severe weather monitoring or lightning research. We introduce new methods for the analysis of complex three-dimensional lightning data produced by LMAs and illustrate them with cases of a mid-latitude severe-weather-producing thunderstorm and a tropical thunderstorm in Colombia. The method is based on the characteristics of bidirectional leader development as observed in LMA data (van der Velde and Montanyà, 2013, JGR-Atmospheres), where mapped positive leaders were found to propagate at characteristic speeds around 2 · 10⁴ m s⁻¹, while negative leaders typically propagate at speeds around 10⁵ m s⁻¹. Here, we determine the leader speed for every 1.5 × 1.5 × 0.75 km grid box in 3 ms time steps, using two time intervals (e.g., 9 ms and 27 ms) and circles (4.5 km and 2.5 km wide) in which a robust Theil-Sen fitting of the slope is performed for fast and slow leaders. The two are then merged such that important speed characteristics are optimally maintained in negative and positive leaders, and labeled with positive or negative polarity according to the resulting velocity. The method also counts how often leaders from a lightning flash initiate in or pass through each grid box. This "local flash rate" may be used in severe thunderstorm or NOx production studies and should be more meaningful than LMA source density, which is biased by the detection efficiency. Additionally, in each grid box the median x, y, and z components of the leader propagation vectors of all flashes yield a 3D vector grid which can be compared to vectors in numerical models of leader propagation in response to the cloud charge structure. Finally, the charge region altitudes, thicknesses, and rates are summarized from vertical profiles of positive and negative leader rates where these exceed their 7-point averaged profiles. The summarized data can be used to follow the evolution of the charge structure over time and will be useful for climatological studies and statistical comparison against the parameters of the meteorological environment of storms.
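    A minimal Python sketch of the Theil-Sen slope estimate used for the leader-speed fits: the speed is taken as the median of all pairwise distance/time slopes, which is robust to outlying sources. Treating distance from the leader origin as a scalar, and the variable names, are simplifying assumptions for illustration.

      import numpy as np

      def theil_sen_speed(t, d):
          """Robust leader speed from mapped sources in one local window.

          t : (n,) source times (s); d : (n,) distance along the leader (m).
          """
          i, j = np.triu_indices(len(t), k=1)      # all pairs i < j
          dt = t[j] - t[i]
          valid = dt != 0
          return np.median((d[j] - d[i])[valid] / dt[valid])   # speed in m/s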

  20. Eastern Wind Data Set | Grid Modernization | NREL

    Science.gov Websites

    Wind power for each grid cell was computed by combining these data sets with a composite turbine power curve and the wind speed at the site. Adjustments were made for model biases, wake losses, wind gusts, and turbine performance, and the power conversion was also updated to better reflect future wind turbine technology.

  1. Ozone climatology series. Volume 1: Atlas of total ozone, April 1970 - December 1976

    NASA Technical Reports Server (NTRS)

    Heath, D. F.; Fleig, A. J.; Miller, A. J.; Rogers, T. G.; Nagatani, R. M.; Bowman, H. D., II; Kaveeshwar, V. G.; Klenk, K. F.; Bhartia, P. K.; Lee, K. D.

    1982-01-01

    Contours and gridded values are given for seven years of monthly mean total ozone data derived from observations with the Backscattered Ultraviolet instrument on Nimbus-4 for the Northern and Southern Hemispheres. The instrument, algorithm, uncertainties in derived ozone and systematic changes in the bias with respect to the international groundbased ozone network of Dobson instruments, are discussed.

  2. Simulating North American mesoscale convective systems with a convection-permitting climate model

    NASA Astrophysics Data System (ADS)

    Prein, Andreas F.; Liu, Changhai; Ikeda, Kyoko; Bullock, Randy; Rasmussen, Roy M.; Holland, Greg J.; Clark, Martyn

    2017-10-01

    Deep convection is a key process in the climate system and the main source of precipitation in the tropics, subtropics, and mid-latitudes during summer. Furthermore, it is related to high-impact weather causing floods, hail, tornadoes, landslides, and other hazards. State-of-the-art climate models have to parameterize deep convection due to their coarse grid spacing. These parameterizations are a major source of uncertainty and long-standing model biases. We present a North American scale convection-permitting climate simulation that is able to explicitly simulate deep convection due to its 4-km grid spacing. We apply a feature-tracking algorithm to detect hourly precipitation from Mesoscale Convective Systems (MCSs) in the model and compare it with radar-based precipitation estimates east of the US Continental Divide. The simulation is able to capture the main characteristics of the observed MCSs such as their size, precipitation rate, propagation speed, and lifetime within observational uncertainties. In particular, the model is able to produce realistically propagating MCSs, which was a long-standing challenge in climate modeling. However, the MCS frequency is significantly underestimated in the central US during late summer. We discuss the origin of this frequency bias and suggest strategies for model improvements.

  3. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  4. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  5. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  6. Progress in Grid Generation: From Chimera to DRAGON Grids

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Kao, Kai-Hsiung

    1994-01-01

    Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid, from which the present approach has evolved. The chimera grid comprises a set of overlapped structured grids which are independently generated and body-fitted, yielding a high-quality grid readily accessible for efficient solution schemes. The chimera method has been shown to be efficient for generating a grid about complex geometries and has been demonstrated to deliver accurate aerodynamic prediction of complex flows. While its geometrical flexibility is attractive, interpolation of data in the overlapped regions - which in today's practice in 3D is done in a nonconservative fashion - is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adopts the strengths of the unstructured grid while keeping its weaknesses minimal. Like the chimera method, we first divide up the physical domain by a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation which does not necessarily follow the governing equations, we use unstructured grids only to directly replace the region of the arbitrarily overlapped grids. This new adaptation of the chimera approach is coined the DRAGON grid. The unstructured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) preserving the strengths of the chimera grid; (2) eliminating difficulties sometimes encountered in the chimera scheme, such as orphan points and poor-quality interpolation stencils; and (3) making grid communication fully conservative and consistent insofar as the governing equations are concerned. To demonstrate its use, the governing equations are discretized using the newly proposed flux scheme, AUSM+, which is briefly described herein. Numerical tests on representative 2D inviscid flows are given for demonstration. Finally, extension to 3D is underway, paced only by the availability of the 3D unstructured grid generator.

  7. Grid-size dependence of Cauchy boundary conditions used to simulate stream-aquifer interactions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2010-01-01

    This work examines the simulation of stream–aquifer interactions as grids are refined vertically and horizontally and suggests that traditional methods for calculating conductance can produce inappropriate values when the grid size is changed. Instead, different grid resolutions require different estimated values. Grid refinement strategies considered include global refinement of the entire model and local refinement of part of the stream. Three methods of calculating the conductance of the Cauchy boundary conditions are investigated. Single- and multi-layer models with narrow and wide streams produced stream leakages that differ by as much as 122% as the grid is refined. Similar results occur for globally and locally refined grids, but the latter required as little as one-quarter the computer execution time and memory and thus are useful for addressing some scale issues of stream–aquifer interactions. Results suggest that existing grid-size criteria for simulating stream–aquifer interactions are useful for one-layer models, but inadequate for three-dimensional models. The grid dependence of the conductance terms suggests that values for refined models using, for example, finite difference or finite-element methods, cannot be determined from previous coarse-grid models or field measurements. Our examples demonstrate the need for a method of obtaining conductances that can be translated to different grid resolutions and provide definitive test cases for investigating alternative conductance formulations.
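    For reference, the sketch below computes the traditional per-cell streambed conductance that such Cauchy boundary conditions typically use (C = K L W / M, as in MODFLOW-type river packages); the parameter values are purely illustrative. The point of the study is precisely that carrying such values across grid resolutions, or taking them from field measurements, does not by itself reproduce the fine-grid solution.

      def streambed_conductance(k_bed, bed_thickness, reach_length, width):
          """Traditional Cauchy-boundary conductance for one cell: C = K*L*W/M."""
          return k_bed * reach_length * width / bed_thickness

      # the same physical reach split over two refined cells gets half the
      # per-cell reach length, so per-cell conductance changes with the grid
      coarse = streambed_conductance(k_bed=0.5, bed_thickness=1.0, reach_length=100.0, width=10.0)
      fine   = streambed_conductance(k_bed=0.5, bed_thickness=1.0, reach_length=50.0,  width=10.0)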

  8. A solution-adaptive hybrid-grid method for the unsteady analysis of turbomachinery

    NASA Technical Reports Server (NTRS)

    Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.

    1993-01-01

    A solution-adaptive method for the time-accurate analysis of two-dimensional flows in turbomachinery is described. The method employs a hybrid structured-unstructured zonal grid topology in conjunction with appropriate modeling equations and solution techniques in each zone. The viscous flow region in the immediate vicinity of the airfoils is resolved on structured O-type grids while the rest of the domain is discretized using an unstructured mesh of triangular cells. Implicit, third-order accurate, upwind solutions of the Navier-Stokes equations are obtained in the inner regions. In the outer regions, the Euler equations are solved using an explicit upwind scheme that incorporates a second-order reconstruction procedure. An efficient and robust grid adaptation strategy, including both grid refinement and coarsening capabilities, is developed for the unstructured grid regions. Grid adaptation is also employed to facilitate information transfer at the interfaces between unstructured grids in relative motion. Results for grid adaptation to various features pertinent to turbomachinery flows are presented. Good comparisons between the present results and experimental measurements and earlier structured-grid results are obtained.

  9. Introducing the Global Fire WEather Database (GFWED)

    NASA Astrophysics Data System (ADS)

    Field, R. D.

    2015-12-01

    The Canadian Fire Weather Index (FWI) System is the most widely used fire danger rating system in the world. We have developed a global database of daily FWI System calculations, beginning in 1980, called the Global Fire WEather Database (GFWED), gridded to a spatial resolution of 0.5° latitude by 2/3° longitude. Input weather data were obtained from the NASA Modern-Era Retrospective analysis for Research and Applications (MERRA) and from two different estimates of daily precipitation from rain gauges over land. FWI System Drought Code (DC) calculations from the gridded datasets were compared to calculations from individual weather station data for a representative set of 48 stations in North, Central and South America, Europe, Russia, Southeast Asia, and Australia. Gridded and station-based calculations tended to differ most at low latitudes for the strictly MERRA-based calculations. Strong biases could be seen in either direction: MERRA-based DC over the Mato Grosso in Brazil reached unrealistically high values exceeding DC = 1500 during the dry season, but was too low over Southeast Asia during the dry season. These biases are consistent with those previously identified in MERRA's precipitation and reinforce the need to consider alternative sources of precipitation data. GFWED is being used by researchers around the world for analyzing historical relationships between fire weather and fire activity at large scales, for identifying large-scale atmosphere-ocean controls on fire weather, and for calibrating FWI-based fire prediction models. These applications will be discussed. More information on GFWED can be found at http://data.giss.nasa.gov/impacts/gfwed/

  10. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements for solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems is studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low- and high-frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water-content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but it still remains faster than the tridiagonal scheme.
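    The sketch below shows Picard iteration on one backward-Euler step of a nonlinear diffusion equation u_t = (D(u) u_x)_x, a simplified stand-in for the water-content form of Richards' equation: the diffusivity is frozen at the previous iterate, so each pass solves a linear system. A dense solver is used here for brevity where the text's tridiagonal solver would normally appear; the function names and boundary treatment are assumptions.

      import numpy as np

      def picard_step(u, dt, dx, diffusivity, tol=1e-8, max_iter=50):
          """One backward-Euler step of u_t = (D(u) u_x)_x via Picard iteration."""
          u_old, u_new = u.copy(), u.copy()
          n = len(u)
          for _ in range(max_iter):
              d = diffusivity(0.5 * (u_new[1:] + u_new[:-1]))   # face diffusivities
              A = np.zeros((n, n))
              rhs = u_old.copy()
              A[0, 0] = A[-1, -1] = 1.0                         # fixed-value boundaries
              for i in range(1, n - 1):
                  w, e = dt * d[i - 1] / dx**2, dt * d[i] / dx**2
                  A[i, i - 1], A[i, i], A[i, i + 1] = -w, 1.0 + w + e, -e
              u_next = np.linalg.solve(A, rhs)
              if np.max(np.abs(u_next - u_new)) < tol:          # Picard convergence test
                  return u_next
              u_new = u_next
          return u_new

      # example: nonlinear diffusivity D(u) = 0.1 + u**2 on a short profile
      u = np.linspace(1.0, 0.0, 21)
      u = picard_step(u, dt=0.01, dx=0.05, diffusivity=lambda v: 0.1 + v**2)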

  11. Three-dimensional elliptic grid generation technique with application to turbomachinery cascades

    NASA Technical Reports Server (NTRS)

    Chen, S. C.; Schwab, J. R.

    1988-01-01

    Described is a numerical method for generating 3-D grids for turbomachinery computational fluid dynamic codes. The basic method is general and involves the solution of a quasi-linear elliptic partial differential equation via pointwise relaxation with a local relaxation factor. It allows specification of the grid point distribution on the boundary surfaces, the grid spacing off the boundary surfaces, and the grid orthogonality at the boundary surfaces. A geometry preprocessor constructs the grid point distributions on the boundary surfaces for general turbomachinery cascades. Representative results are shown for a C-grid and an H-grid for a turbine rotor. Two appendices serve as user's manuals for the basic solver and the geometry preprocessor.
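
    The pointwise relaxation idea can be illustrated with a stripped-down Winslow-type solver. The Python sketch below (a simplified illustration, not the reported code) holds the boundary grid points fixed and relaxes the interior (x, y) coordinates toward the solution of the homogeneous quasi-linear elliptic equations, omitting the control source terms, boundary orthogonality conditions, and local relaxation factor of the actual method.

```python
import numpy as np

# Minimal 2-D elliptic (Winslow-type) grid generation by pointwise relaxation:
# boundary nodes are held fixed and interior (x, y) nodes are iterated so that
#   alpha*x_xixi - 2*beta*x_xieta + gamma*x_etaeta = 0   (and likewise for y).
def winslow_relax(x, y, n_iter=500):
    for _ in range(n_iter):
        x_xi  = 0.5 * (x[2:, 1:-1] - x[:-2, 1:-1])
        x_eta = 0.5 * (x[1:-1, 2:] - x[1:-1, :-2])
        y_xi  = 0.5 * (y[2:, 1:-1] - y[:-2, 1:-1])
        y_eta = 0.5 * (y[1:-1, 2:] - y[1:-1, :-2])
        alpha = x_eta**2 + y_eta**2
        beta  = x_xi * x_eta + y_xi * y_eta
        gamma = x_xi**2 + y_xi**2
        for f in (x, y):
            f_cross = 0.25 * (f[2:, 2:] - f[2:, :-2] - f[:-2, 2:] + f[:-2, :-2])
            f[1:-1, 1:-1] = (alpha * (f[2:, 1:-1] + f[:-2, 1:-1])
                             + gamma * (f[1:-1, 2:] + f[1:-1, :-2])
                             - 2.0 * beta * f_cross) / (2.0 * (alpha + gamma))
    return x, y

# Example: start from an algebraic grid between a flat bottom and a bumped top.
ni, nj = 41, 21
xi, eta = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")
x = xi.copy()
y = eta * (1.0 + 0.2 * np.sin(np.pi * xi))   # bumped upper boundary
x, y = winslow_relax(x, y)
```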

  12. Can we map the interannual variability of the whole upper Southern Ocean with the current database of hydrographic observations?

    NASA Astrophysics Data System (ADS)

    Heuzé, C.; Vivier, F.; Le Sommer, J.; Molines, J.-M.; Penduff, T.

    2015-12-01

    With the advent of Argo floats, it now seems feasible to study the interannual variations of upper ocean hydrographic properties of the historically undersampled Southern Ocean. To do so, scattered hydrographic profiles often first need to be mapped. To investigate biases and errors associated both with the limited space-time distribution of the profiles and with the mapping methods, we colocate the mixed-layer depth (MLD) output from a state-of-the-art 1/12° DRAKKAR simulation onto the latitude, longitude, and date of actual in situ profiles from 2005 to 2014. We compare the results obtained after remapping using a nearest neighbor (NN) interpolation and an objective analysis (OA) with different spatiotemporal grid resolutions and decorrelation scales. NN is improved with a coarser resolution. OA performs best with low decorrelation scales, avoiding too strong a smoothing, but returns values over larger areas with large decorrelation scales and low temporal resolution, as more points are available. For all resolutions, OA represents the annual extreme values better than NN. Both methods underestimate the seasonal cycle in MLD. MLD biases are lower than 10 m on average but can exceed 250 m locally in winter. We argue that current Argo data should not be mapped to infer decadal trends in MLD, as all methods are unable to reproduce existing trends without creating unrealistic extra ones. We also show that regions of the subtropical Atlantic, Indian, and Pacific Oceans, and the whole ice-covered Southern Ocean, still cannot be mapped even by the best method because of the lack of observational data.

  13. Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.

    2004-01-01

    Numerical simulation of flow over complex geometry requires accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. The grid generation can be tedious, and special care is needed in the numerics to handle skewed cells and preserve conservation. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.

  14. Evaluating methods for estimating home ranges using GPS collars: A comparison using proboscis monkeys (Nasalis larvatus)

    PubMed Central

    Vaughan, Ian P.; Ramirez Saldivar, Diana A.; Nathan, Senthilvel K. S. S.; Goossens, Benoit

    2017-01-01

    The development of GPS tags for tracking wildlife has revolutionised the study of home ranges, habitat use and behaviour. Concomitantly, there have been rapid developments in methods for estimating habitat use from GPS data. In combination, these changes can cause challenges in choosing the best methods for estimating home ranges. In primatology, this issue has received little attention, as there have been few GPS collar-based studies to date. However, as advancing technology is making collaring studies more feasible, there is a need for the analysis to advance alongside the technology. Here, using a high-quality GPS collaring data set from 10 proboscis monkeys (Nasalis larvatus), we aimed to: 1) compare home range estimates from the most commonly used method in primatology, the grid-cell method, with three recent methods designed for large and/or temporally correlated GPS data sets; 2) evaluate how well these methods identify known physical barriers (e.g. rivers); and 3) test the robustness of the different methods to data containing either less frequent or random losses of GPS fixes. Biased random bridges had the best overall performance, combining a high level of agreement between the raw data and estimated utilisation distribution with a relatively low sensitivity to reduced fix frequency or loss of data. It estimated the home range of proboscis monkeys to be 24–165 ha (mean 80.89 ha). The grid-cell method and approaches based on local convex hulls had some advantages, including simplicity and excellent barrier identification, respectively, but lower overall performance. With the most suitable model, or combination of models, it is possible to understand more fully the patterns, causes, and potential consequences of disturbances on an animal, and this understanding can accordingly be used to assist in the management and restoration of degraded landscapes. PMID:28362872
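
    For reference, the grid-cell method mentioned above amounts to overlaying a regular grid on the GPS fixes and summing the area of the occupied cells; the short Python sketch below illustrates this with synthetic fixes and an arbitrary 100 m cell size (not the values used in the study).

```python
import numpy as np

# Minimal sketch of the grid-cell home range method: overlay a regular grid on
# the GPS fixes and report the total area of cells containing at least one fix.
# Coordinates are assumed to be projected (metres); the cell size is illustrative.
def grid_cell_home_range(x, y, cell=100.0):
    ix = np.floor(np.asarray(x) / cell).astype(int)
    iy = np.floor(np.asarray(y) / cell).astype(int)
    occupied = set(zip(ix.tolist(), iy.tolist()))
    area_m2 = len(occupied) * cell * cell
    return area_m2 / 1e4                      # hectares

rng = np.random.default_rng(0)
x = rng.normal(0.0, 300.0, size=2000)         # synthetic fixes, for illustration only
y = rng.normal(0.0, 300.0, size=2000)
print(f"home range: {grid_cell_home_range(x, y):.1f} ha")
```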

  15. 3D Global Braginskii Simulations of Plasma Dynamics and Turbulence in LAPD

    NASA Astrophysics Data System (ADS)

    Fisher, Dustin; Rogers, Barrett

    2013-10-01

    3D global two-fluid simulations are presented in an ongoing effort to identify and understand the plasma dynamics in the Large Plasma Device (LAPD) at UCLA's Basic Science Facility. Modeling is done using a modified version of the Global Braginskii Solver (GBS) that models the plasma from the source to the edge region on a field-aligned grid using a finite difference method and 4th order Runge-Kutta time stepping. Progress has been made to account for the thermionic cathode emission of fast electrons at the source, the axial dependence of the plasma source, and biasing of the front and side walls. Along with trying to understand the effect sheaths and neutrals have in setting the plasma potential, work is being done to model the biasable limiter recently used by colleagues at UCLA to better understand flow shear and particle transport in the LAPD. Comparisons of the zero-bias case are presented along with analysis of the growth and dynamics of turbulent structures (such as drift waves) seen in the simulations. Supported through CICART under the auspices of the DOE's EPSCoR Grant No. DE-FG02-10ER46372.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarzycki, Colin M.; Thatcher, Diana R.; Jablonowski, Christiane

    This paper describes an objective technique for detecting the extratropical transition (ET) of tropical cyclones (TCs) in high-resolution gridded climate data. The algorithm is based on previous observational studies using phase spaces to define the symmetry and vertical thermal structure of cyclones. Storm tracking is automated, allowing for direct analysis of climate data. Tracker performance in the North Atlantic is assessed using 23 years of data from the variable-resolution Community Atmosphere Model (CAM) at two different resolutions (DX 55 km and 28 km), the Climate Forecast System Reanalysis (CFSR, DX 38 km), and the ERA-Interim Reanalysis (ERA-I, DX 80 km). The mean spatiotemporal climatologies and seasonal cycles of objectively detected ET in the observationally constrained CFSR and ERA-I are well matched to previous observational studies, demonstrating the capability of the scheme to adequately find events. High-resolution CAM reproduces TC and ET statistics that are in general agreement with the reanalyses. One notable model bias, however, is a significantly longer time between ET onset and ET completion in CAM, particularly for TCs that lose symmetry prior to developing a cold-core structure and becoming extratropical cyclones, demonstrating the capability of this method to expose model biases in simulated cyclones beyond the tropical phase.

  17. PULSE RATE DIVIDER

    DOEpatents

    McDonald, H.C. Jr.

    1962-12-18

    A compact pulse-rate divider circuit affording low impedance output and high input pulse repetition rates is described. The circuit features a single secondary-emission tube having a capacitor interposed between its dynode and its control grid. An output pulse is produced at the anode of the tube each time an incoming pulse at the control grid drives the tube above cutoff, and the duration of each output pulse corresponds to the charging time of the capacitor. Pulses arriving while the grid bias established by the discharging capacitor is sufficiently negative that they cannot drive the tube above cutoff do not produce output pulses at the anode; these pulses are lost, and a dividing action is thus produced by the circuit. The time constant of the discharge path may be varied to change the division ratio of the circuit; the time constant of the charging circuit may be varied to change the width of the output pulses. (AEC)

  18. Cubic spline anchored grid pattern algorithm for high-resolution detection of subsurface cavities by the IR-CAT method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassab, A.J.; Pollard, J.E.

    An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavity walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined that measures the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
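
    The Newton-Raphson/Broyden driver can be sketched generically. The Python fragment below iterates a parameter vector so that a residual between imposed and computed boundary conditions vanishes, starting from a finite-difference Jacobian and applying Broyden rank-one updates; the residual function here is an arbitrary stand-in for the boundary-element forward model, not the algorithm of the report.

```python
import numpy as np

# Generic Newton iteration with Broyden's rank-one Jacobian update.  In the
# AGP setting, residual(p) would be evaluated by the boundary-element forward
# model for cavity-wall parameters p; here it is an arbitrary test function.
def fd_jacobian(residual, p, eps=1e-7):
    r0 = residual(p)
    J = np.zeros((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (residual(p + dp) - r0) / eps
    return J

def broyden_solve(residual, p0, tol=1e-10, max_iter=200):
    p = np.asarray(p0, dtype=float)
    B = fd_jacobian(residual, p)            # initial Jacobian approximation
    r = residual(p)
    for _ in range(max_iter):
        dp = np.linalg.solve(B, -r)         # Newton-like step
        p_new = p + dp
        r_new = residual(p_new)
        if np.linalg.norm(r_new) < tol:
            return p_new
        dr = r_new - r
        B += np.outer(dr - B @ dp, dp) / (dp @ dp)   # Broyden update
        p, r = p_new, r_new
    return p

# Stand-in residual with a root at p = (1, 2)
f = lambda p: np.array([p[0]**2 - 1.0, p[0] * p[1] - 2.0])
print(broyden_solve(f, [1.5, 1.5]))
```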

  19. A 3-D chimera grid embedding technique

    NASA Technical Reports Server (NTRS)

    Benek, J. A.; Buning, P. G.; Steger, J. L.

    1985-01-01

    A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.

  20. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces; however, the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.
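
    As background, the static-surface CPM on which the combined method builds can be illustrated in a few lines. The Python sketch below solves the heat equation on the unit circle by alternating a standard Cartesian-grid diffusion step with re-extension of the solution from each node's closest point on the circle; this is only the original CPM idea, not the moving-surface GBPM coupling of the paper, and the grid spacing and step count are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Closest point method sketch: u_t = u_ss on the unit circle, solved by
# alternating a Cartesian-grid heat step with closest-point extension.
h = 0.05
x = np.arange(-2.0, 2.0 + h, h)
X, Y = np.meshgrid(x, x, indexing="ij")

R = np.sqrt(X**2 + Y**2)
R[R == 0] = 1.0                       # the origin is far from the circle; harmless
CPX, CPY = X / R, Y / R               # closest point on the unit circle

theta = np.arctan2(Y, X)
u = np.cos(theta)                     # initial data, already constant along normals

ci = (CPX - x[0]) / h                 # fractional grid indices of the closest points
cj = (CPY - x[0]) / h

dt = 0.1 * h**2                       # stable explicit step for the 2-D heat equation
for _ in range(200):
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
    u = u + dt * lap                  # heat step in the embedding space
    u = map_coordinates(u, [ci, cj], order=3, mode="nearest")  # CP extension
# Exact solution on the circle is cos(theta) * exp(-t).
```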

  1. GRID2D/3D: A computer program for generating grid systems in complex-shaped two- and three-dimensional spatial domains. Part 2: User's manual and program listing

    NASA Technical Reports Server (NTRS)

    Bailey, R. T.; Shih, T. I.-P.; Nguyen, H. L.; Roelke, R. J.

    1990-01-01

    An efficient computer program, called GRID2D/3D, was developed to generate single and composite grid systems within geometrically complex two- and three-dimensional (2- and 3-D) spatial domains that can deform with time. GRID2D/3D generates single grid systems by using algebraic grid generation methods based on transfinite interpolation in which the distribution of grid points within the spatial domain is controlled by stretching functions. All single grid systems generated by GRID2D/3D can have grid lines that are continuous and differentiable everywhere up to the second-order. Also, grid lines can intersect boundaries of the spatial domain orthogonally. GRID2D/3D generates composite grid systems by patching together two or more single grid systems. The patching can be discontinuous or continuous. For continuous composite grid systems, the grid lines are continuous and differentiable everywhere up to the second-order except at interfaces where different single grid systems meet. At interfaces where different single grid systems meet, the grid lines are only differentiable up to the first-order. For 2-D spatial domains, the boundary curves are described by using either cubic or tension spline interpolation. For 3-D spatial domains, the boundary surfaces are described by using either linear Coons interpolation, bi-hyperbolic spline interpolation, or a new technique referred to as 3-D bi-directional Hermite interpolation. Since grid systems generated by algebraic methods can have grid lines that overlap one another, GRID2D/3D contains a graphics package for evaluating the grid systems generated. With the graphics package, the user can generate grid systems in an interactive manner with the grid generation part of GRID2D/3D. GRID2D/3D is written in FORTRAN 77 and can be run on any IBM PC, XT, or AT compatible computer. In order to use GRID2D/3D on workstations or mainframe computers, some minor modifications must be made in the graphics part of the program; no modifications are needed in the grid generation part of the program. The theory and method used in GRID2D/3D are described.
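
    The algebraic core of such a generator is transfinite interpolation. The Python sketch below (a minimal illustration, not GRID2D/3D itself) blends four boundary curves into an interior 2-D grid, without the stretching functions or spline boundary descriptions the program adds on top of this idea.

```python
import numpy as np

# Minimal 2-D transfinite interpolation (TFI): blend the four boundary curves
# of a domain into an interior grid of shape (ni, nj, 2).
def tfi_2d(bottom, top, left, right):
    ni, nj = bottom.shape[0], left.shape[0]
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]
    rb, rt = bottom[:, None, :], top[:, None, :]
    rl, rr = left[None, :, :], right[None, :, :]
    c00, c10 = bottom[0], bottom[-1]          # corners shared with left/right curves
    c01, c11 = top[0], top[-1]
    return ((1 - eta) * rb + eta * rt
            + (1 - xi) * rl + xi * rr
            - ((1 - xi) * (1 - eta) * c00 + xi * (1 - eta) * c10
               + (1 - xi) * eta * c01 + xi * eta * c11))

# Example: channel with a sinusoidal upper wall
ni, nj = 41, 17
s = np.linspace(0.0, 1.0, ni)
t = np.linspace(0.0, 1.0, nj)
bottom = np.stack([s, np.zeros_like(s)], axis=1)
top    = np.stack([s, 1.0 + 0.2 * np.sin(np.pi * s)], axis=1)
left   = np.stack([np.zeros_like(t), t], axis=1)
right  = np.stack([np.ones_like(t), t], axis=1)
grid = tfi_2d(bottom, top, left, right)       # grid[i, j] = (x, y)
```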

  2. Extending High-Order Flux Operators on Spherical Icosahedral Grids and Their Applications in the Framework of a Shallow Water Model

    NASA Astrophysics Data System (ADS)

    Zhang, Yi

    2018-01-01

    This study extends a set of unstructured third/fourth-order flux operators on spherical icosahedral grids from two perspectives. First, the fifth-order and sixth-order flux operators of this kind are further extended, and the nominally second-order to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the standard fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. Even-order operators show higher limiter sensitivity than the odd-order operators. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second-order and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, high-order flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present better results.

  3. A Coastal Seawater Temperature Dataset for Biogeographical Studies: Large Biases between In Situ and Remotely-Sensed Data Sets around the Coast of South Africa

    PubMed Central

    Smit, Albertus J.; Roberts, Michael; Anderson, Robert J.; Dufois, Francois; Dudley, Sheldon F. J.; Bornman, Thomas G.; Olbers, Jennifer; Bolton, John J.

    2013-01-01

    Gridded SST products developed particularly for offshore regions are increasingly being applied close to the coast for biogeographical applications. The purpose of this paper is to demonstrate the dangers of doing so through a comparison of reprocessed MODIS Terra and Pathfinder v5.2 SSTs, both at 4 km resolution, with instrumental in situ temperatures taken within 400 m from the coast. We report large biases of up to +6°C in places between satellite-derived and in situ climatological temperatures for 87 sites spanning the entire ca. 2 700 km of the South African coastline. Although biases are predominantly warm (i.e. the satellite SSTs being higher), smaller or even cold biases also appear in places, especially along the southern and western coasts of the country. We also demonstrate the presence of gradients in temperature biases along shore-normal transects — generally SSTs extracted close to the shore demonstrate a smaller bias with respect to the in situ temperatures. Contributing towards the magnitude of the biases are factors such as SST data source, proximity to the shore, the presence/absence of upwelling cells or coastal embayments. Despite the generally large biases, from a biogeographical perspective, species distribution retains a correlative relationship with underlying spatial patterns in SST, but in order to arrive at a causal understanding of the determinants of biogeographical patterns we suggest that in shallow, inshore marine habitats, temperature is best measured directly. PMID:24312609

  4. Grid related issues for static and dynamic geometry problems using systems of overset structured grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1995-01-01

    Grid related issues of the Chimera overset grid method are discussed in the context of a method of solution and analysis of unsteady three-dimensional viscous flows. The state of maturity of the various pieces of support software required to use the approach is considered. Current limitations of the approach are identified.

  5. A New Objective Technique for Verifying Mesoscale Numerical Weather Prediction Models

    NASA Technical Reports Server (NTRS)

    Case, Jonathan L.; Manobianco, John; Lane, John E.; Immer, Christopher D.

    2003-01-01

    This report presents a new objective technique to verify predictions of the sea-breeze phenomenon over east-central Florida by the Regional Atmospheric Modeling System (RAMS) mesoscale numerical weather prediction (NWP) model. The Contour Error Map (CEM) technique identifies sea-breeze transition times in objectively analyzed grids of observed and forecast wind, verifies the forecast sea-breeze transition times against the observed times, and computes the mean post-sea-breeze wind direction and speed to compare the observed and forecast winds behind the sea-breeze front. The CEM technique is superior to traditional objective verification techniques and previously used subjective verification methodologies because it is automated, requiring little manual intervention; it accounts for both spatial and temporal scales and variations; it accurately identifies and verifies the sea-breeze transition times; and it provides verification contour maps and simple statistical parameters for easy interpretation. The CEM uses a parallel lowpass boxcar filter and a high-order bandpass filter to identify the sea-breeze transition times at the observed and model grid points. Once the transition times are identified, the CEM fits a Gaussian function to the histogram of transition-time differences between the model and observations. The fitted parameters of the Gaussian function then describe the timing bias and variance of the timing differences across the valid comparison domain. Once the transition times are identified at each grid point, the CEM computes the mean wind direction and speed during the remainder of the day for all times and grid points after the sea-breeze transition time. The CEM technique performed quite well when compared to independent meteorological assessments of the sea-breeze transition times and to results from a previously published subjective evaluation. The algorithm correctly identified a forecast or observed sea-breeze occurrence or absence 93% of the time during the two-month evaluation period of July and August 2000. Nearly all failures of the CEM were the result of complex precipitation features (observed or forecast) that contaminated the wind field, resulting in a false identification of a sea-breeze transition. A qualitative comparison between the CEM timing errors and the subjectively determined observed and forecast transition times indicates that the algorithm performed very well overall. Most discrepancies between the CEM results and the subjective analysis were again caused by observed or forecast areas of precipitation that led to complex wind patterns. The CEM also failed on a day when the observed sea-breeze transition affected only a very small portion of the verification domain. Based on the results of the CEM, RAMS tended to predict the onset and movement of the sea-breeze transition too early and/or too quickly. The domain-wide timing biases provided by the CEM indicated an early bias on 30 out of 37 days when both an observed and a forecast sea breeze occurred over the same portions of the analysis domain. These results are consistent with previous subjective verifications of the RAMS sea-breeze predictions. A comparison of the mean post-sea-breeze winds indicates that RAMS has a positive wind-speed bias for all days, which is also consistent with the early bias in the sea-breeze transition time, since the higher wind speeds resulted in a faster inland penetration of the sea breeze compared to reality.
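
    The final statistics step can be illustrated compactly. The Python sketch below fits a Gaussian to a histogram of forecast-minus-observed transition-time differences, whose fitted centre and width play the role of the timing bias and variance reported by the CEM; the timing differences are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a Gaussian to the histogram of forecast-minus-observed sea-breeze
# transition times across grid points; the fitted centre and width summarise
# the timing bias and its spread.  The data below are synthetic.
def gaussian(t, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

rng = np.random.default_rng(1)
dt_minutes = rng.normal(-45.0, 30.0, size=500)      # forecasts early by ~45 min

counts, edges = np.histogram(dt_minutes, bins=25)
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), centers[np.argmax(counts)], dt_minutes.std()]
(amp, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
print(f"timing bias: {mu:.1f} min, spread: {abs(sigma):.1f} min")
```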

  6. SU-F-T-436: A Method to Evaluate Dosimetric Properties of SFGRT in Eclipse TPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, M; Tobias, R; Pankuch, M

    Purpose: The objective was to develop a method for dose distribution calculation of spatially fractionated GRID radiotherapy (SFGRT) in the Eclipse treatment planning system (TPS). Methods: Patient treatment plans with SFGRT for bulky tumors were generated in Varian Eclipse version 11. A virtual structure based on the GRID pattern was created and registered to a patient CT image dataset. The virtual GRID structure was positioned at the isocenter level, together with matching beam geometries, to simulate a commercially available GRID block made of brass. This method overcame the difficulty in treatment planning and dose calculation due to the lack of an option to insert a GRID block add-on in the Eclipse TPS. The patient treatment planning displayed GRID effects on the target, critical structures, and dose distribution. The dose calculations were compared to measurement results in phantom. Results: The GRID block structure was created to follow the beam divergence onto the patient CT images. The inserted virtual GRID block made it possible to calculate the dose distributions and profiles at various depths in Eclipse. The virtual GRID block was added as an option to the TPS. The 3D representation of the isodose distribution of the spatially fractionated beam was generated in axial, coronal, and sagittal planes. The physics of GRID fields can differ from that of fields shaped by regular blocks because charged-particle equilibrium cannot be guaranteed for small field openings. Output factor (OF) measurement was required to calculate the MU to deliver the prescribed dose. The calculated OF based on the virtual GRID agreed well with the measured OF in phantom. Conclusion: The method to create the virtual GRID block has been proposed for the first time in the Eclipse TPS. The dose distributions and the in-plane and cross-plane profiles in the PTV can be displayed in 3D space. The calculated OFs based on the virtual GRID model compare well to the measured OFs for SFGRT clinical use.

  7. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
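
    The filtering idea can be illustrated in one dimension. The Python sketch below applies a Taubin-style lambda|mu smoothing pass to the coordinates of a single noisy grid line; the actual tool operates on full structured volume grids cast into computational coordinates with arclength, and the filter parameters here are illustrative, not those used in the investigation.

```python
import numpy as np

# Taubin lambda|mu smoothing of one grid line's (x, y) coordinates: alternate a
# shrinking step (lambda > 0) with an expanding step (mu < -lambda) so that
# high-frequency wiggles are damped without the shrinkage of plain Laplacian
# smoothing.  Endpoints are held fixed.
def taubin_smooth(points, lam=0.5, mu=-0.52, n_pass=25):
    p = points.astype(float).copy()
    for _ in range(n_pass):
        for factor in (lam, mu):
            lap = np.zeros_like(p)
            lap[1:-1] = 0.5 * (p[:-2] + p[2:]) - p[1:-1]   # umbrella operator
            p += factor * lap
    return p

# Noisy grid line along a sine curve
x = np.linspace(0.0, 2.0 * np.pi, 101)
y = np.sin(x) + 0.05 * np.random.default_rng(2).normal(size=x.size)
line = np.column_stack([x, y])
smoothed = taubin_smooth(line)
```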

  8. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  9. Application of a multi-level grid method to transonic flow calculations

    NASA Technical Reports Server (NTRS)

    South, J. C., Jr.; Brandt, A.

    1976-01-01

    A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse to fine. The coarser grids are used to diminish the magnitude of the smooth part of the residuals. The method was applied to the solution of the transonic small disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic arc airfoil is studied with meshes of both constant and variable step size.

  10. Production of single-walled carbon nanotube grids

    DOEpatents

    Hauge, Robert H; Xu, Ya-Qiong; Pheasant, Sean

    2013-12-03

    A method of forming a nanotube grid includes placing a plurality of catalyst nanoparticles on a grid framework, contacting the catalyst nanoparticles with a gas mixture that includes hydrogen and a carbon source in a reaction chamber, forming an activated gas from the gas mixture, heating the grid framework and activated gas, and controlling a growth time to generate a single-wall carbon nanotube array radially about the grid framework. A filter membrane may be produced by this method.

  11. Comments on numerical solution of boundary value problems of the Laplace equation and calculation of eigenvalues by the grid method

    NASA Technical Reports Server (NTRS)

    Lyusternik, L. A.

    1980-01-01

    The mathematics involved in numerically solving the plane boundary value problem of the Laplace equation by the grid method is developed. The approximate solution of a boundary value problem for the domain of the Laplace equation by the grid method consists of finding u at the grid nodes such that it satisfies the difference equation (u = Du) at the internal nodes and certain boundary value conditions at the boundary nodes.
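
    A minimal illustration of this grid method, assuming the standard five-point averaging operator for D and Dirichlet boundary data, is the Jacobi sweep below; it is a generic sketch rather than anything from the report.

```python
import numpy as np

# Grid method for the Dirichlet problem of the Laplace equation: at every
# interior node the solution must equal the average of its four neighbours
# (the discrete operator D); Jacobi iteration enforces u = Du.
n = 51
u = np.zeros((n, n))
u[0, :] = 1.0                 # Dirichlet data: unit value on one edge, zero elsewhere

for _ in range(5000):
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])

residual = np.max(np.abs(u[1:-1, 1:-1] - 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                                 u[1:-1, :-2] + u[1:-1, 2:])))
print(f"max |u - Du| at interior nodes: {residual:.2e}")
```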

  12. Topology and grid adaption for high-speed flow computations

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Tiwari, Surendra N.

    1989-01-01

    This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared together and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.

  13. A generic probability based model to derive regional patterns of crops in time and space

    NASA Astrophysics Data System (ADS)

    Wattenbach, Martin; Luedtke, Stefan; Redweik, Richard; van Oijen, Marcel; Balkovic, Juraj; Reinds, Gert Jan

    2015-04-01

    Croplands are not only the key to human food supply; they also change the biophysical and biogeochemical properties of the land surface, leading to changes in the water cycle and energy partitioning, they influence soil erosion, and they contribute substantially to the amount of greenhouse gases entering the atmosphere. The effects of croplands on the environment depend on the type of crop and the associated management, both of which are related to the site conditions, economic boundary settings, and the preferences of individual farmers. The method described here is designed to predict the most probable crop to appear at a given location and time. The method uses statistical crop area information at NUTS2 level from EUROSTAT and the Common Agricultural Policy Regionalized Impact Model (CAPRI) as observations. These crops are then spatially disaggregated to the 1 x 1 km grid scale within the region, using the assumption that the probability of a crop appearing at a given location and in a given year depends on a) the suitability of the land for the cultivation of the crop, derived from the MARS Crop Yield Forecast System (MCYFS), and b) expert knowledge of agricultural practices. The latter includes knowledge concerning the feasibility of one crop following another (e.g. a late-maturing crop might leave too little time for the establishment of a winter cereal crop) and the need to combat weed infestations or crop diseases. The model is implemented in R and PostGIS. The quality of the generated crop sequences per grid cell is evaluated on the basis of the statistics reported by the joint EU/CAPRI database. The assessment is made at NUTS2 level using per cent bias as a measure, with a threshold of 15% as the minimum quality. The results clearly indicate that crops with a large relative share within the administrative unit are not as error prone as crops that occupy only minor parts of the unit. However, roughly 40% still show an absolute per cent bias above the 15% threshold. This highlights the discrepancy between the best practice given the soil properties within an administrative unit and the crops actually cultivated.
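
    A toy Python sketch of the disaggregation and evaluation idea is given below; the crop list, suitability scores, and regional areas are invented for illustration, and the real model (in R and PostGIS) additionally accounts for crop sequences and agronomic constraints.

```python
import numpy as np

# Toy sketch: regional crop areas (the EUROSTAT/CAPRI "observation") are spread
# over 1 km^2 grid cells in proportion to a crop-specific suitability score,
# each cell is assigned its most probable crop, and the resulting map is
# checked against the statistics with a per cent bias measure.
rng = np.random.default_rng(3)
crops = ["wheat", "maize", "rapeseed"]
n_cells = 1000                                           # 1 km x 1 km cropland cells
reported_km2 = {"wheat": 500.0, "maize": 350.0, "rapeseed": 150.0}

suitability = {c: rng.random(n_cells) for c in crops}    # stand-in for MCYFS scores

# Probability of a crop in a cell ~ regional share x local suitability
share = {c: reported_km2[c] / sum(reported_km2.values()) for c in crops}
score = np.stack([share[c] * suitability[c] for c in crops])
winner = np.argmax(score, axis=0)                        # most probable crop per cell

for k, c in enumerate(crops):
    mapped_km2 = float((winner == k).sum())              # each cell is 1 km^2
    pb = 100.0 * (mapped_km2 - reported_km2[c]) / reported_km2[c]
    print(f"{c}: mapped {mapped_km2:.0f} km^2, per cent bias = {pb:+.1f}%")
# Minor crops typically show the larger per cent bias, as noted above.
```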

  14. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
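
    For reference, the standard three-grid Richardson extrapolation and grid convergence index (GCI) procedure referred to above looks as follows; the three time-averaged values and the refinement ratio are illustrative, not TFM results.

```python
import numpy as np

# Estimate the observed order of accuracy from fine/medium/coarse solutions
# (constant refinement ratio r), Richardson-extrapolate toward the
# grid-independent value, and report the GCI as the discretization uncertainty.
def richardson_gci(f_fine, f_med, f_coarse, r=2.0, safety=1.25):
    p = np.log((f_coarse - f_med) / (f_med - f_fine)) / np.log(r)   # observed order
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)              # extrapolated value
    rel_err = abs((f_med - f_fine) / f_fine)
    gci_fine = safety * rel_err / (r**p - 1.0)                      # fractional uncertainty
    return p, f_exact, gci_fine

p, f_exact, gci = richardson_gci(f_fine=0.412, f_med=0.398, f_coarse=0.371)
print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact:.4f}")
print(f"GCI (fine grid) = {100 * gci:.2f}% of the fine-grid value")
```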

  15. Using MERRA Gridded Innovations for Quantifying Uncertainties in Analysis Fields and Diagnosing Observing System Inhomogeneities

    NASA Technical Reports Server (NTRS)

    da Silva, Arlindo; Redder, Christopher

    2010-01-01

    MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies typically are restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms have the ability to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user with the ability to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations, we propose a technique for diagnosing bias among the observing systems, and document how these contextual biases have evolved during the satellite era covered by MERRA.

  16. Field significance of performance measures in the context of regional climate model evaluation. Part 1: temperature

    NASA Astrophysics Data System (ADS)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as "field" or "global" significance. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Monthly temperature climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. In winter and in most regions in summer, the downscaled distributions are statistically indistinguishable from the observed ones. A systematic cold summer bias occurs in deep river valleys due to overestimated elevations, in coastal areas due probably to enhanced sea breeze circulation, and over large lakes due to the interpolation of water temperatures. Urban areas in concave topography forms have a warm summer bias due to the strong heat islands, not reflected in the observations. WRF-NOAH generates appropriate fine-scale features in the monthly temperature field over regions of complex topography, but over spatially homogeneous areas even small biases can lead to significant deteriorations relative to the driving reanalysis. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is distribution-free, robust to spatial dependence, and accounts for time series structure.
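
    One common way to implement such field-significance control that is robust to spatial dependence is a false-discovery-rate test over the grid-cell p-values. The Python sketch below uses the Benjamini-Hochberg procedure on synthetic p-values; it illustrates the general idea rather than reproducing the paper's exact test.

```python
import numpy as np

# Field (global) significance over grid-cell p-values via the Benjamini-Hochberg
# false discovery rate procedure: control the expected proportion of falsely
# rejected local tests, then declare field significance if any cell survives.
def fdr_reject(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float).ravel()
    n = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, n + 1) / n
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(n, dtype=bool)
    reject[order[:k]] = True
    return reject.reshape(np.shape(p_values))

rng = np.random.default_rng(4)
p_grid = rng.uniform(size=(50, 60))                 # mostly "no local difference"
p_grid[:10, :10] = rng.uniform(0, 0.01, (10, 10))   # a coherent region of real signal

mask = fdr_reject(p_grid, alpha=0.05)
print(f"locally significant cells (FDR-controlled): {mask.sum()} of {mask.size}")
print("field significant" if mask.any() else "not field significant")
```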

  17. Using MERRA Gridded Innovation for Quantifying Uncertainties in Analysis Fields and Diagnosing Observing System Inhomogeneities

    NASA Astrophysics Data System (ADS)

    da Silva, A.; Redder, C. R.

    2010-12-01

    MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies typically are restricted to long-term climatological statistics and seldom provide state-dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms have the ability to produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user with the ability to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described. We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations, we propose a technique for diagnosing bias among the observing systems, and document how these contextual biases have evolved during the satellite era covered by MERRA.

  18. Comprehensive evaluation of multisatellite precipitation estimates over India using gridded rainfall data

    NASA Astrophysics Data System (ADS)

    Sunilkumar, K.; Narayana Rao, T.; Saikranthi, K.; Purnachandra Rao, M.

    2015-09-01

    This study presents a comprehensive evaluation of five widely used multisatellite precipitation estimates (MPEs) against a 1° × 1° gridded rain gauge data set as ground truth over India. One decade of observations is used to assess the performance of the various MPEs (Climate Prediction Center (CPC)-South Asia data set, CPC Morphing Technique (CMORPH), Precipitation Estimation From Remotely Sensed Information Using Artificial Neural Networks, Tropical Rainfall Measuring Mission's Multisatellite Precipitation Analysis (TMPA-3B42), and Global Precipitation Climatology Project). All MPEs have high detection skills of rain, with larger probability of detection (POD) and smaller "missing" values. However, the detection sensitivity differs from one product (and also one region) to the other. While CMORPH has the lowest sensitivity of detecting rain, CPC shows the highest sensitivity and often overdetects rain, as evidenced by large POD and false alarm ratio and small missing values. All MPEs show higher rain sensitivity over eastern India than western India. These differential sensitivities are found to alter the biases in rain amount differently. All MPEs show similar spatial patterns of seasonal rain bias and root-mean-square error, but their spatial variability across India is complex and pronounced. The MPEs overestimate the rainfall over the dry regions (northwest and southeast India) and severely underestimate it over mountainous regions (west coast and northeast India), whereas the bias is relatively small over the core monsoon zone. Higher occurrence of virga rain due to subcloud evaporation and possible missing of small-scale convective events by gauges over the dry regions are the main reasons for the observed overestimation of rain by MPEs. The decomposed components of the total bias show that the major part of the overestimation is due to false precipitation. The severe underestimation of rain along the west coast is attributed to the predominant occurrence of shallow rain and the underestimation of moderate to heavy rain by MPEs. The decomposed components suggest that missed precipitation and hit bias are the leading error sources for the total bias along the west coast. All evaluation metrics are found to be nearly equal in the two contrasting monsoon seasons (southwest and northeast), indicating that the performance of the MPEs does not change with the season, at least over southeast India. Among the various MPEs, the performance of TMPA is found to be better than that of the others, as it reproduced most of the spatial variability exhibited by the reference.
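
    The categorical detection metrics referred to above come from a simple hit/miss/false-alarm contingency between satellite and gauge rain occurrence at each grid point and day. The Python sketch below computes POD, the false alarm ratio, and the missing (miss) ratio from synthetic fields; the 0.1 mm/day rain threshold and the synthetic data are illustrative only.

```python
import numpy as np

# Build the hit / miss / false-alarm contingency between satellite and gauge
# rain occurrence, then compute the standard categorical detection scores.
def detection_scores(sat, gauge, threshold=0.1):
    sat_rain = np.asarray(sat) >= threshold
    obs_rain = np.asarray(gauge) >= threshold
    hits = np.sum(sat_rain & obs_rain)
    misses = np.sum(~sat_rain & obs_rain)
    false_alarms = np.sum(sat_rain & ~obs_rain)
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    miss_ratio = misses / (hits + misses)           # complement of POD
    return pod, far, miss_ratio

rng = np.random.default_rng(5)
gauge = rng.gamma(0.4, 8.0, size=(365, 100))            # synthetic daily gauge rain
sat = gauge * rng.lognormal(0.0, 0.5, gauge.shape)      # satellite estimate with error
pod, far, miss = detection_scores(sat, gauge)
print(f"POD={pod:.2f}  FAR={far:.2f}  missing={miss:.2f}")
```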

  19. Fine-scale application of WRF-CAM5 during a dust storm episode over East Asia: Sensitivity to grid resolutions and aerosol activation parameterizations

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Zhang, Yang; Zhang, Xin; Fan, Jiwen; Leung, L. Ruby; Zheng, Bo; Zhang, Qiang; He, Kebin

    2018-03-01

    An advanced online-coupled meteorology and chemistry model WRF-CAM5 has been applied to East Asia using triple-nested domains at different grid resolutions (i.e., 36-, 12-, and 4-km) to simulate a severe dust storm period in spring 2010. Analyses are performed to evaluate the model performance and investigate model sensitivity to different horizontal grid sizes and aerosol activation parameterizations and to examine aerosol-cloud interactions and their impacts on the air quality. A comprehensive model evaluation of the baseline simulations using the default Abdul-Razzak and Ghan (AG) aerosol activation scheme shows that the model can well predict major meteorological variables such as 2-m temperature (T2), water vapor mixing ratio (Q2), 10-m wind speed (WS10) and wind direction (WD10), and shortwave and longwave radiation across different resolutions with domain-average normalized mean biases typically within ±15%. The baseline simulations also show moderate biases for precipitation and moderate-to-large underpredictions for other major variables associated with aerosol-cloud interactions such as cloud droplet number concentration (CDNC), cloud optical thickness (COT), and cloud liquid water path (LWP) due to uncertainties or limitations in the aerosol-cloud treatments. The model performance is sensitive to grid resolutions, especially for surface meteorological variables such as T2, Q2, WS10, and WD10, with the performance generally improving at finer grid resolutions for those variables. Comparison of the sensitivity simulations with an alternative (i.e., the Fountoukis and Nenes (FN) series scheme) and the default (i.e., AG scheme) aerosol activation scheme shows that the former predicts larger values for cloud variables such as CDNC and COT across all grid resolutions and improves the overall domain-average model performance for many cloud/radiation variables and precipitation. Sensitivity simulations using the FN series scheme also have large impacts on radiation, T2, precipitation, and air quality (e.g., decreasing O3) through complex aerosol-radiation-cloud-chemistry feedbacks. The inclusion of adsorptive activation of dust particles in the FN series scheme has similar impacts on the meteorology and air quality, but to a lesser extent as compared to the differences between the FN series and AG schemes. Compared to the overall differences between the FN series and AG schemes, the impacts of adsorptive activation of dust particles can contribute significantly to the increase of total CDNC (∼45%) during dust storm events and indicate their importance in modulating regional climate over East Asia.

  20. Grid Research | Grid Modernization | NREL

    Science.gov Websites

    NREL addresses the challenges of today's electric grid through research in areas including Integrated Devices and Systems (developing and evaluating grid technologies) and Controls (developing methods for real-time operations and controls of power systems at any scale).

  1. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-Cloud Aerosols Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Zhibo; Meyer, Kerry G.; Platnick, Steven; Oreopoulos, Lazaros; Lee, Dongmin; Yu, Hongbin

    2014-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It addresses the overlap of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure while also accounting for subgrid-scale variations of aerosols. The method is computationally efficient because of its use of grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table based on radiative transfer calculations. We verify that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous (approximately 1:30 PM local time) shortwave DRE of above-cloud aerosol (ACA) that generally agrees with the more rigorous pixel-level computation to within 4 percent. We also estimate the impact on the DRE of a potential CALIOP aerosol optical depth (AOD) retrieval bias for ACA. We find that the regional and seasonal mean instantaneous DRE of ACA over the southeast Atlantic Ocean would increase from the original value of 6.4 W m⁻² based on operational CALIOP AOD to 9.6 W m⁻² if CALIOP AOD retrievals are biased low by a factor of 1.5 (Meyer et al., 2013), and further to 30.9 W m⁻² if CALIOP AOD retrievals are biased low by a factor of 5 as suggested in (Jethva et al., 2014). In contrast, the instantaneous ACA radiative forcing efficiency (RFE) remains relatively invariant in all cases at about 53 W m⁻² AOD⁻¹, suggesting a near-linear relation between the instantaneous RFE and AOD. We also compute the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global oceans based on 4 years of CALIOP and MODIS data. We find that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds. While we demonstrate our method using CALIOP and MODIS data, it can also be extended to other satellite data sets, as well as climate model outputs.

  2. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    Propagation simulation method and choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer limited by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results. An adaptive mesh-choosing method based on wave characteristics is therefore proposed together with the introduced propagation method, so that appropriate mesh grids on the target board can be calculated to obtain satisfying results. For a complex initial wave field or propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that simulations with the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method is able to adapt to a wider range of Fresnel number conditions. That is to say, the method can simulate propagation results efficiently and correctly for propagation distances from almost zero to infinity, and it can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
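
    For context, the conventional fixed-grid angular spectrum step that the proposed method builds on can be written in a few lines. The Python sketch below propagates a scalar field over a distance z with the free-space transfer function; it does not include the adaptive target-plane mesh selection described above, and the aperture and sampling values are illustrative.

```python
import numpy as np

# Conventional angular spectrum propagation: multiply the field's spectrum by
# the free-space transfer function exp(i * kz * z) and transform back.
def angular_spectrum_propagate(u0, wavelength, dx, z):
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k**2 - (2.0 * np.pi * FX) ** 2 - (2.0 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * z) * (kz_sq > 0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Example: propagate a circular aperture illuminated by a plane wave
n, dx, wavelength = 512, 10e-6, 633e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
aperture = (X**2 + Y**2 <= (0.5e-3) ** 2).astype(complex)
u1 = angular_spectrum_propagate(aperture, wavelength, dx, z=0.1)
intensity = np.abs(u1) ** 2
```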

  3. Enhanced Elliptic Grid Generation

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.

    2007-01-01

    An enhanced method of elliptic grid generation has been invented. Whereas prior methods require user input of certain grid parameters, this method provides for these parameters to be determined automatically. "Elliptic grid generation" signifies generation of generalized curvilinear coordinate grids through solution of elliptic partial differential equations (PDEs). Usually, such grids are fitted to bounding bodies and used in numerical solution of other PDEs like those of fluid flow, heat flow, and electromagnetics. Such a grid is smooth and has continuous first and second derivatives (and possibly also continuous higher-order derivatives), grid lines are appropriately stretched or clustered, and grid lines are orthogonal or nearly so over most of the grid domain. The source terms in the grid-generating PDEs (hereafter called "defining" PDEs) make it possible for the grid to satisfy requirements for clustering and orthogonality properties in the vicinity of specific surfaces in three dimensions or in the vicinity of specific lines in two dimensions. The grid parameters in question are decay parameters that appear in the source terms of the inhomogeneous defining PDEs. The decay parameters are characteristic lengths in exponential- decay factors that express how the influences of the boundaries decrease with distance from the boundaries. These terms govern the rates at which distance between adjacent grid lines change with distance from nearby boundaries. Heretofore, users have arbitrarily specified decay parameters. However, the characteristic lengths are coupled with the strengths of the source terms, such that arbitrary specification could lead to conflicts among parameter values. Moreover, the manual insertion of decay parameters is cumbersome for static grids and infeasible for dynamically changing grids. In the present method, manual insertion and user specification of decay parameters are neither required nor allowed. Instead, the decay parameters are determined automatically as part of the solution of the defining PDEs. Depending on the shape of the boundary segments and the physical nature of the problem to be solved on the grid, the solution of the defining PDEs may provide for rates of decay to vary along and among the boundary segments and may lend itself to interpretation in terms of one or more physical quantities associated with the problem.

  4. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image; the pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards integer pixels; these errors are known as systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although its systematic errors are large. No single algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel-level grid by limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bicubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994) and is applied here to solar wavefront sensing. A large dynamic range and better accuracy are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5 times improved image sampling is used, at the expense of twice the computational cost. With the 5 times improved image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. Also, by choosing an appropriate increment of image sampling as a trade-off between computational speed and the aimed sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene). The results are planned for submission to the Optics Express journal.
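    A simplified Python sketch of the two-step idea follows; for brevity it bicubically upsamples a small window of the correlation surface around the integer peak rather than re-correlating interpolated sub-aperture images, so it is only a stand-in for the reported procedure, and the function name and parameters are illustrative.

      import numpy as np
      from scipy.signal import correlate2d
      from scipy.ndimage import zoom

      def subpixel_shift(ref, img, upsample=5, half=2):
          """Estimate (dy, dx) of img relative to ref: integer peak of the
          cross-correlation first, then a refined search on a bicubically
          upsampled window around that peak."""
          c = correlate2d(img - img.mean(), ref - ref.mean(), mode="full")
          iy, ix = np.unravel_index(np.argmax(c), c.shape)
          zy, zx = ref.shape[0] - 1, ref.shape[1] - 1    # zero-lag position
          # Window around the integer peak (assumed to lie away from the border).
          win = c[iy - half:iy + half + 1, ix - half:ix + half + 1]
          fine = zoom(win, upsample, order=3)            # bicubic interpolation
          jy, jx = np.unravel_index(np.argmax(fine), fine.shape)
          scale = (win.shape[0] - 1) / (fine.shape[0] - 1)
          dy = (iy - zy) + (jy * scale - half)
          dx = (ix - zx) + (jx * scale - half)
          return dy, dx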

  5. Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Thomas, James

    2008-01-01

    Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly-available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
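    For orientation, the sketch below shows a plain unweighted least-squares gradient reconstruction at a cell centre from neighbouring cell-centre values; the weighted and approximate-mapped variants discussed in the study modify the rows of this small system but follow the same pattern.

      import numpy as np

      def ls_gradient(xc, uc, xnbrs, unbrs):
          """Least-squares gradient at cell centre xc (2,) with value uc,
          given neighbour centres xnbrs (k x 2) and values unbrs (k,)."""
          A = xnbrs - xc            # displacement vectors to neighbours
          b = unbrs - uc            # value differences
          g, *_ = np.linalg.lstsq(A, b, rcond=None)
          return g                  # (du/dx, du/dy)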

  6. Distributed processing of a GPS receiver network for a regional ionosphere map

    NASA Astrophysics Data System (ADS)

    Choi, Kwang Ho; Hoo Lim, Joon; Yoo, Won Jae; Lee, Hyung Keun

    2018-01-01

    This paper proposes a distributed processing method applicable to GPS receivers in a network to generate a regional ionosphere map accurately and reliably. For accuracy, the proposed method is operated by multiple local Kalman filters and Kriging estimators. Each local Kalman filter is applied to a dual-frequency receiver to estimate the receiver’s differential code bias and vertical ionospheric delays (VIDs) at different ionospheric pierce points. The Kriging estimator selects and combines several VID estimates provided by the local Kalman filters to generate the VID estimate at each ionospheric grid point. For reliability, the proposed method uses receiver fault detectors and satellite fault detectors. Each receiver fault detector compares the VID estimates of the same local area provided by different local Kalman filters. Each satellite fault detector compares the VID estimate of each local area with that projected from the other local areas. Compared with the traditional centralized processing method, the proposed method is advantageous in that it considerably reduces the computational burden of each single Kalman filter and enables flexible fault detection, isolation, and reconfiguration capability. To evaluate the performance of the proposed method, several experiments with field collected measurements were performed.
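    The Kriging step can be pictured with the generic ordinary-kriging sketch below, which estimates the vertical ionospheric delay at one grid point from nearby pierce-point values; the exponential covariance model and its sill and range are placeholders, not parameters taken from the paper.

      import numpy as np

      def ordinary_kriging(xk, vk, x0, sill=1.0, rng=500.0):
          """Ordinary kriging of values vk at pierce points xk (n x 2, e.g.
          km east/north) to a grid point x0, with covariance
          C(h) = sill * exp(-h / rng)."""
          n = len(vk)
          d = np.linalg.norm(xk[:, None, :] - xk[None, :, :], axis=-1)
          C = sill * np.exp(-d / rng)
          c0 = sill * np.exp(-np.linalg.norm(xk - x0, axis=1) / rng)
          # Augmented system enforcing that the weights sum to one.
          A = np.block([[C, np.ones((n, 1))],
                        [np.ones((1, n)), np.zeros((1, 1))]])
          w = np.linalg.solve(A, np.append(c0, 1.0))[:n]
          return float(w @ vk)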

  7. New ghost-node method for linking different models with varied grid refinement

    USGS Publications Warehouse

    James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.

    2006-01-01

    A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method for a refined "child" model that is contained within a larger and coarser "parent" model that is based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912). The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.

  8. Robust Control of Wide Bandgap Power Electronics Device Enabled Smart Grid

    NASA Astrophysics Data System (ADS)

    Yao, Tong

    In recent years, wide bandgap (WBG) devices have enabled power converters with higher power density and higher efficiency. On the other hand, smart grid technologies are maturing due to new battery technology and computer technology. In the near future, the two technologies will form the next generation of smart grid enabled by WBG devices. This dissertation deals with two applications: silicon carbide (SiC) devices used for a medium-voltage-level interface (7.2 kV to 240 V) and gallium nitride (GaN) devices used for a low-voltage-level interface (240 V/120 V). A 20 kW solid state transformer (SST) is designed with a 6 kHz switching frequency SiC rectifier. Three robust control design methods are then proposed, one for each of its smart grid operation modes. In grid-connected mode, a new LCL filter design method is proposed considering grid voltage THD, grid current THD, and robust stability of the current regulation loop with respect to changes in grid impedance. In grid-islanded mode, a μ-synthesis method combined with variable structure control is used to design a robust controller for grid voltage regulation. For grid emergency mode, a multivariable controller designed using the H-infinity synthesis method is proposed for accurate power sharing. A controller-hardware-in-the-loop (CHIL) testbed considering a 7-SST system is set up with a Real Time Digital Simulator (RTDS). A real TMS320F28335 DSP and Spartan 6 FPGA control board is used to interface a switching-model SST in RTDS, and the proposed control methods are tested. For the low-voltage-level application, a 3.3 kW smart grid hardware setup is built with three GaN inverters. The inverters are designed with the GaN device characterized using the proposed multi-function double pulse tester. Each inverter is controlled by an onboard TMS320F28379D dual-core DSP with a 200 kHz sampling frequency and is tested to process 2.2 kW of power with an overall efficiency of 96.5% at room temperature. The smart grid monitoring system and fault interrupt devices (FID) based on an Arduino Mega2560 are built and tested. The smart grid cooperates with the GaN inverters through CAN bus communication. Finally, the three-inverter GaN smart grid achieves a smooth transition from grid-connected to islanded mode.

  9. Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals

    NASA Astrophysics Data System (ADS)

    Deimert, C.; Potter, M. E.; Okoniewski, M.

    2016-12-01

    The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.

  10. Single block three-dimensional volume grids about complex aerodynamic vehicles

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, K. James

    1993-01-01

    This paper presents an alternate approach for the generation of volumetric grids for supersonic and hypersonic flows about complex configurations. The method uses parametric two dimensional block face grid definition within the framework of GRIDGEN2D. The incorporation of face decomposition reduces complex surfaces to simple shapes. These simple shapes are combined to obtain the final face definition. The advantages of this method include the reduction of overall grid generation time through the use of vectorized computer code, the elimination of the need to generate matching block faces, and the implementation of simplified boundary conditions. A simple axisymmetric grid is used to illustrate this method. In addition, volume grids for two complex configurations, the Langley Lifting Body (HL-20) and the Space Shuttle Orbiter, are shown.

  11. Preliminary evaluation of the Community Multiscale Air Quality model for 2002 over the Southeastern United States.

    PubMed

    Morris, Ralph E; McNally, Dennis E; Tesche, Thomas W; Tonnesen, Gail; Boylan, James W; Brewer, Patricia

    2005-11-01

    The Visibility Improvement State and Tribal Association of the Southeast (VISTAS) is one of five Regional Planning Organizations that is charged with the management of haze, visibility, and other regional air quality issues in the United States. The VISTAS Phase I work effort modeled three episodes (January 2002, July 1999, and July 2001) to identify the optimal model configuration(s) to be used for the 2002 annual modeling in Phase II. Using model configurations recommended in the Phase I analysis, 2002 annual meteorological (Mesoscale Meteorological Model [MM5]), emissions (Sparse Matrix Operator Kernel Emissions [SMOKE]), and air quality (Community Multiscale Air Quality [CMAQ]) simulations were performed on a 36-km grid covering the continental United States and a 12-km grid covering the Eastern United States. Model estimates were then compared against observations. This paper presents the results of the preliminary CMAQ model performance evaluation for the initial 2002 annual base case simulation. Model performance is presented for the Eastern United States using speciated fine particle concentration and wet deposition measurements from several monitoring networks. Initial results indicate fairly good performance for sulfate with fractional bias values generally within +/-20%. Nitrate is overestimated in the winter by approximately +50% and underestimated in the summer by more than -100%. Organic carbon exhibits a large summer underestimation bias of approximately -100% with much improved performance seen in the winter with a bias near zero. Performance for elemental carbon is reasonable with fractional bias values within +/- 40%. Other fine particulate (soil) and coarse particulate matter exhibit large (80-150%) overestimation in the winter but improved performance in the summer. The preliminary 2002 CMAQ runs identified several areas of enhancements to improve model performance, including revised temporal allocation factors for ammonia emissions to improve nitrate performance and addressing missing processes in the secondary organic aerosol module to improve OC performance.
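    For reference, one common definition of the mean fractional bias quoted above (bounded by +/-200%) can be computed as in the sketch below; this is assumed to match the metric used here but is not taken from the paper itself.

      import numpy as np

      def fractional_bias(model, obs):
          """Mean fractional bias in percent for paired model/observation
          values: FB = (2/N) * sum((M_i - O_i) / (M_i + O_i)) * 100."""
          model, obs = np.asarray(model, float), np.asarray(obs, float)
          return 100.0 * np.mean(2.0 * (model - obs) / (model + obs))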

  12. Downscaling GISS ModelE Boreal Summer Climate over Africa

    NASA Technical Reports Server (NTRS)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2deg latitude by 2.5deg longitude and the RM3 grid spacing is 0.44deg. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent; it eliminates the ModelE double ITCZ over the Atlantic, and it produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements of the meridional movement of the rain band over West Africa and the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  13. Satellite Data Assimilation within KIAPS-LETKF system

    NASA Astrophysics Data System (ADS)

    Jo, Y.; Lee, S., Sr.; Cho, K.

    2016-12-01

    Korea Institute of Atmospheric Prediction Systems (KIAPS) has been developing an ensemble data assimilation system using a four-dimensional local ensemble transform Kalman filter (LETKF; Hunt et al., 2007) within the KIAPS Integrated Model (KIM), referred to as "KIAPS-LETKF". The KIAPS-LETKF system was successfully evaluated with various Observing System Simulation Experiments (OSSEs) with the NCAR Community Atmospheric Model - Spectral Element (Kang et al., 2013), which has fully unstructured quadrilateral meshes based on the cubed-sphere grid, the same grid system as KIM. Recently, assimilation of real observations has been conducted within the KIAPS-LETKF system with four-dimensional covariance functions over the 6-hr assimilation window. Conventional (e.g., sonde, aircraft, and surface) and satellite (e.g., AMSU-A, IASI, GPS-RO, and AMV) observations have been provided by the KIAPS Package for Observation Processing (KPOP). Wind speed prediction benefited most from ingestion of AMV, while for temperature prediction the improvement is mostly due to ingestion of AMSU-A and IASI. However, some degradation appears in the upper stratosphere when GPS-RO is assimilated, even though GPS-RO has positive impacts on the analysis and forecasts overall. We plan to test the bias correction method and several vertical localization strategies for radiance observations to improve analysis and forecast impacts.

  14. An assessment of unstructured grid technology for timely CFD analysis

    NASA Technical Reports Server (NTRS)

    Kinard, Tom A.; Schabowski, Deanne M.

    1995-01-01

    An assessment of two unstructured methods is presented in this paper. A tetrahedral unstructured method, USM3D, developed at NASA Langley Research Center, is compared to a Cartesian unstructured method, SPLITFLOW, developed at Lockheed Fort Worth Company. USM3D is an upwind finite volume solver that accepts grids generated primarily from the Vgrid grid generator. SPLITFLOW combines an unstructured grid generator with an implicit flow solver in one package. Both methods are exercised on three test cases: a wing, a wing-body, and a fully expanded nozzle. The results for the first two runs are included here and compared to the structured grid method TEAM and to available test data. For each test case, the set-up procedure is described, including any difficulties that were encountered. Detailed descriptions of the solvers are not included in this paper.

  15. Energy stable and high-order-accurate finite difference methods on staggered grids

    NASA Astrophysics Data System (ADS)

    O'Reilly, Ossian; Lundquist, Tomas; Dunham, Eric M.; Nordström, Jan

    2017-10-01

    For wave propagation over distances of many wavelengths, high-order finite difference methods on staggered grids are widely used due to their excellent dispersion properties. However, the enforcement of boundary conditions in a stable manner and treatment of interface problems with discontinuous coefficients usually pose many challenges. In this work, we construct a provably stable and high-order-accurate finite difference method on staggered grids that can be applied to a broad class of boundary and interface problems. The staggered grid difference operators are in summation-by-parts form and when combined with a weak enforcement of the boundary conditions, lead to an energy stable method on multiblock grids. The general applicability of the method is demonstrated by simulating an explosive acoustic source, generating waves reflecting against a free surface and material discontinuity.

  16. An improved synchronous reference frame current control strategy for a photovoltaic grid-connected inverter under unbalanced and nonlinear load conditions.

    PubMed

    Naderipour, Amirreza; Asuhaimi Mohd Zin, Abdullah; Bin Habibuddin, Mohd Hafiz; Miveh, Mohammad Reza; Guerrero, Josep M

    2017-01-01

    In recent years, renewable energy sources have been considered the most encouraging resources for grid and off-grid power generation. This paper presents an improved current control strategy for a three-phase photovoltaic grid-connected inverter (GCI) under unbalanced and nonlinear load conditions. It is challenging to suppress the harmonic content in the output current below a pre-set value in the GCI. It is also difficult to compensate for unbalanced loads even when the grid is under disruption due to total harmonic distortion (THD) and unbalanced loads. The primary advantage and objective of this method is to effectively compensate for the harmonic current content of the grid current and microgrid without the use of any compensation devices, such as active and passive filters. This method leads to a very low THD in both the GCI currents and the current exchanged with the grid. The control approach is designed to control the active and reactive power and harmonic current compensation, and it also corrects the system unbalance. The proposed control method features the synchronous reference frame (SRF) method. Simulation results are presented to demonstrate the effective performance of the proposed method.
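    The synchronous reference frame method rests on transforming three-phase quantities into a rotating dq frame; a generic amplitude-invariant abc-to-dq transform is sketched below for orientation (the controller design itself is not reproduced, and the grid angle theta is assumed to come from a PLL).

      import numpy as np

      def abc_to_dq(ia, ib, ic, theta):
          """Amplitude-invariant abc -> dq transform (Clarke then Park);
          theta is the grid voltage angle in radians."""
          # Clarke transform to the stationary alpha-beta frame.
          alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
          beta = (ib - ic) / np.sqrt(3.0)
          # Park rotation into the synchronously rotating frame.
          d = alpha * np.cos(theta) + beta * np.sin(theta)
          q = -alpha * np.sin(theta) + beta * np.cos(theta)
          return d, q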

  17. An improved synchronous reference frame current control strategy for a photovoltaic grid-connected inverter under unbalanced and nonlinear load conditions

    PubMed Central

    Naderipour, Amirreza; Asuhaimi Mohd Zin, Abdullah; Bin Habibuddin, Mohd Hafiz; Miveh, Mohammad Reza; Guerrero, Josep M.

    2017-01-01

    In recent years, renewable energy sources have been considered the most encouraging resources for grid and off-grid power generation. This paper presents an improved current control strategy for a three-phase photovoltaic grid-connected inverter (GCI) under unbalanced and nonlinear load conditions. It is challenging to suppress the harmonic content in the output current below a pre-set value in the GCI. It is also difficult to compensate for unbalanced loads even when the grid is under disruption due to total harmonic distortion (THD) and unbalanced loads. The primary advantage and objective of this method is to effectively compensate for the harmonic current content of the grid current and microgrid without the use of any compensation devices, such as active and passive filters. This method leads to a very low THD in both the GCI currents and the current exchanged with the grid. The control approach is designed to control the active and reactive power and harmonic current compensation, and it also corrects the system unbalance. The proposed control method features the synchronous reference frame (SRF) method. Simulation results are presented to demonstrate the effective performance of the proposed method. PMID:28192436

  18. Increasing the Extracted Beam Current Density in Ion Thrusters

    NASA Astrophysics Data System (ADS)

    Arthur, Neil Anderson

    Ion thrusters have seen application on space science missions and numerous satellite missions. Ion engines offer higher electrical efficiency and specific impulse capability coupled with longer demonstrated lifetime as compared to other space propulsion technologies. However, ion engines are considered to have low thrust. This work aims to address that perception of low thrust: improving ion thruster performance and thrust density will lead to expanded mission capabilities for ion thruster technology. This goal poses a challenge because the mechanism for accelerating ions, the ion optics, is space-charge limited according to the Child-Langmuir law; there is a finite number of ions that can be extracted through the grids for a given voltage. Currently, ion thrusters operate at only 40% of this limit, suggesting there is another limit artificially constraining beam current. Experimental evidence suggests the beam current can become source limited: the ion density within the plasma is not large enough to sustain high beam currents. Increasing the discharge current will increase ion density, but ring-cusp ion engines become anode-area limited at high discharge currents. The ring-cusp magnetic field increases ionization efficiency but limits the anode area available for electron collection; above a threshold current, the plasma becomes unstable. Increasing the engine size is one approach to increasing the operational discharge current, ion density, and thus the beam current, but this presents engineering challenges. The ion optics are a pair of closely spaced grids, and as the engine diameter increases it becomes difficult to maintain a constant grid gap; span-to-gap considerations for high-perveance optics limit ion engines to 50 cm in diameter. NASA designed the annular ion engine to address the anode-area limit and scale-up problems by changing the discharge chamber geometry. The annular engine provides a central mounting structure for the optics, allowing the beam area to increase while maintaining a fixed span-to-gap ratio. The central stalk also provides additional surface area for electron collection. By circumventing the anode-area limitation, the annular ion engine can operate closer to the Child-Langmuir limit than a conventional cylindrical ion thruster. Preliminary discharge characterization of a 65 cm annular ion engine shows >90% uniformity and validates the scalability of the technology. Operating beyond the Child-Langmuir limit would allow for even larger performance gains. This classic law does not consider the ion injection velocity into the grid sheath; the Child-Langmuir limit shifts towards higher current as the ion velocity increases. Ion drift velocity can be created by enhancing the axially directed electric field. One method for creating this field is to modify the plasma potential distribution, which can be accomplished by biasing individual magnetic cusps through isolated, conformal electrodes placed on each magnet ring. Experiments on a 15 cm ion thruster have shown that the plasma potential in the bulk can be modified by as much as 5 V and can establish ion drift towards the grid plane. Increases in ion current density at the grid of up to 20% are demonstrated. Performance implications are also considered, and increases in simulated beam current of 15% and decreases in discharge losses of 5% are observed. Electron density measurements within the magnetic cusps revealed, surprisingly, that as cusp current draw increases, the leak width does not change.
This suggests that instead of increasing the electron collection area, cusp bias enhances electron mobility along field lines.

  19. Simulated building energy demand biases resulting from the use of representative weather stations

    DOE PAGES

    Burleyson, Casey D.; Voisin, Nathalie; Taylor, Z. Todd; ...

    2017-11-06

    Numerical building models are typically forced with weather data from a limited number of “representative cities” or weather stations representing different climate regions. The use of representative weather stations reduces computational costs, but often fails to capture spatial heterogeneity in weather that may be important for simulations aimed at understanding how building stocks respond to a changing climate. Here, we quantify the potential reduction in temperature and load biases from using an increasing number of weather stations over the western U.S. Our novel approach is based on deriving temperature and load time series using incrementally more weather stations, ranging from 8 to roughly 150, to evaluate the ability to capture weather patterns across different seasons. Using 8 stations across the western U.S., one from each IECC climate zone, results in an average absolute summertime temperature bias of ~4.0 °C with respect to a high-resolution gridded dataset. The mean absolute bias drops to ~1.5 °C using all available weather stations. Temperature biases of this magnitude could translate to absolute summertime mean simulated load biases as high as 13.5%. Increasing the size of the domain over which biases are calculated reduces their magnitude as positive and negative biases may cancel out. Using 8 representative weather stations can lead to a 20–40% bias of peak building loads during both summer and winter, a significant error for capacity expansion planners who may use these types of simulations. Using weather stations close to population centers reduces both mean and peak load biases. Our approach could be used by others designing aggregate building simulations to understand the sensitivity to their choice of weather stations used to drive the models.

  20. Simulated building energy demand biases resulting from the use of representative weather stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burleyson, Casey D.; Voisin, Nathalie; Taylor, Z. Todd

    Numerical building models are typically forced with weather data from a limited number of “representative cities” or weather stations representing different climate regions. The use of representative weather stations reduces computational costs, but often fails to capture spatial heterogeneity in weather that may be important for simulations aimed at understanding how building stocks respond to a changing climate. Here, we quantify the potential reduction in temperature and load biases from using an increasing number of weather stations over the western U.S. Our novel approach is based on deriving temperature and load time series using incrementally more weather stations, ranging from 8 to roughly 150, to evaluate the ability to capture weather patterns across different seasons. Using 8 stations across the western U.S., one from each IECC climate zone, results in an average absolute summertime temperature bias of ~4.0 °C with respect to a high-resolution gridded dataset. The mean absolute bias drops to ~1.5 °C using all available weather stations. Temperature biases of this magnitude could translate to absolute summertime mean simulated load biases as high as 13.5%. Increasing the size of the domain over which biases are calculated reduces their magnitude as positive and negative biases may cancel out. Using 8 representative weather stations can lead to a 20–40% bias of peak building loads during both summer and winter, a significant error for capacity expansion planners who may use these types of simulations. Using weather stations close to population centers reduces both mean and peak load biases. Our approach could be used by others designing aggregate building simulations to understand the sensitivity to their choice of weather stations used to drive the models.

  1. Daily precipitation grids for Austria since 1961—development and evaluation of a spatial dataset for hydroclimatic monitoring and modelling

    NASA Astrophysics Data System (ADS)

    Hiebl, Johann; Frei, Christoph

    2018-04-01

    Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back till 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts on long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 from in situ observations. Interpreting grid point values as area mean values, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.
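    For illustration, a generic angular-distance-weighting estimate of a daily anomaly at one grid point is sketched below; the exact weights, exponents and station handling used for the SPARTACUS analysis are not reproduced here.

      import numpy as np

      def adw_anomaly(x_st, anom, x0, m=2.0):
          """Angular-distance-weighted anomaly at grid point x0 from station
          anomalies anom at coordinates x_st (n x 2): inverse-distance
          weights boosted for stations that are directionally isolated."""
          d = x_st - np.asarray(x0, float)
          r = np.maximum(np.linalg.norm(d, axis=1), 1e-9)
          w = r ** (-m)                       # base distance weights
          unit = d / r[:, None]
          cos_th = unit @ unit.T              # directional similarity
          a = np.empty(len(w))
          for k in range(len(w)):
              others = np.arange(len(w)) != k
              a[k] = np.sum(w[others] * (1.0 - cos_th[k, others])) / np.sum(w[others])
          w_adw = w * (1.0 + a)
          return float(np.sum(w_adw * anom) / np.sum(w_adw))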

  2. Sensitivity of Atlantic meridional overturning circulation to the dynamical framework in an ocean general circulation model

    NASA Astrophysics Data System (ADS)

    Li, Xiaolan; Yu, Yongqiang; Liu, Hailong; Lin, Pengfei

    2017-06-01

    The horizontal coordinate systems commonly used in most global ocean models are the spherical latitude-longitude grid and displaced poles, such as a tripolar grid. The effect of the horizontal coordinate system on the Atlantic meridional overturning circulation (AMOC) is evaluated by using an OGCM (ocean general circulation model). Two experiments are conducted with the model: one using a latitude-longitude grid (referred to as Lat_1) and the other using a tripolar grid (referred to as Tri). The results show that Tri simulates a stronger North Atlantic deep water (NADW) than Lat_1, as more saline water masses enter the Greenland-Iceland-Norwegian (GIN) seas in Tri. The stronger NADW can be attributed to two factors. One is the removal of the zonal filter in Tri, which leads to an increase in the zonal gradient of temperature and salinity, thus strengthening the northward geostrophic flow; in turn, it decreases the positive subsurface temperature and salinity biases in the subtropical regions. The other may be associated with topography at the North Pole, because realistic topography is applied in the tripolar grid while the latitude-longitude grid employs an artificial island around the North Pole. In order to evaluate the effect of the filter on AMOC, three enhanced filter experiments are carried out. Compared to Lat_1, an enhanced filter can also augment NADW formation, since more saline water is suppressed in the GIN seas but accumulated in the Labrador Sea, especially in experiment Lat_2_S, which is the experiment with an enhanced filter on salinity.

  3. CFD Methods and Tools for Multi-Element Airfoil Analysis

    NASA Technical Reports Server (NTRS)

    Rogers, Stuart E.; George, Michael W. (Technical Monitor)

    1995-01-01

    This lecture will discuss the computational tools currently available for high-lift multi-element airfoil analysis. It will present an overview of a number of different numerical approaches, their current capabilities, short-comings, and computational costs. The lecture will be limited to viscous methods, including inviscid/boundary layer coupling methods, and incompressible and compressible Reynolds-averaged Navier-Stokes methods. Both structured and unstructured grid generation approaches will be presented. Two different structured grid procedures are outlined, one which uses multi-block patched grids, the other uses overset chimera grids. Turbulence and transition modeling will be discussed.

  4. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  5. Smart Grid Information Clearinghouse (SGIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, Saifur

    Since the Energy Independence and Security Act of 2007 was enacted, there have been a large number of websites that discuss smart grid and relevant information, including those from government, academia, industry, the private sector and regulators. These websites collect information independently; therefore, smart grid information was quite scattered and dispersed. The objective of this work was to develop, populate, manage and maintain the public Smart Grid Information Clearinghouse (SGIC) web portal. The information on the SGIC website is comprehensive and includes smart grid information, research & development, demonstration projects, technical standards, costs & benefit analyses, business cases, legislation, policy & regulation, and other information on lessons learned and best practices. The content on the SGIC website is logically grouped to allow easy browsing, searching and sorting. In addition to providing the browse and search features, the SGIC web portal also allows users to share their smart grid information with others through our online content submission platform. The Clearinghouse web portal, therefore, serves as the first-stop shop for smart grid information that collects smart grid information in a non-biased, non-promotional manner and can provide a missing link from information sources to end users and better serve users' needs. The web portal is available at www.sgiclearinghouse.org. This report summarizes the work performed during the course of the project (September 2009 – August 2014). Section 2.0 lists SGIC Advisory Committee and User Group members. Section 3.0 discusses SGIC information architecture and web-based database application functionalities. Section 4.0 summarizes SGIC features and functionalities, including its search, browse and sort capabilities, web portal social networking, online content submission platform and security measures implemented. Section 5.0 discusses SGIC web portal contents, including smart grid 101, smart grid projects, deployment experience (i.e., use cases, lessons learned, cost-benefit analyses and business cases), in-depth information (i.e., standards, technology, cyber security, legislation, education and training and demand response), as well as international information. Section 6.0 summarizes SGIC statistics from the launch of the portal on July 07, 2010 to August 31, 2014. Section 7.0 summarizes publicly available information as a result of this work.

  6. To Grid or Not to Grid… Precipitation Data and Hydrological Modeling in the Khangai Mountain Region of Mongolia

    NASA Astrophysics Data System (ADS)

    Venable, N. B. H.; Fassnacht, S. R.; Adyabadam, G.

    2014-12-01

    Precipitation data in semi-arid and mountainous regions is often spatially and temporally sparse, yet it is a key variable needed to drive hydrological models. Gridded precipitation datasets provide a spatially and temporally coherent alternative to the use of point-based station data, but in the case of Mongolia, may not be constructed from all data available from government data sources, or may only be available at coarse resolutions. To examine the uncertainty associated with the use of gridded and/or point precipitation data, monthly water balance models of three river basins across forest steppe (the Khoid Tamir River at Ikhtamir), steppe (the Baidrag River at Bayanburd), and desert steppe (the Tuin River at Bogd) ecozones in the Khangai Mountain Region of Mongolia were compared. The models were forced over a 10-year period from 2001-2010, with gridded temperature and precipitation data at a 0.5 x 0.5 degree resolution. These results were compared to modeling using an interpolated hybrid of the gridded data and additional point data recently gathered from government sources; and with point data from the nearest meteorological station to the streamflow gage of choice. Goodness-of-fit measures including the Nash-Sutcliffe Efficiency statistic, the percent bias, and the RMSE-observations standard deviation ratio were used to assess model performance. The results were mixed with smaller differences between the two gridded products as compared to the differences between gridded products and station data. The largest differences in precipitation inputs and modeled runoff amounts occurred between the two gridded datasets and station data in the desert steppe (Tuin), and the smallest differences occurred in the forest steppe (Khoid Tamir) and steppe (Baidrag). Mean differences between water balance model results are generally smaller than mean differences in the initial input data over the period of record. Seasonally, larger differences in gridded versus station-based precipitation products and modeled outputs occur in summer in the desert-steppe, and in spring in the forest steppe. Choice of precipitation data source in terms of gridded or point-based data directly affects model outcomes with greater uncertainty noted on a seasonal basis across ecozones of the Khangai.
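    The three goodness-of-fit measures named above can be computed as in the sketch below; sign conventions for percent bias differ between references, so the version shown is one common choice rather than necessarily the one used in this study.

      import numpy as np

      def fit_statistics(sim, obs):
          """Nash-Sutcliffe efficiency, percent bias, and RSR (RMSE divided
          by the standard deviation of the observations)."""
          sim, obs = np.asarray(sim, float), np.asarray(obs, float)
          resid = sim - obs
          nse = 1.0 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)
          pbias = 100.0 * np.sum(resid) / np.sum(obs)
          rsr = np.sqrt(np.mean(resid**2)) / obs.std()
          return nse, pbias, rsr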

  7. A Two-Dimensional Linear Bicharacteristic FDTD Method

    NASA Technical Reports Server (NTRS)

    Beggs, John H.

    2002-01-01

    The linear bicharacteristic scheme (LBS) was originally developed to improve unsteady solutions in computational acoustics and aeroacoustics. The LBS has previously been extended to treat lossy materials for one-dimensional problems. It is a classical leapfrog algorithm, but is combined with upwind bias in the spatial derivatives. This approach preserves the time-reversibility of the leapfrog algorithm, which results in no dissipation, and it permits more flexibility by the ability to adopt a characteristic based method. The use of characteristic variables allows the LBS to include the Perfectly Matched Layer boundary condition with no added storage or complexity. The LBS offers a central storage approach with lower dispersion than the Yee algorithm, plus it generalizes much easier to nonuniform grids. It has previously been applied to two and three-dimensional free-space electromagnetic propagation and scattering problems. This paper extends the LBS to the two-dimensional case. Results are presented for point source radiation problems, and the FDTD algorithm is chosen as a convenient reference for comparison.

  8. A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields

    NASA Astrophysics Data System (ADS)

    Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang

    2017-03-01

    Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
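    As an illustration of the multi-surface function (Hardy multiquadric) approach included in the comparison, a minimal fit-and-evaluate sketch follows; the smoothing length c is a user choice and all values here are illustrative, not those of the study.

      import numpy as np

      def multiquadric_fit(xs, vs, c=1.0):
          """Fit a Hardy multiquadric surface to one velocity component vs
          at station coordinates xs (n x 2); returns an evaluator."""
          d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
          phi = np.sqrt(d**2 + c**2)
          coeff = np.linalg.solve(phi, vs)
          def evaluate(xq):
              dq = np.linalg.norm(np.asarray(xq, float)[None, :] - xs, axis=-1)
              return float(np.sqrt(dq**2 + c**2) @ coeff)
          return evaluate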

  9. Evaluation of multisectional and two-section particulate matter photochemical grid models in the Western United States.

    PubMed

    Morris, Ralph; Koo, Bonyoung; Yarwood, Greg

    2005-11-01

    Version 4.10s of the comprehensive air-quality model with extensions (CAMx) photochemical grid model has been developed, which includes two options for representing particulate matter (PM) size distribution: (1) a two-section representation that consists of fine (PM2.5) and coarse (PM2.5-10) modes that has no interactions between the sections and assumes all of the secondary PM is fine; and (2) a multisectional representation that divides the PM size distribution into N sections (e.g., N = 10) and simulates the mass transfer between sections because of coagulation, accumulation, evaporation, and other processes. The model was applied to Southern California using the two-section and multisection representation of PM size distribution, and we found that allowing secondary PM to grow into the coarse mode had a substantial effect on PM concentration estimates. CAMx was then applied to the Western United States for the 1996 annual period with a 36-km grid resolution using both the two-section and multisection PM representation. The Community Multiscale Air Quality (CMAQ) and Regional Modeling for Aerosol and Deposition (REMSAD) models were also applied to the 1996 annual period. Similar model performance was exhibited by the four models across the Interagency Monitoring of Protected Visual Environments (IMPROVE) and Clean Air Status and Trends Network monitoring networks. All four of the models exhibited fairly low annual bias for secondary PM sulfate and nitrate but with a winter overestimation and summer underestimation bias. The CAMx multisectional model estimated that coarse mode secondary sulfate and nitrate typically contribute <10% of the total sulfate and nitrate when averaged across the more rural IMPROVE monitoring network.

  10. Middle atmosphere simulated with high vertical and horizontal resolution versions of a GCM: Improvements in the cold pole bias and generation of a QBO-like oscillation in the tropics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, K.; Wilson, R.J.; Hemler, R.S.

    1999-11-15

    The large-scale circulation in the Geophysical Fluid Dynamics Laboratory SKYHI troposphere-stratosphere-mesosphere finite-difference general circulation model is examined as a function of vertical and horizontal resolution. The experiments examined include one with horizontal grid spacing of ~35 km and another with ~100 km horizontal grid spacing but very high vertical resolution (160 levels between the ground and about 85 km). The simulation of the middle-atmospheric zonal-mean winds and temperatures in the extratropics is found to be very sensitive to horizontal resolution. For example, in the early Southern Hemisphere winter the South Pole near 1 mb in the model is colder than observed, but the bias is reduced with improved horizontal resolution (from ~70 C in a version with ~300 km grid spacing to less than 10 C in the ~35 km version). The extratropical simulation is found to be only slightly affected by enhancements of the vertical resolution. By contrast, the tropical middle-atmospheric simulation is extremely dependent on the vertical resolution employed. With level spacing in the lower stratosphere ~1.5 km, the lower stratospheric zonal-mean zonal winds in the equatorial region are nearly constant in time. When the vertical resolution is doubled, the simulated stratospheric zonal winds exhibit a strong equatorially centered oscillation with downward propagation of the wind reversals and with formation of strong vertical shear layers. This appears to be a spontaneous internally generated oscillation and closely resembles the observed QBO in many respects, although the simulated oscillation has a period less than half that of the real QBO.

  11. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE PAGES

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...

    2017-08-24

    Today, the stability of the electric power grid is maintained through real time balancing of generation and demand. Grid scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.
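    A toy example of the kind of optimization an EMS might run is sketched below: a linear program that schedules hourly charging and discharging against a price signal subject to power and state-of-charge limits. All parameter values are illustrative and not drawn from the paper.

      import numpy as np
      from scipy.optimize import linprog

      def storage_arbitrage(prices, e_max=10.0, p_max=2.0, s0=5.0, eta=0.9):
          """Hourly charge/discharge schedule (MW) maximising arbitrage
          revenue for a battery of capacity e_max (MWh)."""
          T = len(prices)
          prices = np.asarray(prices, float)
          # Decision vector x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
          cost = np.concatenate([prices, -prices])      # minimise cost minus revenue
          L = np.tril(np.ones((T, T)))                  # cumulative-sum operator
          # State of charge after hour t: s0 + eta * (L @ charge) - (L @ discharge).
          A_ub = np.vstack([np.hstack([ eta * L, -L]),  # SOC <= e_max
                            np.hstack([-eta * L,  L])]) # SOC >= 0
          b_ub = np.concatenate([np.full(T, e_max - s0), np.full(T, s0)])
          res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
          return res.x[:T], res.x[T:]                   # charge, discharge profiles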

  12. Energy Management and Optimization Methods for Grid Energy Storage Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.

    Today, the stability of the electric power grid is maintained through real time balancing of generation and demand. Grid scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.

  13. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  14. Dreaming Up a New Grid: Two Lecturers' Reflections on Challenging Traditional Notions of Identity and Privilege in a South African Classroom

    ERIC Educational Resources Information Center

    Ngoasheng, Asanda; Gachago, Daniela

    2017-01-01

    One of the biggest debates in South Africa is the use and usefulness of apartheid categories when analysing society and societal behaviour. This paper examines the process of learning and unlearning that took place when a political reporting lecturer and an academic staff developer sought to explain racially biased voting in South Africa and its…

  15. Junction-side illuminated silicon detector arrays

    DOEpatents

    Iwanczyk, Jan S.; Patt, Bradley E.; Tull, Carolyn

    2004-03-30

    A junction-side illuminated detector array of pixelated detectors is constructed on a silicon wafer. A junction contact on the front-side may cover the whole detector array, and may be used as an entrance window for light, x-ray, gamma ray and/or other particles. The back-side has an array of individual ohmic contact pixels. Each of the ohmic contact pixels on the back-side may be surrounded by a grid or a ring of junction separation implants. Effective pixel size may be changed by separately biasing different sections of the grid. A scintillator may be coupled directly to the entrance window while readout electronics may be coupled directly to the ohmic contact pixels. The detector array may be used as a radiation hardened detector for high-energy physics research or as avalanche imaging arrays.

  16. Improving Subtropical Boundary Layer Cloudiness in the 2011 NCEP GFS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fletcher, J. K.; Bretherton, Christopher S.; Xiao, Heng

    2014-09-23

    The current operational version of National Centers for Environmental Prediction (NCEP) Global Forecasting System (GFS) shows significant low cloud bias. These biases also appear in the Coupled Forecast System (CFS), which is developed from the GFS. These low cloud biases degrade seasonal and longer climate forecasts, particularly of short-wave cloud radiative forcing, and affect predicted sea surface temperature. Reducing this bias in the GFS will aid the development of future CFS versions and contributes to NCEP's goal of unified weather and climate modelling. Changes are made to the shallow convection and planetary boundary layer parameterisations to make them more consistent with current knowledge of these processes and to reduce the low cloud bias. These changes are tested in a single-column version of GFS and in global simulations with GFS coupled to a dynamical ocean model. In the single-column model, we focus on changing parameters that set the following: the strength of shallow cumulus lateral entrainment, the conversion of updraught liquid water to precipitation and grid-scale condensate, shallow cumulus cloud top, and the effect of shallow convection in stratocumulus environments. Results show that these changes improve the single-column simulations when compared to large eddy simulations, in particular through decreasing the precipitation efficiency of boundary layer clouds. These changes, combined with a few other model improvements, also reduce boundary layer cloud and albedo biases in global coupled simulations.

  17. Arc Length Based Grid Distribution For Surface and Volume Grids

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1996-01-01

    Techniques are presented for distributing grid points on parametric surfaces and in volumes according to a specified distribution of arc length. Interpolation techniques are introduced which permit a given distribution of grid points on the edges of a three-dimensional grid block to be propagated through the surface and volume grids. Examples demonstrate how these methods can be used to improve the quality of grids generated by transfinite interpolation.
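
    As a rough illustration of the arc-length idea (not code from the paper), the Python sketch below redistributes points along a two-dimensional parametric curve so that they follow a prescribed normalized arc-length distribution; the curve, stretching law, and point count are illustrative assumptions.

      import numpy as np

      def redistribute_by_arc_length(x, y, s_target):
          """Place grid points on the polyline (x, y) at the prescribed
          normalized arc-length positions s_target (values in [0, 1])."""
          ds = np.hypot(np.diff(x), np.diff(y))            # segment lengths
          s = np.concatenate(([0.0], np.cumsum(ds)))
          s /= s[-1]                                       # normalize to [0, 1]
          return np.interp(s_target, s, x), np.interp(s_target, s, y)

      # Example: cluster points near the start of a quarter circle.
      t = np.linspace(0.0, np.pi / 2, 200)                 # fine parameter sampling
      x_fine, y_fine = np.cos(t), np.sin(t)
      s_target = np.linspace(0.0, 1.0, 21) ** 1.5          # illustrative stretching law
      xg, yg = redistribute_by_arc_length(x_fine, y_fine, s_target)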

  18. New method: calculation of magnification factor from an intracardiac marker.

    PubMed

    Cha, S D; Incarvito, J; Maranhao, V

    1983-01-01

    In order to calculate a magnification factor (MF), an intracardiac marker (a pigtail catheter with markers) was evaluated using a new formula and correlated with the conventional grid method. By applying the Pythagorean theorem and trigonometry, a new formula was developed (formula; see text). In the experimental study, the MF from the intracardiac markers was 0.71 +/- 0.15 (mean +/- SD) and that from the grid method was 0.72 +/- 0.15, with a correlation coefficient of 0.96. In the patient study, the MF from the intracardiac markers was 0.77 +/- 0.06 and that from the grid method was 0.77 +/- 0.05. We conclude that this new method is simple and that its results are comparable to those of the conventional grid method at mid-chest level.

  19. A Quadtree-gridding LBM with Immersed Boundary for Two-dimension Viscous Flows

    NASA Astrophysics Data System (ADS)

    Yao, Jieke; Feng, Wenliang; Chen, Bin; Zhou, Wei; Cao, Shikun

    2017-07-01

    A non-uniform quadtree-grid lattice Boltzmann method (LBM) with an immersed boundary is presented in this paper. In the overlap regions between grids of different levels, temporal and spatial interpolation is necessary to ensure the continuity of physical quantities. To exploit the fixed relation between the temporal and spatial steps within a given grid level, equal-interval interpolation, which is simple to apply to any refined boundary grid in the LBM, is adopted in both time and space to obtain second-order accuracy. The velocity correction, which enforces the no-slip boundary condition better than the direct forcing method and the momentum exchange method in the traditional immersed-boundary LBM, is used on solid boundaries to make the best use of the Cartesian grid. In the present quadtree-gridding immersed-boundary LBM, large eddy simulation (LES) is adopted to simulate flows over obstacles at higher Reynolds numbers (Re). Incompressible viscous flows over a circular cylinder are computed, and good agreement is obtained.

  20. On the application of Chimera/unstructured hybrid grids for conjugate heat transfer

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing

    1995-01-01

    A hybrid grid system that combines the Chimera overset grid scheme and an unstructured grid method is developed to study fluid flow and heat transfer problems. With the proposed method, the solid structural region, in which only the heat conduction is considered, can be easily represented using an unstructured grid method. As for the fluid flow region external to the solid material, the Chimera overset grid scheme has been shown to be very flexible and efficient in resolving complex configurations. The numerical analyses require the flow field solution and material thermal response to be obtained simultaneously. A continuous transfer of temperature and heat flux is specified at the interface, which connects the solid structure and the fluid flow as an integral system. Numerical results are compared with analytical and experimental data for a flat plate and a C3X cooled turbine cascade. A simplified drum-disk system is also simulated to show the effectiveness of this hybrid grid system.

  1. Can we map the interannual variability of the whole upper Southern Ocean with the current database of hydrographic observations?

    NASA Astrophysics Data System (ADS)

    Heuzé, Céline; Vivier, Frédéric; Le Sommer, Julien; Molines, Jean-Marc; Penduff, Thierry

    2016-04-01

    With the advent of Argo floats, it now seems feasible to study the interannual variations of upper ocean hydrographic properties of the historically undersampled Southern Ocean. To do so, scattered hydrographic profiles often first need to be mapped. To investigate biases and errors associated both with the limited space-time distribution of the profiles and with the mapping methods, we colocate the mixed layer depth (MLD) output from a state-of-the-art 1/12° DRAKKAR simulation onto the latitude, longitude and date of actual in-situ profiles from 2005 to 2014. We compare the results obtained after remapping using a nearest-neighbor (NN) interpolation and an objective analysis (OA) with different spatio-temporal grid resolutions and decorrelation scales. NN is improved with a coarser resolution. OA performs best with low decorrelation scales, avoiding too strong a smoothing, but returns values over larger areas with large decorrelation scales and low temporal resolution, as more points are available. For all resolutions OA represents better the annual extreme values than NN. Both methods underestimate the seasonal cycle in MLD. MLD biases are lower than 10 m on average but can exceed 250 m locally in winter. We argue that current Argo data should not be mapped to infer decadal trends in MLD, as all methods are unable to reproduce existing trends without creating unrealistic extra ones. We also show that regions of the subtropical Atlantic, Indian and Pacific Oceans, and the whole ice-covered Southern Ocean, still cannot be mapped even by the best method because of the lack of observational data.

  2. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  3. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  4. Study of the key factors affecting the triple grid lifetime of the LIPS-300 ion thruster

    NASA Astrophysics Data System (ADS)

    Mingming, SUN; Liang, WANG; Juntai, YANG; Xiaodong, WEN; Yongjie, HUANG; Meng, WANG

    2018-04-01

    In order to ascertain the key factors affecting the lifetime of the triple grids in the LIPS-300 ion thruster, the thermal deformation, upstream ion density and component lifetime of the grids are simulated with finite element analysis, fluid simulation and charged-particle tracing simulation methods on the basis of a 1500 h short lifetime test. The key factor affecting the lifetime of the triple grids in the LIPS-300 ion thruster is obtained and analyzed through the test results. The results show that ion sputtering erosion of the grids in 5 kW operation mode is greater than in the case of 3 kW. In 5 kW mode, the decelerator grid shows the most serious corrosion, the accelerator grid shows moderate corrosion, and the screen grid shows the least amount of corrosion. With the serious corrosion of the grids in 5 kW operation mode, the intercept current of the acceleration and deceleration grids increases substantially. Meanwhile, the cold gap between the accelerator grid and the screen grid decreases from 1 mm to 0.7 mm, while the cold gap between the accelerator grid and the decelerator grid increases from 1 mm to 1.25 mm after 1500 h of thruster operation. At equilibrium temperature with 5 kW power, the finite element method (FEM) simulation results show that the hot gap between the screen grid and the accelerator grid reduces to 0.2 mm. Accordingly, the hot gap between the accelerator grid and the decelerator grid increases to 1.5 mm. According to the fluid method, the plasma density simulated in most regions of the discharge chamber is 1 × 10^18-8 × 10^18 m^-3. The upstream plasma density of the screen grid is in the range 6 × 10^17-6 × 10^18 m^-3 and displays a parabolic characteristic. The charged particle tracing simulation method results show that the ion beam current without the thermal deformation of triple grids has optimal perveance status. The ion sputtering rates of the accelerator grid hole and the decelerator hole are 5.5 × 10^-14 kg s^-1 and 4.28 × 10^-14 kg s^-1, respectively, while after the thermal deformation of the triple grids, the ion beam current has over-perveance status. The ion sputtering rates of the accelerator grid hole and the decelerator hole are 1.41 × 10^-13 kg s^-1 and 4.1 × 10^-13 kg s^-1, respectively. The anode current is a key factor for the triple grid lifetime in situations where the structural strength of the grids does not change with temperature variation. The average sputtering rates of the accelerator grid and the decelerator grid, which were measured during the 1500 h lifetime test in 5 kW operating conditions, are 2.2 × 10^-13 kg s^-1 and 7.3 × 10^-13 kg s^-1, respectively. These results are in accordance with the simulation, and the error comes mainly from the calculation distribution of the upstream plasma density of the grids.

  5. Method for the depth corrected detection of ionizing events from a co-planar grids sensor

    DOEpatents

    De Geronimo, Gianluigi [Syosset, NY; Bolotnikov, Aleksey E [South Setauket, NY; Carini, Gabriella [Port Jefferson, NY

    2009-05-12

    A method for the detection of ionizing events utilizing a co-planar grids sensor comprising a semiconductor substrate, a cathode electrode, a collecting grid and a non-collecting grid. The semiconductor substrate is sensitive to ionizing radiation. A voltage less than 0 Volts is applied to the cathode electrode. A voltage greater than the voltage applied to the cathode is applied to the non-collecting grid. A voltage greater than the voltage applied to the non-collecting grid is applied to the collecting grid. The signals from the collecting grid and the non-collecting grid are summed and subtracted, creating a sum and a difference, respectively. The difference is divided by the sum, creating a ratio. A gain coefficient for each depth (the distance between the ionizing event and the collecting grid) is determined, whereby the difference between the collecting electrode and the non-collecting electrode multiplied by the corresponding gain coefficient is the depth-corrected energy of the ionizing event. The energy of each ionizing event is therefore the difference between the collecting grid and the non-collecting grid multiplied by the corresponding gain coefficient. The depth of the ionizing event can also be determined from the ratio.
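
    The signal arithmetic described in the abstract can be sketched in a few lines of Python; the signal values, the ratio-to-gain calibration table, and the function name below are hypothetical and only illustrate the sum/difference/ratio/gain sequence.

      import numpy as np

      def depth_corrected_energy(collecting, non_collecting, ratio_table, gain_table):
          """Difference and sum of the two grid signals give a depth-dependent
          ratio; the ratio indexes a per-depth gain coefficient, and the
          corrected energy is the difference times that gain."""
          diff = collecting - non_collecting
          ratio = diff / (collecting + non_collecting)
          gain = np.interp(ratio, ratio_table, gain_table)   # hypothetical calibration
          return gain * diff

      # Hypothetical calibration: ratio values and matching gain coefficients.
      ratio_table = np.linspace(0.2, 1.0, 9)
      gain_table = np.linspace(1.3, 1.0, 9)
      print(depth_corrected_energy(662.0, 150.0, ratio_table, gain_table))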

  6. Multi-grid finite element method used for enhancing the reconstruction accuracy in Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Guo, Hongbo; He, Xiaowei; Liu, Muhan; Zhang, Zeyu; Hu, Zhenhua; Tian, Jie

    2017-03-01

    Cerenkov luminescence tomography (CLT), as a promising optical molecular imaging modality, can be applied to cancer diagnosis and therapy. Most research on CLT reconstruction is based on the finite element method (FEM) framework. However, the quality of the FEM mesh grid remains a key factor limiting the accuracy of the CLT reconstruction. In this paper, we propose a multi-grid finite element method framework that is able to improve the accuracy of the reconstruction. Meanwhile, the multilevel scheme adaptive algebraic reconstruction technique (MLS-AART), based on a modified iterative algorithm, is applied to improve the reconstruction accuracy. The feasibility of the proposed method was evaluated in numerical simulation experiments. Results showed that the multi-grid strategy could recover the 3D spatial information of the Cerenkov source more accurately than the traditional single-grid FEM.

  7. An adaptive grid scheme using the boundary element method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munipalli, R.; Anderson, D.A.

    1996-09-01

    A technique to solve the Poisson grid generation equations by Green's function related methods has been proposed, with the source terms being purely position dependent. The use of distributed singularities in the flow domain coupled with the boundary element method (BEM) formulation is presented in this paper as a natural extension of the Green's function method. This scheme greatly simplifies the adaption process. The BEM reduces the dimensionality of the given problem by one. Internal grid-point placement can be achieved for a given boundary distribution by adding continuous and discrete source terms in the BEM formulation. A distribution of vortex doublets is suggested as a means of controlling grid-point placement and grid-line orientation. Examples for sample adaption problems are presented and discussed. 15 refs., 20 figs.

  8. Simulated building energy demand biases resulting from the use of representative weather stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burleyson, Casey D.; Voisin, Nathalie; Taylor, Z. Todd

    Numerical building models are typically forced with weather data from a limited number of “representative cities” or weather stations representing different climate regions. The use of representative weather stations reduces computational costs, but often fails to capture spatial heterogeneity in weather that may be important for simulations aimed at understanding how building stocks respond to a changing climate. We quantify the potential reduction in bias from using an increasing number of weather stations over the western U.S. The approach is based on deriving temperature and load time series using incrementally more weather stations, ranging from 8 to roughly 150, to capture weather across different seasons. Using 8 stations, one from each climate zone, across the western U.S. results in an average absolute summertime temperature bias of 7.2°F with respect to a spatially-resolved gridded dataset. The mean absolute bias drops to 2.8°F using all available weather stations. Temperature biases of this magnitude could translate to absolute summertime mean simulated load biases as high as 13.8%, a significant error for capacity expansion planners who may use these types of simulations. Increasing the size of the domain over which biases are calculated reduces their magnitude as positive and negative biases may cancel out. Using 8 representative weather stations can lead to a 20-40% overestimation of peak building loads during both summer and winter. Using weather stations close to population centers reduces both mean and peak load biases. This approach could be used by others designing aggregate building simulations to understand the sensitivity to their choice of weather stations used to drive the models.

  9. Evaluation of Grid Modification Methods for On- and Off-Track Sonic Boom Analysis

    NASA Technical Reports Server (NTRS)

    Nayani, Sudheer N.; Campbell, Richard L.

    2013-01-01

    Grid modification methods have been under development at NASA to enable better predictions of low boom pressure signatures from supersonic aircraft. As part of this effort, two new codes, Stretched and Sheared Grid - Modified (SSG) and Boom Grid (BG), have been developed in the past year. The CFD results from these codes have been compared with ones from the earlier grid modification codes Stretched and Sheared Grid (SSGRID) and Mach Cone Aligned Prism (MCAP) and also with the available experimental results. NASA's unstructured grid suite of software TetrUSS and the automatic sourcing code AUTOSRC were used for base grid generation and flow solutions. The BG method has been evaluated on three wind tunnel models. Pressure signatures have been obtained up to two body lengths below a Gulfstream aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 53 degrees) cases. On-track pressure signatures up to ten body lengths below a Straight Line Segmented Leading Edge (SLSLE) wind tunnel model have been extracted, with good agreement with the wind tunnel results. Pressure signatures have been obtained at 1.5 body lengths below a Lockheed Martin aircraft wind tunnel model. Good agreement with the wind tunnel results has been obtained for both on-track and off-track (up to 40 degrees) cases. Grid sensitivity studies have been carried out to investigate any grid size related issues. Methods have been evaluated for fully turbulent, mixed laminar/turbulent and fully laminar flow conditions.

  10. Efficient parallel seismic simulations including topography and 3-D material heterogeneities on locally refined composite grids

    NASA Astrophysics Data System (ADS)

    Petersson, Anders; Rodgers, Arthur

    2010-05-01

    The finite difference method on a uniform Cartesian grid is a highly efficient and easy to implement technique for solving the elastic wave equation in seismic applications. However, the spacing in a uniform Cartesian grid is fixed throughout the computational domain, whereas the resolution requirements in realistic seismic simulations usually are higher near the surface than at depth. This can be seen from the well-known formula h ≤ L/P, which relates the grid spacing h to the wave length L, and the required number of grid points per wavelength P for obtaining an accurate solution. The compressional and shear wave lengths in the earth generally increase with depth and are often a factor of ten larger below the Moho discontinuity (at about 30 km depth) than in sedimentary basins near the surface. A uniform grid must have a grid spacing based on the small wave lengths near the surface, which results in over-resolving the solution at depth. As a result, the number of points in a uniform grid is unnecessarily large. In the wave propagation project (WPP) code, we address the over-resolution-at-depth issue by generalizing our previously developed single grid finite difference scheme to work on a composite grid consisting of a set of structured rectangular grids of different spacings, with hanging nodes on the grid refinement interfaces. The computational domain in a regional seismic simulation often extends to depth 40-50 km. Hence, using a refinement ratio of two, we need about three grid refinements from the bottom of the computational domain to the surface, to keep the local grid size in approximate parity with the local wave lengths. The challenge of the composite grid approach is to find a stable and accurate method for coupling the solution across the grid refinement interface. Of particular importance is the treatment of the solution at the hanging nodes, i.e., the fine grid points which are located in between coarse grid points. WPP implements a new, energy conserving, coupling procedure for the elastic wave equation at grid refinement interfaces. When used together with our single grid finite difference scheme, it results in a method which is provably stable, without artificial dissipation, for arbitrary heterogeneous isotropic elastic materials. The new coupling procedure is based on satisfying the summation-by-parts principle across refinement interfaces. From a practical standpoint, an important advantage of the proposed method is the absence of tunable numerical parameters, which seldom are appreciated by application experts. In WPP, the composite grid discretization is combined with a curvilinear grid approach that enables accurate modeling of free surfaces on realistic (non-planar) topography. The overall method satisfies the summation-by-parts principle and is stable under a CFL time step restriction. A feature of great practical importance is that WPP automatically generates the composite grid based on the user provided topography and the depths of the grid refinement interfaces. The WPP code has been verified extensively, for example using the method of manufactured solutions, by solving Lamb's problem, by solving various layer over half-space problems and comparing to semi-analytic (FK) results, and by simulating scenario earthquakes where results from other seismic simulation codes are available. WPP has also been validated against seismographic recordings of moderate earthquakes.
    WPP performs well on large parallel computers and has been run on up to 32,768 processors using about 26 billion grid points (78 billion degrees of freedom) and 41,000 time steps. WPP is an open-source code that is available under the GNU General Public License.
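
    The spacing rule h ≤ L/P quoted above is easy to apply directly; the wave speeds, frequency, and points-per-wavelength value in this sketch are round numbers chosen for illustration, not values from the WPP studies.

      # Illustrative use of h <= L / P = v / (f * P).
      def max_grid_spacing(v_min_m_s, f_max_hz, points_per_wavelength):
          """Largest grid spacing that keeps the slowest wave resolved."""
          return v_min_m_s / (f_max_hz * points_per_wavelength)

      f_max = 1.0   # Hz, illustrative highest frequency of interest
      p = 8         # grid points per wavelength, illustrative
      h_basin = max_grid_spacing(500.0, f_max, p)     # slow sediments near the surface
      h_mantle = max_grid_spacing(4500.0, f_max, p)   # faster material below the Moho
      print(h_basin, h_mantle)   # the deep grid can be roughly an order of magnitude coarser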

  11. Multi-criteria evaluation of CMIP5 GCMs for climate change impact analysis

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, Ali; Rana, Arun; Moradkhani, Hamid; Sharma, Ashish

    2017-04-01

    Climate change is expected to have severe impacts on the global hydrological cycle along with the food-water-energy nexus. Currently, there are many climate models used in predicting important climatic variables. Though there have been advances in the field, there are still many problems to be resolved related to reliability, uncertainty, and computing needs, among many others. In the present work, we have analyzed the performance of 20 different global climate models (GCMs) from the Climate Model Intercomparison Project Phase 5 (CMIP5) dataset over the Columbia River Basin (CRB) in the Pacific Northwest USA. We demonstrate a statistical multicriteria approach, using univariate and multivariate techniques, for selecting suitable GCMs to be used for climate change impact analysis in the region. Univariate methods include the mean, standard deviation, coefficient of variation, relative change (variability), Mann-Kendall test, and Kolmogorov-Smirnov test (KS-test); the multivariate methods used were principal component analysis (PCA), singular value decomposition (SVD), canonical correlation analysis (CCA), and cluster analysis. The analysis is performed on raw GCM data, i.e., before bias correction, for the precipitation and temperature climatic variables for all 20 models to capture the reliability and nature of each model at the regional scale. The analysis is based on spatially averaged datasets of GCMs and observations for the period 1970 to 2000. Each GCM is ranked based on its performance evaluated against gridded observational data at various temporal scales (daily, monthly, and seasonal). The results provide insight into each of the methods and the statistical properties they address when ranking GCMs. Further, the evaluation was also performed for raw GCM simulations against different sets of gridded observational data in the area.
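
    One of the univariate criteria listed above, the Kolmogorov-Smirnov test, can be applied to rank models as in the sketch below; the model names and the synthetic precipitation series are placeholders standing in for the spatially averaged GCM and observational data.

      import numpy as np
      from scipy.stats import ks_2samp

      def rank_models_by_ks(obs, model_series):
          """Rank GCMs by the two-sample KS statistic against observations
          (a smaller statistic means a distribution closer to the observed one)."""
          scores = {name: ks_2samp(obs, sim).statistic for name, sim in model_series.items()}
          return sorted(scores.items(), key=lambda kv: kv[1])

      rng = np.random.default_rng(0)
      obs = rng.gamma(2.0, 50.0, size=372)                  # placeholder monthly series, 1970-2000
      models = {"GCM-A": rng.gamma(2.1, 48.0, size=372),
                "GCM-B": rng.gamma(1.5, 70.0, size=372)}
      print(rank_models_by_ks(obs, models))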

  12. A hybrid structured-unstructured grid method for unsteady turbomachinery flow computations

    NASA Technical Reports Server (NTRS)

    Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.

    1993-01-01

    A hybrid grid technique for the solution of 2D, unsteady flows is developed. This technique is capable of handling complex, multiple component geometries in relative motion, such as those encountered in turbomachinery. The numerical approach utilizes a mixed structured-unstructured zonal grid topology along with modeling equations and solution methods that are most appropriate in the individual domains, therefore combining the advantages of both structured and unstructured grid techniques.

  13. Numerical methods for the simulation of complex multi-body flows with applications for the integrated Space Shuttle vehicle

    NASA Technical Reports Server (NTRS)

    Chan, William M.

    1992-01-01

    The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.

  14. Improved nuclear fuel assembly grid spacer

    DOEpatents

    Marshall, John; Kaplan, Samuel

    1977-01-01

    An improved fuel assembly grid spacer and method of retaining the basic fuel rod support elements in position within the fuel assembly containment channel. The improvement involves attaching the grids to the hexagonal channel and forming the basic fuel rod support element into a grid structure, which provides a design that is insensitive to potential channel distortion (ballooning) at high fluence levels. In addition, the improved method eliminates problems associated with component fabrication and assembly.

  15. Smart electric vehicle (EV) charging and grid integration apparatus and methods

    DOEpatents

    Gadh, Rajit; Mal, Siddhartha; Prabhu, Shivanand; Chu, Chi-Cheng; Sheikh, Omar; Chung, Ching-Yen; He, Lei; Xiao, Bingjun; Shi, Yiyu

    2015-05-05

    An expert system manages a power grid wherein charging stations are connected to the power grid, with electric vehicles connected to the charging stations, whereby the expert system selectively backfills power from connected electric vehicles to the power grid through a grid tie inverter (if present) within the charging stations. In more traditional usage, the expert system allows for electric vehicle charging, coupled with user preferences as to charge time, charge cost, and charging station capabilities, without exceeding the power grid capacity at any point. A robust yet accurate state of charge (SOC) calculation method is also presented, whereby an open circuit voltage (OCV) is first calculated from sampled battery voltages and currents, and the SOC is then obtained from a mapping between a previously measured reference OCV (ROCV) and SOC. The OCV-SOC calculation method likely accommodates any battery type with any current profile.
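
    The two-step SOC estimate described above can be sketched as follows; the internal-resistance value, the sign convention, and the reference OCV-SOC table are invented for illustration and are not taken from the patent.

      import numpy as np

      def estimate_soc(v_terminal, current_a, r_internal_ohm, ref_ocv, ref_soc):
          """Recover the open-circuit voltage from a sampled terminal voltage and
          current, then map OCV to SOC with a previously measured reference curve."""
          # Assumed sign convention: current_a > 0 while discharging, so OCV > terminal voltage.
          ocv = v_terminal + current_a * r_internal_ohm
          return np.interp(ocv, ref_ocv, ref_soc)

      # Hypothetical reference curve for a single cell (OCV in volts, SOC in percent).
      ref_ocv = np.array([3.0, 3.3, 3.6, 3.8, 4.0, 4.2])
      ref_soc = np.array([0.0, 20.0, 50.0, 80.0, 95.0, 100.0])
      print(estimate_soc(3.55, 2.0, 0.05, ref_ocv, ref_soc))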

  16. Optimal variable-grid finite-difference modeling for porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Li, Haishan

    2014-12-01

    Numerical modeling of poroelastic waves by the finite-difference (FD) method is more expensive than that of acoustic or elastic waves. To improve the accuracy and computational efficiency of seismic modeling, variable-grid FD methods have been developed. In this paper, we derived optimal staggered-grid finite difference schemes with variable grid-spacing and time-step for seismic modeling in porous media. FD operators with small grid-spacing and time-step are adopted for low-velocity or small-scale geological bodies, while FD operators with big grid-spacing and time-step are adopted for high-velocity or large-scale regions. The dispersion relations of FD schemes were derived based on the plane wave theory, then the FD coefficients were obtained using the Taylor expansion. Dispersion analysis and modeling results demonstrated that the proposed method has higher accuracy with lower computational cost for poroelastic wave simulation in heterogeneous reservoirs.
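
    The paper's dispersion-optimized coefficients are not reproduced here; the sketch below only illustrates the conventional Taylor-expansion step, solving the moment conditions for the weights of an arbitrary (e.g., staggered) stencil.

      import numpy as np
      from math import factorial

      def fd_coefficients(offsets, deriv_order):
          """Finite-difference weights for d^m/dx^m on nodes at offsets*h,
          obtained from the Taylor-series moment conditions."""
          offsets = np.asarray(offsets, dtype=float)
          n = offsets.size
          A = np.array([[s ** k / factorial(k) for s in offsets] for k in range(n)])
          b = np.zeros(n)
          b[deriv_order] = 1.0
          return np.linalg.solve(A, b)   # apply as (1/h**m) * sum(c_j * f(x + s_j*h))

      # Fourth-order staggered-grid weights for the first derivative (offsets in half cells).
      print(fd_coefficients([-1.5, -0.5, 0.5, 1.5], 1))
      # -> approximately [1/24, -9/8, 9/8, -1/24]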

  17. Network gateway security method for enterprise Grid: a literature review

    NASA Astrophysics Data System (ADS)

    Sujarwo, A.; Tan, J.

    2017-03-01

    The computational Grid has brought big computational resources closer to scientists. It enables people to run large computational jobs anytime and anywhere, without physical borders. However, the massive and dispersed set of computer participants, whether acting as users or as computational providers, raises security problems. The challenge is how the security system, especially the component that filters data at the gateway, can operate flexibly depending on the registered Grid participants. This paper surveys what has been done to approach this challenge, in order to find a better, new method for the enterprise Grid. The finding of this paper is a dynamically controlled enterprise firewall that secures Grid resources from unwanted connections, using a new firewall control method and components.

  18. Ion Engine Grid Gap Measurements

    NASA Technical Reports Server (NTRS)

    Soulas, George C.; Frandina, Michael M.

    2004-01-01

    A simple technique for measuring the grid gap of an ion engine's ion optics during startup and steady-state operation was demonstrated with beam extraction. The grid gap at the center of the ion optics assembly was measured with a long distance microscope that was focused onto an alumina pin that protruded through the center accelerator grid aperture and was mechanically attached to the screen grid. This measurement technique was successfully applied to a 30 cm titanium ion optics assembly mounted onto an NSTAR engineering model ion engine. The grid gap and each grid's movement during startup from room temperature to both full and low power were measured. The grid gaps with and without beam extraction were found to be significantly different. The grid gaps at the ion optics center were both significantly smaller than the cold grid gap and different at the two power levels examined. To avoid issues associated with a small grid gap during thruster startup with titanium ion optics, a simple method is to operate the thruster initially without beam extraction to heat the ion optics. Another possible method is to apply high voltage to the grids prior to igniting the discharge because power deposition to the grids from the plasma is lower with beam extraction than without. Further testing would be required to confirm this approach.

  19. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Technical Reports Server (NTRS)

    Kallinderis, Y.

    1995-01-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  20. Method and apparatus for detecting cyber attacks on an alternating current power grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEachern, Alexander; Hofmann, Ronald

    A method and apparatus for detecting cyber attacks on remotely-operable elements of an alternating current distribution grid. Two state estimates of the distribution grid are prepared, one of which uses micro-synchrophasors. A difference between the two state estimates indicates a possible cyber attack.
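
    A minimal sketch of the comparison logic, assuming the two state estimates are already available as per-node vectors; the threshold and the example values are placeholders, and the real apparatus derives one of the estimates from micro-synchrophasor measurements.

      import numpy as np

      def flag_possible_attack(state_scada, state_upmu, threshold):
          """Flag a possible cyber attack when two independently derived state
          estimates of the distribution grid disagree by more than a threshold."""
          residual = np.abs(np.asarray(state_scada) - np.asarray(state_upmu))
          return bool(residual.max() > threshold), residual

      # Placeholder per-node voltage-magnitude estimates (per unit).
      scada = [1.01, 0.99, 1.00, 0.98]
      upmu = [1.01, 0.99, 0.93, 0.98]    # the third node is reported inconsistently
      alarm, residual = flag_possible_attack(scada, upmu, threshold=0.03)
      print(alarm, residual)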

  1. Model based on GRID-derived descriptors for estimating CYP3A4 enzyme stability of potential drug candidates

    NASA Astrophysics Data System (ADS)

    Crivori, Patrizia; Zamora, Ismael; Speed, Bill; Orrenius, Christian; Poggesi, Italo

    2004-03-01

    A number of computational approaches are being proposed for an early optimization of ADME (absorption, distribution, metabolism and excretion) properties to increase the success rate in drug discovery. The present study describes the development of an in silico model able to estimate, from the three-dimensional structure of a molecule, the stability of a compound with respect to human cytochrome P450 (CYP) 3A4 enzyme activity. Stability data were obtained by measuring the amount of unchanged compound remaining after a standardized incubation with human cDNA-expressed CYP3A4. The computational method transforms the three-dimensional molecular interaction fields (MIFs) generated from the molecular structure into descriptors (VolSurf and Almond procedures). The descriptors were correlated to the experimental metabolic stability classes by a partial least squares discriminant procedure. The model was trained using a set of 1800 compounds from the Pharmacia collection and was validated using two test sets: the first including 825 compounds from the Pharmacia collection and the second consisting of 20 known drugs. The model correctly predicted 75% of the first and 85% of the second test set and showed a precision above 86% in correctly selecting metabolically stable compounds. The model appears to be a valuable tool in the design of virtual libraries to bias the selection toward more stable compounds. Abbreviations: ADME - absorption, distribution, metabolism and excretion; CYP - cytochrome P450; MIFs - molecular interaction fields; HTS - high throughput screening; DDI - drug-drug interactions; 3D - three-dimensional; PCA - principal components analysis; CPCA - consensus principal components analysis; PLS - partial least squares; PLSD - partial least squares discriminant; GRIND - grid independent descriptors; GRID - software originally created and developed by Professor Peter Goodford.
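
    The VolSurf/Almond descriptor generation relies on proprietary software and is not reproduced here; the sketch below only illustrates the partial least squares discriminant step on a precomputed descriptor matrix, using scikit-learn's PLSRegression as a stand-in and random data as a placeholder.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # X: MIF-derived descriptors (rows = compounds); y: stability class (1 = stable, 0 = unstable).
      rng = np.random.default_rng(1)
      X = rng.normal(size=(200, 40))                     # placeholder descriptor matrix
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)

      pls = PLSRegression(n_components=3)                # illustrative number of latent variables
      pls.fit(X, y)
      y_class = (pls.predict(X).ravel() > 0.5).astype(int)   # threshold the PLS score
      print("training accuracy:", (y_class == y).mean())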

  2. Extending high-order flux operators on spherical icosahedral grids and their application in a Shallow Water Model for transporting the Potential Vorticity

    NASA Astrophysics Data System (ADS)

    Zhang, Y.

    2017-12-01

    The unstructured formulation of the third/fourth-order flux operators used by the Advanced Research WRF is extended twofold on spherical icosahedral grids. First, the fifth- and sixth-order flux operators of WRF are further extended, and the nominally second- to sixth-order operators are then compared based on the solid body rotation and deformational flow tests. Results show that increasing the nominal order generally leads to smaller absolute errors. Overall, the fifth-order scheme generates the smallest errors in limited and unlimited tests, although it does not enhance the convergence rate. The fifth-order scheme also exhibits smaller sensitivity to the damping coefficient than the third-order scheme. Overall, the even-order schemes have higher limiter sensitivity than the odd-order schemes. Second, a triangular version of these high-order operators is repurposed for transporting the potential vorticity in a space-time-split shallow water framework. Results show that a class of nominally third-order upwind-biased operators generates better results than second- and fourth-order counterparts. The increase of the potential enstrophy over time is suppressed owing to the damping effect. The grid-scale noise in the vorticity is largely alleviated, and the total energy remains conserved. Moreover, models using high-order operators show smaller numerical errors in the vorticity field because of a more accurate representation of the nonlinear Coriolis term. This improvement is especially evident in the Rossby-Haurwitz wave test, in which the fluid is highly rotating. Overall, flux operators with higher damping coefficients, which essentially behave like the Anticipated Potential Vorticity Method, present optimal results.

  3. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  4. Estimating scatter in cone beam CT with striped ratio grids: A preliminary investigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, Scott, E-mail: sshsieh@stanford.edu

    2016-09-15

    Purpose: To propose a new method for estimating scatter in x-ray imaging. Conventional antiscatter grids reject scatter at an efficiency that is constant or slowly varying over the surface of the grid. A striped ratio antiscatter grid, composed of stripes that alternate between high and low grid ratio, could be used instead. Such a striped ratio grid would reduce scatter-to-primary ratio as a conventional grid would, but more importantly, the signal discontinuities at the boundaries of stripes can be used to estimate local scatter content. Methods: Signal discontinuities provide information on scatter, but are contaminated by variation in primary radiation. A nonlinear image processing algorithm is used to estimate the scatter content in the presence of primary variation. We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid. These two scans are processed together to mimic a striped ratio grid. This represents a best case limit of the striped ratio grid, in that the extent of grid ratio modulation is very high and the scatter contrast is maximized. Results: In a uniform cylinder, the striped ratio grid virtually eliminates cupping. Artifacts from scatter are improved in an anthropomorphic phantom. Some banding artifacts are induced by the striped ratio grid. Conclusions: Striped ratio grids could be a simple and effective evolution of conventional antiscatter grids. Construction and validation of a physical prototype remains an important future step.
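
    The paper's nonlinear image-processing algorithm is not reproduced here; the sketch below solves only the idealized two-measurement problem at a stripe boundary, assuming the primary and scatter transmission factors of the two grid ratios are known and that primary and scatter are locally constant across the boundary.

      import numpy as np

      def unmix_primary_scatter(i_high, i_low, tp_high, ts_high, tp_low, ts_low):
          """Idealized unmixing at a stripe boundary:
                 i_high = P * tp_high + S * ts_high
                 i_low  = P * tp_low  + S * ts_low
             Solve the 2x2 system for primary P and scatter S."""
          A = np.array([[tp_high, ts_high],
                        [tp_low, ts_low]])
          b = np.array([i_high, i_low])
          return np.linalg.solve(A, b)

      # Hypothetical transmissions (the high-ratio stripe rejects more scatter).
      print(unmix_primary_scatter(i_high=105.0, i_low=133.0,
                                  tp_high=0.70, ts_high=0.10,
                                  tp_low=0.75, ts_low=0.40))   # -> primary 140, scatter 70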

  5. High-order central ENO finite-volume scheme for hyperbolic conservation laws on three-dimensional cubed-sphere grids

    NASA Astrophysics Data System (ADS)

    Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.

    2015-02-01

    A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.

  6. Direct numerical simulation of particulate flows with an overset grid method

    NASA Astrophysics Data System (ADS)

    Koblitz, A. R.; Lovett, S.; Nikiforakis, N.; Henshaw, W. D.

    2017-08-01

    We evaluate an efficient overset grid method for two-dimensional and three-dimensional particulate flows for small numbers of particles at finite Reynolds number. The rigid particles are discretised using moving overset grids overlaid on a Cartesian background grid. This allows for strongly-enforced boundary conditions and local grid refinement at particle surfaces, thereby accurately capturing the viscous boundary layer at modest computational cost. The incompressible Navier-Stokes equations are solved with a fractional-step scheme which is second-order-accurate in space and time, while the fluid-solid coupling is achieved with a partitioned approach including multiple sub-iterations to increase stability for light, rigid bodies. Through a series of benchmark studies we demonstrate the accuracy and efficiency of this approach compared to other boundary conformal and static grid methods in the literature. In particular, we find that fully resolving boundary layers at particle surfaces is crucial to obtain accurate solutions to many common test cases. With our approach we are able to compute accurate solutions using as little as one third the number of grid points as uniform grid computations in the literature. A detailed convergence study shows a 13-fold decrease in CPU time over a uniform grid test case whilst maintaining comparable solution accuracy.

  7. Use of Moderate-Resolution Imaging Spectroradiometer bidirectional reflectance distribution function products to enhance simulated surface albedos

    NASA Astrophysics Data System (ADS)

    Roesch, Andreas; Schaaf, Crystal; Gao, Feng

    2004-06-01

    Moderate-Resolution Imaging Spectroradiometer (MODIS) surface albedo at high spatial and spectral resolution is compared with other remotely sensed climatologies, ground-based data, and albedos simulated with the European Center/Hamburg 4 (ECHAM4) global climate model at T42 resolution. The study demonstrates the importance of MODIS data in assessing and improving albedo parameterizations in weather forecast and climate models. The remotely sensed PINKER surface albedo climatology follows the MODIS estimates fairly well in both the visible and near-infrared spectra, whereas ECHAM4 simulates high positive albedo biases over snow-covered boreal forests and the Himalayas. In contrast, the ECHAM4 albedo is probably too low over the Sahara sand desert and adjacent steppes. The study clearly indicates that neglecting albedo variations within T42 grid boxes leads to significant errors in the simulated regional climate and horizontal fluxes, mainly in mountainous and/or snow-covered regions. MODIS surface albedo at 0.05° resolution agrees quite well with in situ field measurements collected at Baseline Surface Radiation Network (BSRN) sites during snow-free periods, while significant positive biases are found under snow-covered conditions, mainly due to differences in the vegetation cover at the BSRN site (short grass) and the vegetation within the larger MODIS grid box. Black sky (direct beam) albedo from the MODIS bidirectional reflectance distribution function model captures the diurnal albedo cycle at BSRN sites with sufficient accuracy. The greatest negative biases are generally found when the Sun is low. A realistic approach for relating albedo and zenith angle has been proposed. Detailed evaluations have demonstrated that ignoring the zenith angle dependence may lead to significant errors in the surface energy balance.

  8. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera, and the point cloud model is reconstructed virtually. Because each point of the point cloud lies at precise coordinates in its layer, the points can be classified into grids according to their depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
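
    A schematic of the point-cloud-gridding idea (not the authors' code): points are binned into depth layers, each layer grid is propagated to the hologram plane with an angular-spectrum transfer function evaluated by FFT, and the fields are summed. The wavelength, pixel pitch, grid size, and layer count are illustrative assumptions.

      import numpy as np

      def layered_cgh(points, amplitudes, n=256, pitch=8e-6, wavelength=532e-9, n_layers=8):
          """Bin an (N, 3) point cloud (x, y, z in metres) into depth layers, then
          propagate each layer to z = 0 with the angular-spectrum method and sum."""
          x, y, z = points.T
          edges = np.linspace(z.min(), z.max() + 1e-12, n_layers + 1)
          fx = np.fft.fftfreq(n, d=pitch)
          FX, FY = np.meshgrid(fx, fx)
          arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
          kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))   # evanescent part clamped
          holo = np.zeros((n, n), dtype=complex)
          for k in range(n_layers):
              sel = (z >= edges[k]) & (z < edges[k + 1])
              if not sel.any():
                  continue
              layer = np.zeros((n, n), dtype=complex)
              ix = np.clip((x[sel] / pitch + n // 2).astype(int), 0, n - 1)
              iy = np.clip((y[sel] / pitch + n // 2).astype(int), 0, n - 1)
              np.add.at(layer, (iy, ix), amplitudes[sel])
              z_mid = 0.5 * (edges[k] + edges[k + 1])       # propagate from layer depth to z = 0
              holo += np.fft.ifft2(np.fft.fft2(layer) * np.exp(1j * kz * z_mid))
          return holo

      rng = np.random.default_rng(0)
      pts = np.column_stack([rng.uniform(-5e-4, 5e-4, 50),
                             rng.uniform(-5e-4, 5e-4, 50),
                             rng.uniform(0.05, 0.10, 50)])
      hologram = layered_cgh(pts, np.ones(50))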

  9. Reanalysis comparisons of upper tropospheric-lower stratospheric jets and multiple tropopauses

    NASA Astrophysics Data System (ADS)

    Manney, Gloria L.; Hegglin, Michaela I.; Lawrence, Zachary D.; Wargan, Krzysztof; Millán, Luis F.; Schwartz, Michael J.; Santee, Michelle L.; Lambert, Alyn; Pawson, Steven; Knosp, Brian W.; Fuller, Ryan A.; Daffer, William H.

    2017-09-01

    The representation of upper tropospheric-lower stratospheric (UTLS) jet and tropopause characteristics is compared in five modern high-resolution reanalyses for 1980 through 2014. Climatologies of upper tropospheric jet, subvortex jet (the lowermost part of the stratospheric vortex), and multiple tropopause frequency distributions in MERRA (Modern-Era Retrospective analysis for Research and Applications), ERA-I (ERA-Interim; the European Centre for Medium-Range Weather Forecasts, ECMWF, interim reanalysis), JRA-55 (the Japanese 55-year Reanalysis), and CFSR (the Climate Forecast System Reanalysis) are compared with those in MERRA-2. Differences between alternate products from individual reanalysis systems are assessed; in particular, a comparison of CFSR data on model and pressure levels highlights the importance of vertical grid spacing. Most of the differences in distributions of UTLS jets and multiple tropopauses are consistent with the differences in assimilation model grids and resolution - for example, ERA-I (with coarsest native horizontal resolution) typically shows a significant low bias in upper tropospheric jets with respect to MERRA-2, and JRA-55 (the Japanese 55-year Reanalysis) a more modest one, while CFSR (with finest native horizontal resolution) shows a high bias with respect to MERRA-2 in both upper tropospheric jets and multiple tropopauses. Vertical temperature structure and grid spacing are especially important for multiple tropopause characterizations. Substantial differences between MERRA and MERRA-2 are seen in mid- to high-latitude Southern Hemisphere (SH) winter upper tropospheric jets and multiple tropopauses as well as in the upper tropospheric jets associated with tropical circulations during the solstice seasons; some of the largest differences from the other reanalyses are seen in the same times and places. Very good qualitative agreement among the reanalyses is seen between the large-scale climatological features in UTLS jet and multiple tropopause distributions. Quantitative differences may, however, have important consequences for transport and variability studies. Our results highlight the importance of considering reanalyses differences in UTLS studies, especially in relation to resolution and model grids; this is particularly critical when using high-resolution reanalyses as an observational reference for evaluating global chemistry-climate models.

  10. The physics, performance and predictions of the PEGASES ion-ion thruster

    NASA Astrophysics Data System (ADS)

    Aanesland, Ane

    2014-10-01

    Electric propulsion (EP) is now used systematically in space applications (due to the fuel and lifetime economy), to the extent that EP is now recognized as the next generation space technology. The use of EP systems has, however, been limited to attitude control of GEO-stationary satellites and scientific missions. Now, the community envisages the use of EP for a variety of other applications as well, such as orbit transfer maneuvers, satellites in low altitudes, space debris removal, cube-sat control, and challenging scientific missions close to and far from Earth. For this we need a platform of EP systems providing much more variety in performance than what classical Hall and gridded thrusters can provide alone. PEGASES is a gridded thruster that can be an alternative for some new applications in space, in particular for space debris removal. Unlike classical ion thrusters, here positive and negative ions are alternately accelerated to produce thrust. In this presentation we will look at the fundamental aspects of PEGASES. The emphasis will be put on our current understanding, obtained via analytical models, PIC simulations and experimental measurements, of the alternate extraction and acceleration process. We show that at low grid bias frequencies (10s of kHz), the system can be described as a sequence of negative and positive ion packets accelerated within a classical DC mode. Here secondary electrons created in the downstream chamber play an important role in the beam space charge compensation. At higher frequencies (100s of kHz) the transit time of the ions in the grid gap becomes comparable to the bias period, leading to an "AC acceleration mode." Here the beam is fully space charge compensated and the ion energy and current are functions of the applied frequency and waveform. A generalization of the Child-Langmuir space charge limited law is developed for pulsed voltages and allows evaluating the optimal parameter space and performance of PEGASES. This work received financial state aid managed by the Agence Nationale de la Recherche under the reference ANR-2011-BS09-40 (EPIC) and ANR-11-IDEX-0004-02 (Plas@Par).
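
    The generalization of the Child-Langmuir law to pulsed voltages developed for PEGASES is not reproduced here; the sketch below only evaluates the classical DC space-charge-limited current density for singly charged ions, with illustrative gap and voltage values.

      from math import sqrt

      EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
      E_CHARGE = 1.602176634e-19   # elementary charge, C

      def child_langmuir_j(voltage_v, gap_m, ion_mass_kg):
          """Classical DC space-charge-limited current density (A/m^2) across a
          planar gap for singly charged ions: J = (4*eps0/9)*sqrt(2q/M)*V^1.5/d^2."""
          return (4.0 * EPS0 / 9.0) * sqrt(2.0 * E_CHARGE / ion_mass_kg) * voltage_v ** 1.5 / gap_m ** 2

      # Illustrative numbers: an ion of ~127 amu, a 1 mm grid gap and a 300 V bias.
      m_ion = 127 * 1.66053906660e-27
      print(child_langmuir_j(300.0, 1.0e-3, m_ion))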

  11. A high-order staggered meshless method for elliptic problems

    DOE PAGES

    Trask, Nathaniel; Perego, Mauro; Bochev, Pavel Blagoveston

    2017-03-21

    Here, we present a new meshless method for scalar diffusion equations, which is motivated by their compatible discretizations on primal-dual grids. Unlike the latter though, our approach is truly meshless because it only requires the graph of nearby neighbor connectivity of the discretization points. This graph defines a local primal-dual grid complex with a virtual dual grid, in the sense that specification of the dual metric attributes is implicit in the method's construction. Our method combines a topological gradient operator on the local primal grid with a generalized moving least squares approximation of the divergence on the local dual grid. We show that the resulting approximation of the div-grad operator maintains polynomial reproduction to arbitrary orders and yields a meshless method, which attains $O(h^{m})$ convergence in both $L^2$- and $H^1$-norms, similar to mixed finite element methods. We demonstrate this convergence on curvilinear domains using manufactured solutions in two and three dimensions. Application of the new method to problems with discontinuous coefficients reveals solutions that are qualitatively similar to those of compatible mesh-based discretizations.

  12. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing some part of the computation on a coarsened grid. We apply the CGP to pressure projection methods for finite element-based incompressible flow simulations. Based on it, the predicted velocity field data is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by Galerkin spectral element using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are driven by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented. Laboratory for Fluid Dynamics in Nature.
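
    A one-dimensional, periodic schematic of the CGP idea only (not the paper's finite element implementation): the source term of the pressure Poisson equation is restricted to a coarser grid by injection, the coarse problem is solved (here with an FFT), and the pressure is prolonged back to the fine grid by linear interpolation.

      import numpy as np

      def cgp_pressure(rhs_fine, coarsen=2, length=1.0):
          """Solve -d2p/dx2 = rhs on a periodic 1-D grid, but on a grid coarsened
          by the given factor, then prolong the pressure back to the fine grid."""
          n_fine = rhs_fine.size
          rhs_coarse = rhs_fine[::coarsen]                  # restriction by injection
          n_coarse = rhs_coarse.size
          k = 2.0 * np.pi * np.fft.fftfreq(n_coarse, d=length / n_coarse)
          rhs_hat = np.fft.fft(rhs_coarse)
          p_hat = np.zeros_like(rhs_hat)
          p_hat[1:] = rhs_hat[1:] / (k[1:] ** 2)            # -d2/dx2 becomes k^2 in Fourier space
          p_coarse = np.real(np.fft.ifft(p_hat))            # zero-mean coarse-grid pressure
          x_coarse = np.linspace(0.0, length, n_coarse, endpoint=False)
          x_fine = np.linspace(0.0, length, n_fine, endpoint=False)
          return np.interp(x_fine, x_coarse, p_coarse, period=length)   # prolongation

      # Manufactured test: rhs = sin(2*pi*x) should give p = sin(2*pi*x) / (2*pi)**2.
      x = np.linspace(0.0, 1.0, 128, endpoint=False)
      p = cgp_pressure(np.sin(2 * np.pi * x), coarsen=2)
      print(np.max(np.abs(p - np.sin(2 * np.pi * x) / (2 * np.pi) ** 2)))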

  13. Sources of method bias in social science research and recommendations on how to control it.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Podsakoff, Nathan P

    2012-01-01

    Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms "method" and "method bias" and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.

  14. The power grid monitoring promotion of Liaoning December 14th accident

    NASA Astrophysics Data System (ADS)

    Zhou, Zhi; Gao, Ziji; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Wang, Mingkai; Qu, Zhi; Sun, Chenguang

    2018-02-01

    This paper introduces the main responsibilities of power grid monitoring and the accident of Liaoning Power Grid 500kV Xujia transformer substation at December 14th, 2016. This paper analyzes the problems exposed in this accident from the aspects of abnormal information judgment, fault information collection, auxiliary video monitoring, online monitoring of substation equipment, puts forward the corresponding improvement methods and summarizes the methods of improving the professional level of power grid equipment monitoring.

  15. Construction method of pre assembled unit of bolt sphere grid

    NASA Astrophysics Data System (ADS)

    Hu, L. W.; Guo, F. L.; Wang, J. L.; Bu, F. M.

    2018-03-01

    The traditional construction of bolt sphere grid has many disadvantages, such as high cost, large amount of work at high altitude and long construction period, in order to make up for these shortcomings, in this paper, a new and applicable construction method is explored: setting up local scaffolding, installing the bolt sphere grid starting frame on the local scaffolding, then the pre assembled unit of bolt sphere grid is assembled on the ground, using small hoisting equipment to lift pre assembled unit to high altitude and install. Compared with the traditional installation method, the construction method has strong practicability and high economic efficiency, and has achieved good social and economic benefits.

  16. Objective tropical cyclone extratropical transition detection in high-resolution reanalysis and climate model data

    DOE PAGES

    Zarzycki, Colin M.; Thatcher, Diana R.; Jablonowski, Christiane

    2017-01-22

    This paper describes an objective technique for detecting the extratropical transition (ET) of tropical cyclones (TCs) in high-resolution gridded climate data. The algorithm is based on previous observational studies using phase spaces to define the symmetry and vertical thermal structure of cyclones. Storm tracking is automated, allowing for direct analysis of climate data. Tracker performance in the North Atlantic is assessed using 23 years of data from the variable-resolution Community Atmosphere Model (CAM) at two different resolutions (DX 55 km and 28 km), the Climate Forecast System Reanalysis (CFSR, DX 38 km), and the ERA-Interim Reanalysis (ERA-I, DX 80 km).more » The mean spatiotemporal climatologies and seasonal cycles of objectively detected ET in the observationally constrained CFSR and ERA-I are well matched to previous observational studies, demonstrating the capability of the scheme to adequately find events. High resolution CAM reproduces TC and ET statistics that are in general agreement with reanalyses. One notable model bias, however, is significantly longer time between ET onset and ET completion in CAM, particularly for TCs that lose symmetry prior to developing a cold-core structure and becoming extratropical cyclones, demonstrating the capability of this method to expose model biases in simulated cyclones beyond the tropical phase.« less

  17. Elliptic generation of composite three-dimensional grids about realistic aircraft

    NASA Technical Reports Server (NTRS)

    Sorenson, R. L.

    1986-01-01

    An elliptic method for generating composite grids about realistic aircraft is presented. A body-conforming grid is first generated about the entire aircraft by the solution of Poisson's differential equation. This grid has relatively coarse spacing, and it covers the entire physical domain. At boundary surfaces, cell size is controlled and cell skewness is nearly eliminated by inhomogeneous terms, which are found automatically by the program. Certain regions of the grid in which high gradients are expected, and which map into rectangular solids in the computational domain, are then designated for zonal refinement. Spacing in the zonal grids is reduced by adding points with a simple, algebraic scheme. Details of the grid generation method are presented along with results of the present application, a wing-body configuration based on the F-16 fighter aircraft.
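
    A minimal sketch of the elliptic flavor of such methods, reduced to a Laplacian-smoothing iteration (i.e., the Poisson system with the inhomogeneous control terms dropped, which is a strong simplification of the method described above):

```python
import numpy as np

def elliptic_smooth(x, y, iters=200):
    """Smooth a structured grid with a Laplace-type update: every interior node
    moves toward the average of its four neighbors; boundary nodes stay fixed.
    x, y are 2-D arrays of node coordinates."""
    for _ in range(iters):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```

    Applied to an algebraically generated initial grid, this evens out cell sizes; the full method instead solves Poisson equations whose automatically determined inhomogeneous terms control spacing and skewness at the boundaries.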

  18. High voltage spark carbon fiber detection system

    NASA Technical Reports Server (NTRS)

    Yang, L. C.

    1980-01-01

    The pulse discharge technique was used to determine the length and density of carbon fibers released from fiber composite materials during a fire or aircraft accident. Specifications are given for the system which uses the ability of a carbon fiber to initiate spark discharge across a high voltage biased grid to achieve accurate counting and sizing of fibers. The design of the system was optimized, and prototype hardware proved satisfactory in laboratory and field tests.

  19. Method of making dished ion thruster grids

    NASA Technical Reports Server (NTRS)

    Banks, B. A. (Inventor)

    1975-01-01

    A pair of flat grid blanks are clamped together at their edges with an impervious metal sheet on top. All of the blanks and sheets are dished simultaneously by forcing fluid to inflate an elastic sheet which contacts the bottom grid blank. A second impervious metal sheet is inserted between the two grid blanks if the grids have high percentage open areas. The dished grids are stress relieved simultaneously.

  20. CHASE-PL Climate Projection dataset over Poland - bias adjustment of EURO-CORDEX simulations

    NASA Astrophysics Data System (ADS)

    Mezghani, Abdelkader; Dobler, Andreas; Haugen, Jan Erik; Benestad, Rasmus E.; Parding, Kajsa M.; Piniewski, Mikołaj; Kardel, Ignacy; Kundzewicz, Zbigniew W.

    2017-11-01

    The CHASE-PL (Climate change impact assessment for selected sectors in Poland) Climate Projections - Gridded Daily Precipitation and Temperature dataset 5 km (CPLCP-GDPT5) consists of projected daily minimum and maximum air temperatures and precipitation totals of nine EURO-CORDEX regional climate model outputs bias corrected and downscaled to a 5 km × 5 km grid. Simulations of one historical period (1971-2000) and two future horizons (2021-2050 and 2071-2100) assuming two representative concentration pathways (RCP4.5 and RCP8.5) were produced. We used the quantile mapping method and corrected any systematic seasonal bias in these simulations before assessing the changes in annual and seasonal means of precipitation and temperature over Poland. Projected changes estimated from the multi-model ensemble mean showed that annual means of temperature are expected to increase steadily by 1 °C until 2021-2050 and by 2 °C until 2071-2100 assuming the RCP4.5 emission scenario. Assuming the RCP8.5 emission scenario, this can reach up to almost 4 °C by 2071-2100. Similarly to temperature, projected changes in regional annual means of precipitation are expected to increase by 6 to 10 % and by 8 to 16 % for the two future horizons and RCPs, respectively. Similarly, individual model simulations also exhibited warmer and wetter conditions on an annual scale, showing an intensification of the magnitude of the change at the end of the 21st century. The same applied for projected changes in seasonal means of temperature showing a higher winter warming rate by up to 0.5 °C compared to the other seasons. However, projected changes in seasonal means of precipitation by the individual models largely differ and are sometimes inconsistent, exhibiting spatial variations which depend on the selected season, location, future horizon, and RCP. The overall range of the 90 % confidence interval predicted by the ensemble of multi-model simulations was found to likely vary between -7 % (projected for summer assuming the RCP4.5 emission scenario) and +40 % (projected for winter assuming the RCP8.5 emission scenario) by the end of the 21st century. Finally, this high-resolution bias-corrected product can serve as a basis for climate change impact and adaptation studies for many sectors over Poland. The CPLCP-GDPT5 dataset is publicly available at http://dx.doi.org/10.4121/uuid:e940ec1a-71a0-449e-bbe3-29217f2ba31d.
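
    A minimal sketch of the empirical quantile mapping named in the abstract (illustrative only; the CHASE-PL chain also involves downscaling to the 5 km grid and seasonal stratification):

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: replace each model value by the observed
    value at the same empirical quantile of the historical distributions."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)
    # Locate each future value on the model's historical CDF, then map it
    # through the observed quantile function.
    cdf_vals = np.interp(model_future, model_q, quantiles)
    return np.interp(cdf_vals, quantiles, obs_q)
```

    Values outside the historical model range are clipped to the extreme quantiles by np.interp; practical implementations treat that tail extrapolation explicitly.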

  1. Probabilistic precipitation nowcasting based on an extrapolation of radar reflectivity and an ensemble approach

    NASA Astrophysics Data System (ADS)

    Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch

    2017-09-01

    A new method for the probabilistic nowcasting of instantaneous rain rates (ENS), based on the ensemble technique and extrapolation along Lagrangian trajectories of the current radar reflectivity, is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold at a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by calibrating the forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: the combined method (COM) and the neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points around the point of interest as ensemble members, and the COM ensemble comprised the united ensemble members of ENS and NEI. The results showed that the calibration technique significantly reduces the bias of the probability forecasts by including additional uncertainties that correspond to the processes neglected during the extrapolation. In addition, the calibration can also be used to find the maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable ensemble size is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
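
    A minimal sketch of turning an ensemble of extrapolated rain rates into exceedance probabilities and scoring them with the Brier score (illustrative only; the thresholds follow the abstract, the data are synthetic):

```python
import numpy as np

def exceedance_probability(ensemble, threshold):
    """Fraction of ensemble members exceeding a rain-rate threshold, per grid
    point. ensemble shape: (members, points)."""
    return (ensemble > threshold).mean(axis=0)

def brier_score(prob_forecast, observed_event):
    """Mean squared difference between forecast probability and the binary
    observed outcome (0/1)."""
    return np.mean((prob_forecast - observed_event) ** 2)

# Synthetic illustration: 100 members, 500 grid points, 1 mm/h threshold.
rng = np.random.default_rng(1)
ens = rng.gamma(shape=0.5, scale=2.0, size=(100, 500))
obs = rng.gamma(shape=0.5, scale=2.0, size=500)
p = exceedance_probability(ens, 1.0)
print(brier_score(p, (obs > 1.0).astype(float)))
```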

  2. Efficient grid-based techniques for density functional theory

    NASA Astrophysics Data System (ADS)

    Rodriguez-Hernandez, Juan Ignacio

    Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models of these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The supreme goal of much research in quantum chemistry---and the topic of this dissertation---is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground-state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1, 2, 3, 6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.

  3. Computing Aerodynamic Performance of a 2D Iced Airfoil: Blocking Topology and Grid Generation

    NASA Technical Reports Server (NTRS)

    Chi, X.; Zhu, B.; Shih, T. I.-P.; Slater, J. W.; Addy, H. E.; Choo, Yung K.; Lee, Chi-Ming (Technical Monitor)

    2002-01-01

    The ice accreted on airfoils can have enormously complicated shapes with multiple protruding horns and feathers. In this paper, several blocking topologies are proposed and evaluated for their ability to produce high-quality structured multi-block grid systems. A transition-layer grid is introduced to ensure that jaggedness in the ice-surface geometry does not propagate into the domain. This is important for grid-generation methods based on hyperbolic PDEs (partial differential equations) and algebraic transfinite interpolation. A 'thick' wrap-around grid is introduced to ensure that grid lines clustered next to solid walls do not propagate as streaks of tightly packed grid lines into the interior of the domain along block boundaries. For ice shapes that are not too complicated, a method is presented for generating high-quality single-block grids. To demonstrate the usefulness of the methods developed, grids and CFD solutions were generated for two iced airfoils: the NLF0414 airfoil with and without the 623-ice shape and the B575/767 airfoil with and without the 145m-ice shape. To validate the computations, the computed lift coefficients as a function of angle of attack were compared with available experimental data. The ice shapes and the blocking topologies were prepared with NASA Glenn's SmaggIce software. The grid systems were generated using a four-boundary method based on Hermite interpolation with controls on clustering, orthogonality next to walls, and C continuity across block boundaries. The flow was modeled by the ensemble-averaged compressible Navier-Stokes equations, closed by the shear-stress transport turbulence model in which the integration is to the wall. All solutions were generated using the NPARC WIND code.
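
    A minimal sketch of algebraic transfinite interpolation (a Coons patch) of the kind mentioned above for filling a block from its four boundary curves; production grids add clustering and orthogonality controls on top of this:

```python
import numpy as np

def transfinite_interpolation(bottom, top, left, right):
    """Fill a structured block from four boundary curves.
    bottom/top: arrays of shape (ni, 2); left/right: shape (nj, 2).
    Corner points of the curves are assumed to match."""
    ni, nj = len(bottom), len(left)
    s = np.linspace(0.0, 1.0, ni)[:, None, None]   # parameter along i
    t = np.linspace(0.0, 1.0, nj)[None, :, None]   # parameter along j
    B, T = bottom[:, None, :], top[:, None, :]
    L, R = left[None, :, :], right[None, :, :]
    # Linear blending of opposite edges minus the bilinear corner correction.
    corners = ((1 - s) * (1 - t) * bottom[0] + s * (1 - t) * bottom[-1]
               + (1 - s) * t * top[0] + s * t * top[-1])
    return (1 - t) * B + t * T + (1 - s) * L + s * R - corners
```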

  4. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  5. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne E.

    2013-01-01

    We present a coarse-grid projection (CGP) method for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. The CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. After solving the Poisson equation on a coarsened grid, an interpolation scheme is used to obtain the fine data for subsequent time stepping on the full grid. A particular version of the method is applied here to the vorticity-stream function, primitive variable, and vorticity-velocity formulations of incompressible Navier-Stokes equations. We compute several benchmark flow problems on two-dimensional Cartesian and non-Cartesian grids, as well as a three-dimensional flow problem. The method is found to accelerate these computations while retaining a level of accuracy close to that of the fine resolution field, which is significantly better than the accuracy obtained for a similar computation performed solely using a coarse grid. A linear acceleration rate is obtained for all the cases we consider due to the linear-cost elliptic Poisson solver used, with reduction factors in computational time between 2 and 42. The computational savings are larger when a suboptimal Poisson solver is used. We also find that the computational savings increase with increasing distortion ratio on non-Cartesian grids, making the CGP method a useful tool for accelerating generalized curvilinear incompressible flow solvers.

  6. A pseudospectral Legendre method for hyperbolic equations with an improved stability condition

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1986-01-01

    A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N^2 (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.
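
    A minimal sketch of obtaining grid points related to the zeros of Legendre polynomials (the Gauss-Legendre nodes; the paper's construction differs in detail) with NumPy:

```python
import numpy as np

def legendre_grid(n):
    """Return the n zeros of the degree-n Legendre polynomial on [-1, 1]
    (the Gauss-Legendre nodes) and their quadrature weights."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return nodes, weights

nodes, _ = legendre_grid(16)
print(nodes.min(), nodes.max(), np.diff(nodes).min())  # node span and closest spacing
```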

  7. A pseudospectral Legendre method for hyperbolic equations with an improved stability condition

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, H.

    1984-01-01

    A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N^2 (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.

  8. Qualitative Life-Grids: A Proposed Method for Comparative European Educational Research

    ERIC Educational Resources Information Center

    Abbas, Andrea; Ashwin, Paul; McLean, Monica

    2013-01-01

    Drawing upon their large three-year mixed-method study comparing four English university sociology departments, the authors demonstrate the benefits to be gained from concisely recording biographical stories on life-grids. They argue that life-grids have key benefits which are important for comparative European educational research. Some of these…

  9. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux-difference-splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  10. Sampling Scattered Data Onto Rectangular Grids for Volume Visualization

    DTIC Science & Technology

    1989-12-01

    [Only fragments of the scanned report abstract are legible: table-of-contents entries such as "Building A Rectangular Grid" and "Sampling Methods", and partial sentences noting that methods for constructing three-dimensional numerical grids have been developed for computational fluid flow analysis, and that finite element analysis is useful in fields other than fluid flow analysis, which motivates the regular structure of rectangular grids for volume visualization.]

  11. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, Josef; Magnera, Thomas F.; David, Donald E.; Harrison, Robin M.

    1999-01-01

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures.

  12. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
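
    A minimal sketch of the two-grid correction cycle that underlies multigrid acceleration (a strong simplification; the FMG/FAS algorithm of the report transfers the full approximation and handles the nonlinear pressure-correction system). It assumes a 1-D Poisson problem with an odd number of nodes and homogeneous Dirichlet boundaries:

```python
import numpy as np

def jacobi_smooth(u, f, h, sweeps=3):
    """Jacobi smoothing for the 1-D Poisson problem -u'' = f."""
    for _ in range(sweeps):
        u[1:-1] = 0.5 * (u[2:] + u[:-2] + h**2 * f[1:-1])
    return u

def two_grid_cycle(u, f, h):
    """Smooth, restrict the residual, solve the coarse correction, prolong, smooth."""
    u = jacobi_smooth(u, f, h)
    res = np.zeros_like(u)
    res[1:-1] = f[1:-1] + (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2   # residual of -u'' = f
    res_c = res[::2]                                   # restriction by injection
    n_c = len(res_c)
    A_c = (np.diag(2.0 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
           - np.diag(np.ones(n_c - 3), -1)) / (2 * h) ** 2
    e_c = np.zeros(n_c)
    e_c[1:-1] = np.linalg.solve(A_c, res_c[1:-1])      # coarse-grid correction
    x_f = np.linspace(0.0, 1.0, len(u))
    u += np.interp(x_f, x_f[::2], e_c)                 # prolongation + correction
    return jacobi_smooth(u, f, h)

# Usage: 65 nodes on [0, 1], f = 1, two correction cycles.
n = 65
u, f, h = np.zeros(n), np.ones(n), 1.0 / (n - 1)
for _ in range(2):
    u = two_grid_cycle(u, f, h)
```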

  13. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, J.; Magnera, T.F.; David, D.E.; Harrison, R.M.

    1999-03-02

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures. 9 figs.

  14. Acceleration of incremental-pressure-correction incompressible flow computations using a coarse-grid projection method

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne

    2016-11-01

    Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.

  15. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl NPP accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-03-01

    The coupled model LMDzORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that purpose, two horizontal resolutions were deployed in the model, a regular grid of 2.5°×1.25° and the same grid stretched over Europe to reach a resolution of 0.45°×0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels, respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but realistic source injection heights (RG19L); in the third the grid is regular and the vertical resolution is 39 levels (RG39L); and finally, the simulations are extended to the stretched grid with 19 vertical levels (Z19L). The best choice for the model validation was the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986. This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the released source. However, the best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to the Atlas), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-level run, owing to the increase in the number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good accordance with what was previously reported and in the same range as for the recent accident in Japan. The good performance of the LMDzORINCA model for 137Cs reinforces the importance of atmospheric modeling in emergency cases for gathering information to protect the population from the adverse effects of radiation.

  16. Simulations of the transport and deposition of 137Cs over Europe after the Chernobyl Nuclear Power Plant accident: influence of varying emission-altitude and model horizontal and vertical resolution

    NASA Astrophysics Data System (ADS)

    Evangeliou, N.; Balkanski, Y.; Cozic, A.; Møller, A. P.

    2013-07-01

    The coupled model LMDZORINCA has been used to simulate the transport and the wet and dry deposition of the radioactive tracer 137Cs after accidental releases. For that purpose, two horizontal resolutions were deployed in the model, a regular grid of 2.5° × 1.27° and the same grid stretched over Europe to reach a resolution of 0.66° × 0.51°. The vertical dimension is represented with two different resolutions, 19 and 39 levels respectively, extending up to the mesopause. Four different simulations are presented in this work: the first uses the regular grid over 19 vertical levels assuming that the emissions took place at the surface (RG19L(S)); the second also uses the regular grid over 19 vertical levels but realistic source injection heights (RG19L); in the third the grid is regular and the vertical resolution is 39 levels (RG39L); and finally, the simulations are extended to the stretched grid with 19 vertical levels (Z19L). The model is validated against the Chernobyl accident, which occurred in Ukraine (ex-USSR) on 26 April 1986, using the emission inventory from Brandt et al. (2002). This accident has been widely studied since 1986, and a large database has been created containing measurements of atmospheric activity concentration and total cumulative deposition of 137Cs from most of the European countries. According to the results, the model predicted the transport and deposition of the radioactive tracer efficiently and accurately, presenting low biases in activity concentrations and deposition inventories, despite the large uncertainties in the intensity of the released source. The best agreement with observations was obtained using the highest horizontal resolution of the model (Z19L run). The model managed to predict the radioactive contamination in most of the European regions (similar to De Cort et al., 1998), and also the arrival times of the radioactive fallout. As regards the vertical resolution, the largest biases were obtained for the 39-level run, owing to the increase in the number of levels in conjunction with the uncertainty of the source term. Moreover, the ecological half-life of 137Cs in the atmosphere after the accident ranged between 6 and 9 days, which is in good accordance with what was previously reported and in the same range as for the recent accident in Japan. The good performance of the LMDZORINCA model for 137Cs reinforces the importance of atmospheric modelling in emergency cases for gathering information to protect the population from the adverse effects of radiation.

  17. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.
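
    A minimal sketch of quadrature-based scenario generation for a single normal random variable using Gauss-Hermite nodes (the paper's sparse-grid construction extends this idea to many dimensions via Smolyak-type combinations; the numbers below are illustrative):

```python
import numpy as np

def normal_scenarios(n_nodes, mean=0.0, std=1.0):
    """Scenarios and probabilities for N(mean, std^2) from Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)  # probabilists' Hermite
    scenarios = mean + std * nodes
    probs = weights / weights.sum()
    return scenarios, probs

# Expectation of a smooth function with very few scenarios:
s, p = normal_scenarios(7)
print(np.sum(p * np.exp(s)))   # ~ e^{1/2} = 1.6487..., the exact E[exp(Z)]
```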

  18. CVD-MPFA full pressure support, coupled unstructured discrete fracture-matrix Darcy-flux approximations

    NASA Astrophysics Data System (ADS)

    Ahmed, Raheel; Edwards, Michael G.; Lamine, Sadok; Huisman, Bastiaan A. H.; Pal, Mayur

    2017-11-01

    Two novel control-volume methods are presented for flow in fractured media, and involve coupling the control-volume distributed multi-point flux approximation (CVD-MPFA) constructed with full pressure support (FPS), to two types of discrete fracture-matrix approximation for simulation on unstructured grids; (i) involving hybrid grids and (ii) a lower dimensional fracture model. Flow is governed by Darcy's law together with mass conservation both in the matrix and the fractures, where large discontinuities in permeability tensors can occur. Finite-volume FPS schemes are more robust than the earlier CVD-MPFA triangular pressure support (TPS) schemes for problems involving highly anisotropic homogeneous and heterogeneous full-tensor permeability fields. We use a cell-centred hybrid-grid method, where fractures are modelled by lower-dimensional interfaces between matrix cells in the physical mesh but expanded to equi-dimensional cells in the computational domain. We present a simple procedure to form a consistent hybrid-grid locally for a dual-cell. We also propose a novel hybrid-grid for intersecting fractures, for the FPS method, which reduces the condition number of the global linear system and leads to larger time steps for tracer transport. The transport equation for tracer flow is coupled with the pressure equation and provides flow parameter assessment of the fracture models. Transport results obtained via TPS and FPS hybrid-grid formulations are compared with the corresponding results of fine-scale explicit equi-dimensional formulations. The results show that the hybrid-grid FPS method applies to general full-tensor fields and provides improved robust approximations compared to the hybrid-grid TPS method for fractured domains, for both weakly anisotropic permeability fields and very strong anisotropic full-tensor permeability fields where the TPS scheme exhibits spurious oscillations. The hybrid-grid FPS formulation is extended to compressible flow and the results demonstrate the method is also robust for transient flow. Furthermore, we present FPS coupled with a lower-dimensional fracture model, where fractures are strictly lower-dimensional in the physical mesh as well as in the computational domain. We present a comparison of the hybrid-grid FPS method and the lower-dimensional fracture model for several cases of isotropic and anisotropic fractured media which illustrate the benefits of the respective methods.

  19. Transformation of two and three-dimensional regions by elliptic systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1991-01-01

    A reliable linear system is presented for grid generation in 2-D and 3-D. The method is robust in the sense that convergence is guaranteed but is not as reliable as other nonlinear elliptic methods in generating nonfolding grids. The construction of nonfolding grids depends on having reasonable approximations of cell aspect ratios and an appropriate distribution of grid points on the boundary of the region. Some guidelines are included on approximating the aspect ratios, but little help is offered on setting up the boundary grid other than to say that in 2-D the boundary correspondence should be close to that generated by a conformal mapping. It is assumed that the functions which control the grid distribution depend only on the computational variables and not on the physical variables. Whether this is actually the case depends on how the grid is constructed. In a dynamic adaptive procedure where the grid is constructed in the process of solving a fluid flow problem, the grid is usually updated at fixed iteration counts using the current value of the control function. Since the control function is not being updated during the iteration of the grid equations, the grid construction is a linear procedure. However, in the case of a static adaptive procedure where a trial solution is computed and used to construct an adaptive grid, the control functions may be recomputed at every step of the grid iteration.

  20. On the suitability of current atmospheric reanalyses for regional warming studies over China

    NASA Astrophysics Data System (ADS)

    Zhou, Chunlüe; He, Yanyi; Wang, Kaicun

    2018-06-01

    Reanalyses are widely used because they add value to routine observations by generating physically or dynamically consistent and spatiotemporally complete atmospheric fields. Existing studies include extensive discussions of the temporal suitability of reanalyses in studies of global change. This study adds to this existing work by investigating the suitability of reanalyses in studies of regional climate change, in which land-atmosphere interactions play a comparatively important role. In this study, surface air temperatures (Ta) from 12 current reanalysis products are investigated; in particular, the spatial patterns of trends in Ta are examined using homogenized measurements of Ta made at ˜ 2200 meteorological stations in China from 1979 to 2010. The results show that ˜ 80 % of the mean differences in Ta between the reanalyses and the in situ observations can be attributed to the differences in elevation between the stations and the model grids. Thus, the Ta climatologies display good skill, and these findings rebut previous reports of biases in Ta. However, the biases in the Ta trends in the reanalyses diverge spatially (standard deviation = 0.15-0.30 °C decade-1 using 1° × 1° grid cells). The simulated biases in the trends in Ta correlate well with those of precipitation frequency, surface incident solar radiation (Rs) and atmospheric downward longwave radiation (Ld) among the reanalyses (r = -0.83, 0.80 and 0.77; p < 0.1) when the spatial patterns of these variables are considered. The biases in the trends in Ta over southern China (on the order of -0.07 °C decade-1) are caused by biases in the trends in Rs, Ld and precipitation frequency on the order of 0.10, -0.08 and -0.06 °C decade-1, respectively. The biases in the trends in Ta over northern China (on the order of -0.12 °C decade-1) result jointly from those in Ld and precipitation frequency. Therefore, improving the simulation of precipitation frequency and Rs helps to maximize the signal component corresponding to regional climate. In addition, the analysis of Ta observations helps represent regional warming in ERA-Interim and JRA-55. Incorporating vegetation dynamics in reanalyses and the use of accurate aerosol information, as in the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), would lead to improvements in the modelling of regional warming. The use of the ensemble technique adopted in the twentieth-century atmospheric model ensemble ERA-20CM significantly narrows the uncertainties associated with regional warming in reanalyses (standard deviation = 0.15 °C decade-1).

  1. Evaluation of ERA-Interim precipitation data in complex terrain

    NASA Astrophysics Data System (ADS)

    Gao, Lu; Bernhardt, Matthias; Schulz, Karsten

    2013-04-01

    Precipitation controls a large variety of environmental processes and is an essential input parameter for land surface models, e.g. in hydrology, ecology and climatology. However, the rain gauge networks that provide the necessary information are commonly sparse in complex terrain, especially in high mountainous regions. Reanalysis products (e.g. ERA-40 and NCEP-NCAR) have increasingly been applied as surrogate data in the past years. Although they keep improving, previous studies showed that these products should be objectively evaluated because of their various uncertainties. In this study, we evaluated the precipitation data from ERA-Interim, the latest reanalysis product developed by ECMWF. ERA-Interim daily total precipitation is compared with the high-resolution gridded observation dataset E-OBS on 0.25°×0.25° grids for the period 1979-2010 over the central Alps (45.5-48°N, 6.25-11.5°E). Wet and dry days are defined using different threshold values (0.5 mm, 1 mm, 5 mm, 10 mm and 20 mm). The correspondence ratio (CR), the ratio of days on which precipitation occurs in both the ERA-Interim and E-OBS datasets, is applied for the frequency comparison. The results show that ERA-Interim captures precipitation occurrence very well, with CR ranging from 0.80 to 0.97 for the 0.5 mm to 20 mm thresholds. However, the intensity bias increases with rising thresholds. The mean absolute error (MAE) varies between 4.5 mm day-1 and 9.5 mm day-1 on wet days for the whole area. In terms of the mean annual cycle, ERA-Interim has almost the same standard deviation of the interannual variability of daily precipitation as E-OBS, 1.0 mm day-1. Significant wet biases occur in ERA-Interim throughout the warm season (May to August) and dry biases in the cold season (November to February). The spatial distribution of mean annual daily precipitation shows that ERA-Interim significantly underestimates precipitation intensity in the high mountains and on the northern flank of the Alpine chain from November to March, while it pronouncedly overestimates it on the southern flank of the Alps. The poor representation of topography and flow-related characteristics in the ERA-Interim model is probably responsible for the bias; in particular, the mountain blocking effect on moisture is only weakly captured. The comparison demonstrates that ERA-Interim precipitation intensity needs bias correction for further alpine climate studies, although it reasonably captures precipitation frequency. This critical evaluation not only diagnosed the data quality of ERA-Interim but also provided the evidence base for downscaling and bias correction of reanalysis products in complex terrain.
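
    A minimal sketch of the two comparison metrics used above, under one plausible reading of the correspondence ratio (days wet in both datasets relative to days wet in the observations); the arrays and threshold are placeholders:

```python
import numpy as np

def correspondence_ratio(p_rean, p_obs, threshold):
    """Fraction of observed wet days (>= threshold) on which the reanalysis
    is also wet."""
    obs_wet = p_obs >= threshold
    both_wet = obs_wet & (p_rean >= threshold)
    return both_wet.sum() / obs_wet.sum()

def wet_day_mae(p_rean, p_obs, threshold):
    """Mean absolute error of daily totals on observed wet days."""
    wet = p_obs >= threshold
    return np.mean(np.abs(p_rean[wet] - p_obs[wet]))
```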

  2. Batch mode grid generation: An endangered species

    NASA Technical Reports Server (NTRS)

    Schuster, David M.

    1992-01-01

    Non-interactive grid generation schemes should thrive as emphasis shifts from development of numerical analysis and design methods to application of these tools to real engineering problems. A strong case is presented for the continued development and application of non-interactive geometry modeling methods. Guidelines, strategies, and techniques for developing and implementing these tools are presented using current non-interactive grid generation methods as examples. These schemes play an important role in the development of multidisciplinary analysis methods and some of these applications are also discussed.

  3. WE-EF-207-10: Striped Ratio Grids: A New Concept for Scatter Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, S

    2015-06-15

    Purpose: To propose a new method for estimating scatter in x-ray imaging. We propose the “striped ratio grid,” an anti-scatter grid with alternating stripes of high scatter rejection (attained, for example, by high grid ratio) and low scatter rejection. To minimize artifacts, stripes are oriented parallel to the direction of the ramp filter. Signal discontinuities at the boundaries between stripes provide information on local scatter content, although these discontinuities are contaminated by variation in primary radiation. Methods: We emulated a striped ratio grid by imaging phantoms with two sequential CT scans, one with and one without a conventional grid, and processed them together to mimic a striped ratio grid. Two phantoms were scanned with the emulated striped ratio grid and compared with a conventional anti-scatter grid and a fan-beam acquisition, which served as ground truth. A nonlinear image processing algorithm was developed to mitigate the problem of primary variation. Results: The emulated striped ratio grid reduced scatter more effectively than the conventional grid alone. Contrast is thereby improved in projection imaging. In CT imaging, cupping is markedly reduced. Artifacts introduced by the striped ratio grid appear to be minimal. Conclusion: Striped ratio grids could be a simple and effective evolution of conventional anti-scatter grids. Unlike several other approaches currently under investigation for scatter management, striped ratio grids require minimal computation, little new hardware (at least for systems which already use removable grids) and impose few assumptions on the nature of the object being scanned.
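
    A minimal conceptual sketch of how the signal step at a stripe boundary could yield a local scatter estimate, assuming the primary signal is locally equal on both sides of the boundary and the two stripes have known scatter transmission fractions (all names and numbers here are illustrative assumptions, not taken from the abstract):

```python
def scatter_from_stripe_step(signal_low, signal_high, t_scatter_low, t_scatter_high):
    """Estimate the scatter fluence incident on the grid near a stripe boundary.

    signal_low / signal_high: measured signals just inside the low- and
    high-rejection stripes (primary contribution assumed equal on both sides).
    t_scatter_low / t_scatter_high: scatter transmission of the two stripes.
    """
    # signal = primary + t_scatter * scatter on each side; subtracting cancels primary.
    return (signal_low - signal_high) / (t_scatter_low - t_scatter_high)

# Illustrative numbers: 40% vs 10% scatter transmission, a 30-unit signal step.
print(scatter_from_stripe_step(130.0, 100.0, 0.40, 0.10))  # -> 100 units of scatter
```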

  4. Inert gas thrusters

    NASA Technical Reports Server (NTRS)

    Kaufman, H. R.; Robinson, R. S.

    1980-01-01

    Some advances in component technology for inert gas thrusters are described. The maximum electron emission of a hollow cathode with Ar was increased 60-70% by the use of an enclosed keeper configuration. Operation with Ar, but without emissive oxide, was also obtained. A 30 cm thruster operated with Ar at moderate discharge voltages gave double-ion measurements consistent with a double-ion correlation developed previously using 15 cm thruster data. An attempt was made to reduce discharge losses by biasing anodes positive of the discharge plasma. The reason this attempt was unsuccessful is not yet clear. The performance of a single-grid ion-optics configuration was evaluated. The ion impingement on the single-grid accelerator was found to approach the value expected from the projected blockage when the sheath thickness next to the accelerator was 2-3 times the aperture diameter.

  5. Inert-gas thruster technology

    NASA Technical Reports Server (NTRS)

    Kaufman, H. R.; Robinson, R. S.; Trock, D. C.

    1981-01-01

    Attention is given to recent advances in component technology for inert-gas thrusters. It is noted that the maximum electron emission of a hollow cathode with Ar can be increased 60-70% by using an enclosed keeper configuration. Operation with Ar but without emissive oxide has also been attained. A 30-cm thruster operated with Ar at moderate discharge voltages is found to give double-ion measurements consistent with a double-ion correlation developed earlier on the basis of 15-cm thruster data. An attempt is made to reduce discharge losses by biasing anodes positive of the discharge plasma. The performance of a single-grid ion-optics configuration is assessed. The ion impingement on the single-grid accelerator is found to approach the value expected from the projected blockage when the sheath thickness next to the accelerator is 2-3 times the aperture diameter.

  6. Constrained CVT meshes and a comparison of triangular mesh generators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Hoa; Burkardt, John; Gunzburger, Max

    2009-01-01

    Mesh generation in regions in Euclidean space is a central task in computational science, and especially for commonly used numerical methods for the solution of partial differential equations, e.g., finite element and finite volume methods. We focus on the uniform Delaunay triangulation of planar regions and, in particular, on how one selects the positions of the vertices of the triangulation. We discuss a recently developed method, based on the centroidal Voronoi tessellation (CVT) concept, for effecting such triangulations and present two algorithms, including one new one, for CVT-based grid generation. We also compare several methods, including CVT-based methods, for triangulating planar domains. To this end, we define several quantitative measures of the quality of uniform grids. We then generate triangulations of several planar regions, including some having complexities that are representative of what one may encounter in practice. We subject the resulting grids to visual and quantitative comparisons and conclude that all the methods considered produce high-quality uniform grids and that the CVT-based grids are at least as good as any of the others.
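
    A minimal sketch of Lloyd's algorithm, the standard iteration for computing a centroidal Voronoi tessellation, here with centroids approximated by dense random sampling instead of exact integration (my own simplification):

```python
import numpy as np

def lloyd_cvt(generators, n_iters=50, n_samples=100_000, seed=0):
    """Move each generator to the centroid of its Voronoi region in the unit
    square, estimated by Monte Carlo sampling (probabilistic Lloyd iteration)."""
    rng = np.random.default_rng(seed)
    g = np.array(generators, dtype=float)
    for _ in range(n_iters):
        samples = rng.uniform(0.0, 1.0, size=(n_samples, 2))
        # Assign each sample to its nearest generator.
        d2 = ((samples[:, None, :] - g[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)
        for k in range(len(g)):
            pts = samples[nearest == k]
            if len(pts):
                g[k] = pts.mean(axis=0)     # centroid update
    return g

# Usage: 16 random generators relax toward a uniform CVT of the unit square.
cvt_points = lloyd_cvt(np.random.default_rng(1).uniform(size=(16, 2)))
```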

  7. Semi-implicit integration factor methods on sparse grids for high-dimensional systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongyong; Chen, Weitao; Nie, Qing

    2015-07-01

    Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.
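
    A minimal sketch of a first-order implicit integration factor (IIF) step for u_t = D u_xx + f(u), illustrating the structure described above: the diffusion is propagated exactly (here with a dense matrix exponential, adequate for a small 1-D demo), while the stiff reaction is handled implicitly and locally per grid point. The sparse-grid machinery of the paper is not reproduced here:

```python
import numpy as np
from scipy.linalg import expm

def iif1_step(u, A_exp, f, dt, fixed_point_iters=10):
    """One step of the first-order IIF scheme for u_t = A u + f(u): the linear
    part is advanced exactly via exp(A dt), the reaction implicitly."""
    rhs = A_exp @ u                        # exact propagation of the linear part
    v = u.copy()
    for _ in range(fixed_point_iters):     # fixed-point iteration for v = rhs + dt*f(v)
        v = rhs + dt * f(v)
    return v

# Usage sketch: 1-D diffusion + logistic reaction on a periodic grid.
n, dt, diff = 64, 0.05, 0.1
h = 1.0 / n
lap = (np.roll(np.eye(n), 1, axis=0) - 2 * np.eye(n)
       + np.roll(np.eye(n), -1, axis=0)) / h**2
A_exp = expm(dt * diff * lap)              # precomputed once for a fixed dt
u = 0.5 + 0.1 * np.sin(2 * np.pi * np.arange(n) * h)
for _ in range(100):
    u = iif1_step(u, A_exp, lambda w: w * (1.0 - w), dt)
```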

  8. Enhancement of Efficiency and Reduction of Grid Thickness Variation on Casting Process with Lean Six Sigma Method

    NASA Astrophysics Data System (ADS)

    Witantyo; Setyawan, David

    2018-03-01

    In the lead-acid battery industry, grid casting is a process with a high level of defects and thickness variation. The DMAIC (Define-Measure-Analyse-Improve-Control) method and its tools are used here to improve the casting process. In the Define stage, a project charter and the SIPOC (Supplier Input Process Output Customer) method are used to map the existing problem. In the Measure stage, data are collected on the types and numbers of defects and on the grid thickness variation that occurs; the collected data are then processed and analyzed using the 5 Whys and FMEA methods. In the Analyze stage, grids exhibiting brittle and crack-type defects are examined under a microscope, which shows the amount of lead oxide inclusions in the grid. The analysis of the grid casting process reveals an excessively large temperature difference between the molten metal and the mold, as well as a corking process without a defined standard. In the Improve stage, corrective actions are implemented, which reduce the grid thickness variation and lower the defect-per-unit level from 9.184% to 0.492%. In the Control stage, a new working standard is established and the improved process is monitored.

  9. On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids

    NASA Astrophysics Data System (ADS)

    Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.

    2017-03-01

    The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001) and this may, eventually, result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational costs, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grid, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need of unphysical marker control in geodynamic models.
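
    A minimal sketch of the standard (non-conservative) step the abstract starts from: bilinear interpolation of staggered-grid velocities to a marker position in 2-D. The correction interpolation discussed in the paper adds divergence-consistent terms on top of this; the staggering convention and names below are my own illustrative choices:

```python
import numpy as np

def bilinear(field, x, y, x0, y0, dx, dy):
    """Bilinear interpolation of a 2-D node-based field whose node (0, 0)
    sits at physical position (x0, y0); field is indexed as field[j, i]."""
    fi = (x - x0) / dx
    fj = (y - y0) / dy
    i, j = int(np.floor(fi)), int(np.floor(fj))
    tx, ty = fi - i, fj - j
    return ((1 - tx) * (1 - ty) * field[j, i] + tx * (1 - ty) * field[j, i + 1]
            + (1 - tx) * ty * field[j + 1, i] + tx * ty * field[j + 1, i + 1])

def velocity_at_marker(vx, vy, x, y, dx, dy):
    """Interpolate staggered velocities to a marker: vx lives on vertical cell
    faces (offset dy/2 in y), vy on horizontal faces (offset dx/2 in x)."""
    u = bilinear(vx, x, y, 0.0, 0.5 * dy, dx, dy)
    v = bilinear(vy, x, y, 0.5 * dx, 0.0, dx, dy)
    return u, v
```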

  10. The influence of model resolution on ozone in industrial volatile organic compound plumes.

    PubMed

    Henderson, Barron H; Jeffries, Harvey E; Kim, Byeong-Uk; Vizuete, William G

    2010-09-01

    Regions with concentrated petrochemical industrial activity (e.g., Houston or Baton Rouge) frequently experience large, localized releases of volatile organic compounds (VOCs). Aircraft measurements suggest these released VOCs create plumes with ozone (O3) production rates 2-5 times higher than typical urban conditions. Modeling studies found that simulating high O3 productions requires superfine (1-km) horizontal grid cell size. Compared with fine modeling (4-km), the superfine resolution increases the peak O3 concentration by as much as 46%. To understand this drastic O3 change, this study quantifies model processes for O3 and "odd oxygen" (Ox) in both resolutions. For the entire plume, the superfine resolution increases the maximum O3 concentration 3% but only decreases the maximum Ox concentration 0.2%. The two grid sizes produce approximately equal Ox mass but by different reaction pathways. Derived sensitivity to oxides of nitrogen (NOx) and VOC emissions suggests resolution-specific sensitivity to NOx and VOC emissions. Different sensitivity to emissions will result in different O3 responses to subsequently encountered emissions (within the city or downwind). Sensitivity of O3 to emission changes also results in different simulated O3 responses to the same control strategies. Sensitivity of O3 to NOx and VOC emission changes is attributed to finer resolved Eulerian grid and finer resolved NOx emissions. Urban NOx concentration gradients are often caused by roadway mobile sources that would not typically be addressed with Plume-in-Grid models. This study shows that grid cell size (an artifact of modeling) influences simulated control strategies and could bias regulatory decisions. Understanding the dynamics of VOC plume dependence on grid size is the first step toward providing more detailed guidance for resolution. These results underscore VOC and NOx resolution interdependencies best addressed by finer resolution. On the basis of these results, the authors suggest a need for quantitative metrics for horizontal grid resolution in future model guidance.

  11. Microchannel cross load array with dense parallel input

    DOEpatents

    Swierkowski, Stefan P.

    2004-04-06

    An architecture or layout for microchannel arrays using T or Cross (+) loading for electrophoresis or other injection and separation chemistry that are performed in microfluidic configurations. This architecture enables a very dense layout of arrays of functionally identical shaped channels and it also solves the problem of simultaneously enabling efficient parallel shapes and biasing of the input wells, waste wells, and bias wells at the input end of the separation columns. One T load architecture uses circular holes with common rows, but not columns, which allows the flow paths for each channel to be identical in shape, using multiple mirror image pieces. Another T load architecture enables the access hole array to be formed on a biaxial, collinear grid suitable for EDM micromachining (square holes), with common rows and columns.

  12. AIRS-Observed Interrelationships of Anomaly Time-Series of Moist Process-Related Parameters and Inferred Feedback Values on Various Spatial Scales

    NASA Technical Reports Server (NTRS)

    Molnar, Gyula I.; Susskind, Joel; Iredell, Lena

    2011-01-01

    Initially, a good measure of a GCM's performance was its ability to simulate the observed mean seasonal cycle; that is, a reasonable simulation of the means (i.e., small biases) and standard deviations of TODAY'S climate would suffice. Here, we argue that coupled GCM (CGCM for short) simulations of FUTURE climates should be evaluated in much more detail, both spatially and temporally. Arguably, it is not the bias, but rather the reliability of the model-generated anomaly time series, even down to the [C]GCM grid scale, that really matters. This statement is underlined by the societal need to address potential REGIONAL climate variability and climate drifts/changes in a manner suitable for policy decisions.

  13. Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes

    NASA Astrophysics Data System (ADS)

    Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.

    2017-12-01

    We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
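
    To make the index-gauge idea concrete, here is a minimal Python sketch of the aggregation and disaggregation arithmetic. The array shape (years × rows × columns of seasonal extremes), the use of a simple spatial mean as the index gauge, and the time-averaged ratios are illustrative assumptions of this sketch, not the Bayesian non-stationary model of the abstract.

    ```python
    import numpy as np

    def index_gauge_and_ratios(grid_extremes):
        # grid_extremes: array (n_years, ny, nx) of seasonal precipitation
        # extremes per grid cell. Returns the yearly index-gauge series
        # (spatial mean over the watershed) and per-cell disaggregation ratios.
        index_gauge = grid_extremes.mean(axis=(1, 2))        # one value per year
        ratios = grid_extremes / index_gauge[:, None, None]  # cell / index gauge
        return index_gauge, ratios.mean(axis=0)              # time-averaged ratios

    def gridded_return_level(index_return_level, mean_ratios):
        # Translate an index-gauge return level into a gridded map by scaling
        # with the disaggregation ratios.
        return index_return_level * mean_ratios
    ```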

  14. Domain decomposition by the advancing-partition method for parallel unstructured grid generation

    NASA Technical Reports Server (NTRS)

    Banihashemi, legal representative, Soheila (Inventor); Pirzadeh, Shahyar Z. (Inventor)

    2012-01-01

    In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which a front comprises only faces of new cells which intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local/global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local-global mapping.

  15. A Solution Adaptive Structured/Unstructured Overset Grid Flow Solver with Applications to Helicopter Rotor Flows

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.

    1995-01-01

    This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.

  16. Unstructured Cartesian/prismatic grid generation for complex geometries

    NASA Technical Reports Server (NTRS)

    Karman, Steve L., Jr.

    1995-01-01

    The generation of a hybrid grid system for discretizing complex three dimensional (3D) geometries is described. The primary grid system is an unstructured Cartesian grid automatically generated using recursive cell subdivision. This grid system is sufficient for computing Euler solutions about extremely complex 3D geometries. A secondary grid system, using triangular-prismatic elements, may be added for resolving the boundary layer region of viscous flows near surfaces of solid bodies. This paper describes the grid generation processes used to generate each grid type. Several example grids are shown, demonstrating the ability of the method to discretize complex geometries, with very little pre-processing required by the user.
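
    The recursive cell-subdivision idea can be illustrated with a short Python sketch; for brevity it works in 2-D (a quadtree rather than an octree), and the cell representation and the `intersects_geometry` predicate are assumptions of this sketch rather than details of the code described in the abstract.

    ```python
    import math

    def subdivide(cell, intersects_geometry, max_level, level=0):
        # Recursively split a Cartesian cell (xmin, ymin, xmax, ymax) that
        # intersects the body geometry until max_level is reached; return leaves.
        xmin, ymin, xmax, ymax = cell
        if level == max_level or not intersects_geometry(cell):
            return [cell]
        xm, ym = 0.5 * (xmin + xmax), 0.5 * (ymin + ymax)
        children = [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
                    (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]
        leaves = []
        for child in children:
            leaves += subdivide(child, intersects_geometry, max_level, level + 1)
        return leaves

    # Example: refine around the boundary of a unit-radius circle.
    def crosses_circle(cell):
        xmin, ymin, xmax, ymax = cell
        corners = [(x, y) for x in (xmin, xmax) for y in (ymin, ymax)]
        inside = [math.hypot(x, y) < 1.0 for x, y in corners]
        return any(inside) and not all(inside)

    cells = subdivide((-2.0, -2.0, 2.0, 2.0), crosses_circle, max_level=5)
    ```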

  17. Algorithms for the automatic generation of 2-D structured multi-block grids

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.

    1995-01-01

    Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.

  18. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  19. Occupancy change detection system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2009-09-01

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes instructions for producing an occupancy grid map of an environment around the robot, scanning the environment to generate a current obstacle map relative to a current robot position, and converting the current obstacle map to a current occupancy grid map. The instructions also include processing each grid cell in the occupancy grid map. Within the processing of each grid cell, the instructions include comparing each grid cell in the occupancy grid map to a corresponding grid cell in the current occupancy grid map. For grid cells with a difference, the instructions include defining a change vector for each changed grid cell, wherein the change vector includes a direction from the robot to the changed grid cell and a range from the robot to the changed grid cell.
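
    A minimal Python sketch of the comparison and change-vector step described above follows; the grid representation (NumPy arrays of cell states), the robot's cell index, and the metric cell size are assumptions of the sketch.

    ```python
    import numpy as np

    def change_vectors(stored_grid, current_grid, robot_cell, cell_size):
        # Compare a stored occupancy grid with the current occupancy grid and,
        # for every cell whose state changed, build a change vector holding the
        # direction (radians) and range (metres) from the robot to that cell.
        changed = np.argwhere(stored_grid != current_grid)
        vectors = []
        for row, col in changed:
            dy = (row - robot_cell[0]) * cell_size
            dx = (col - robot_cell[1]) * cell_size
            vectors.append({"cell": (int(row), int(col)),
                            "direction": float(np.arctan2(dy, dx)),
                            "range": float(np.hypot(dx, dy))})
        return vectors
    ```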

  20. Action Research to Improve Methods of Delivery and Feedback in an Access Grid Room Environment

    ERIC Educational Resources Information Center

    McArthur, Lynne C.; Klass, Lara; Eberhard, Andrew; Stacey, Andrew

    2011-01-01

    This article describes a qualitative study which was undertaken to improve the delivery methods and feedback opportunity in honours mathematics lectures which are delivered through Access Grid Rooms. Access Grid Rooms are facilities that provide two-way video and audio interactivity across multiple sites, with the inclusion of smart boards. The…

  1. Common method biases in behavioral research: a critical review of the literature and recommended remedies.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Lee, Jeong-Yeon; Podsakoff, Nathan P

    2003-10-01

    Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.

  2. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    NASA Technical Reports Server (NTRS)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
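
    The binned-pdf integration can be sketched in a few lines of Python; the bin count, the wetness range of 0-1, and the illustrative non-linear flux function are assumptions of this sketch, not the parameterization used in the paper.

    ```python
    import numpy as np

    def grid_average_flux(wetness_samples, flux_of_wetness, n_bins=10):
        # Bin the sub-grid soil-wetness values into a discrete pdf, then
        # integrate the non-linear flux function over the bins instead of
        # evaluating it once at the grid-mean wetness.
        counts, edges = np.histogram(wetness_samples, bins=n_bins, range=(0.0, 1.0))
        probs = counts / counts.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        return float(np.sum(probs * flux_of_wetness(centers)))

    # Illustrative (made-up) threshold-quadratic stress function:
    wetness = np.random.beta(2.0, 5.0, size=10000)   # sub-grid wetness samples
    et = grid_average_flux(wetness, lambda w: np.clip(w - 0.2, 0.0, None) ** 2)
    ```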

  3. Ray tracing a three dimensional scene using a grid

    DOEpatents

    Wald, Ingo; Ize, Santiago; Parker, Steven G; Knoll, Aaron

    2013-02-26

    Ray tracing a three-dimensional scene using a grid. One example embodiment is a method for ray tracing a three-dimensional scene using a grid. In this example method, the three-dimensional scene is made up of objects that are spatially partitioned into a plurality of cells that make up the grid. The method includes a first act of computing a bounding frustum of a packet of rays, and a second act of traversing the grid slice by slice along a major traversal axis. Each slice traversal includes a first act of determining one or more cells in the slice that are overlapped by the frustum and a second act of testing the rays in the packet for intersection with any objects at least partially bounded by the one or more cells overlapped by the frustum.

  4. One-way coupling of an atmospheric and a hydrologic model in Colorado

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Pagowski, M.; Leavesley, G.H.; Gutowski, W.J.

    2006-01-01

    This paper examines the accuracy of high-resolution nested mesoscale model simulations of surface climate. The nesting capabilities of the atmospheric fifth-generation Pennsylvania State University (PSU)-National Center for Atmospheric Research (NCAR) Mesoscale Model (MM5) were used to create high-resolution, 5-yr climate simulations (from 1 October 1994 through 30 September 1999), starting with a coarse nest of 20 km for the western United States. During this 5-yr period, two finer-resolution nests (5 and 1.7 km) were run over the Yampa River basin in northwestern Colorado. Raw and bias-corrected daily precipitation and maximum and minimum temperature time series from the three MM5 nests were used as input to the U.S. Geological Survey's distributed hydrologic model [the Precipitation Runoff Modeling System (PRMS)] and were compared with PRMS results using measured climate station data. The distributed capabilities of PRMS were provided by partitioning the Yampa River basin into hydrologic response units (HRUs). In addition to the classic polygon method of HRU definition, HRUs for PRMS were defined based on the three MM5 nests. This resulted in 16 datasets being tested using PRMS. The input datasets were derived using measured station data and raw and bias-corrected MM5 20-, 5-, and 1.7-km output distributed to 1) polygon HRUs and 2) 20-, 5-, and 1.7-km-gridded HRUs, respectively. Each dataset was calibrated independently, using a multiobjective, stepwise automated procedure. Final results showed a general increase in the accuracy of simulated runoff with an increase in HRU resolution. In all steps of the calibration procedure, the station-based simulations of runoff showed higher accuracy than the MM5-based simulations, although the accuracy of MM5 simulations was close to station data for the high-resolution nests. Further work is warranted in identifying the causes of the biases in MM5 local climate simulations and developing methods to remove them. © 2006 American Meteorological Society.

  5. Retrievals of Ice Cloud Microphysical Properties of Deep Convective Systems using Radar Measurements

    NASA Astrophysics Data System (ADS)

    Tian, J.; Dong, X.; Xi, B.; Wang, J.; Homeyer, C. R.

    2015-12-01

    This study presents innovative algorithms for retrieving ice cloud microphysical properties of Deep Convective Systems (DCSs) using Next-Generation Radar (NEXRAD) reflectivity and newly derived empirical relationships from aircraft in situ measurements in Wang et al. (2015) during the Midlatitude Continental Convective Clouds Experiment (MC3E). With composite gridded NEXRAD radar reflectivity, four-dimensional (space-time) ice cloud microphysical properties of DCSs are retrieved, which is not possible from either in situ sampling at a single altitude or from vertical pointing radar measurements. For this study, aircraft in situ measurements provide the best-estimated ice cloud microphysical properties for validating the radar retrievals. Two statistical comparisons between retrieved and aircraft in situ measured ice microphysical properties are conducted from six selected cases during MC3E. For the temporal-averaged method, the averaged ice water content (IWC) and median mass diameter (Dm) from aircraft in situ measurements are 0.50 g m-3 and 1.51 mm, while the retrievals from radar reflectivity have negative biases of 0.12 g m-3 (24%) and 0.02 mm (1.3%) with correlations of 0.71 and 0.48, respectively. For the spatial-averaged method, the IWC retrievals are closer to the aircraft results (0.51 vs. 0.47 g m-3) with a positive bias of 8.5%, whereas the Dm retrievals are larger than the aircraft results (1.65 mm vs. 1.51 mm) with a positive bias of 9.3%. The retrieved IWCs decrease from ~0.6 g m-3 at 5 km to ~0.15 g m-3 at 13 km, and Dm values decrease from ~2 mm to ~0.7 mm at the same levels. In general, the aircraft in situ measured IWC and Dm values at each level are within one standard deviation of the retrieved properties. Good agreement between microphysical properties measured from aircraft and retrieved from radar reflectivity measurements indicates the reasonable accuracy of our retrievals.

  6. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.

  7. The effects of sampling bias and model complexity on the predictive performance of MaxEnt species distribution models.

    PubMed

    Syfert, Mindy M; Smith, Matthew J; Coomes, David A

    2013-01-01

    Species distribution models (SDMs) trained on presence-only data are frequently used in ecological research and conservation planning. However, users of SDM software are faced with a variety of options, and it is not always obvious how selecting one option over another will affect model performance. Working with MaxEnt software and with tree fern presence data from New Zealand, we assessed whether (a) choosing to correct for geographical sampling bias and (b) using complex environmental response curves have strong effects on goodness of fit. SDMs were trained on tree fern data, obtained from an online biodiversity data portal, with two sources that differed in size and geographical sampling bias: a small, widely-distributed set of herbarium specimens and a large, spatially clustered set of ecological survey records. We attempted to correct for geographical sampling bias by incorporating sampling bias grids in the SDMs, created from all georeferenced vascular plants in the datasets, and explored model complexity issues by fitting a wide variety of environmental response curves (known as "feature types" in MaxEnt). In each case, goodness of fit was assessed by comparing predicted range maps with tree fern presences and absences using an independent national dataset to validate the SDMs. We found that correcting for geographical sampling bias led to major improvements in goodness of fit, but did not entirely resolve the problem: predictions made with clustered ecological data were inferior to those made with the herbarium dataset, even after sampling bias correction. We also found that the choice of feature type had negligible effects on predictive performance, indicating that simple feature types may be sufficient once sampling bias is accounted for. Our study emphasizes the importance of reducing geographical sampling bias, where possible, in datasets used to train SDMs, and the effectiveness and necessity of sampling bias correction within MaxEnt.

  8. A grid-embedding transonic flow analysis computer program for wing/nacelle configurations

    NASA Technical Reports Server (NTRS)

    Atta, E. H.; Vadyak, J.

    1983-01-01

    An efficient grid-interfacing zonal algorithm was developed for computing the three-dimensional transonic flow field about wing/nacelle configurations. The algorithm uses the full-potential formulation and the AF2 approximate factorization scheme. The flow field solution is computed using a component-adaptive grid approach in which separate grids are employed for the individual components in the multi-component configuration, where each component grid is optimized for a particular geometry such as the wing or nacelle. The wing and nacelle component grids are allowed to overlap, and flow field information is transmitted from one grid to another through the overlap region using trivariate interpolation. This report presents a discussion of the computational methods used to generate both the wing and nacelle component grids, the technique used to interface the component grids, and the method used to obtain the inviscid flow solution. Computed results and correlations with experiment are presented. Also presented are discussions of the organization of the wing grid generation (GRGEN3) and nacelle grid generation (NGRIDA) computer programs, the grid interface (LK) computer program, and the wing/nacelle flow solution (TWN) computer program. Descriptions of the respective subroutines, definitions of the required input parameters, a discussion on interpretation of the output, and sample cases illustrating application of the analysis are provided for each of the four computer programs.

  9. Rupture Dynamics Simulation for Non-Planar fault by a Curved Grid Finite Difference Method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, G.; Chen, X.

    2011-12-01

    We implement a non-staggered finite difference method with split nodes to solve the dynamic rupture problem on non-planar faults. The split-node method is widely used in dynamic rupture simulations because it represents the fault plane more precisely than alternatives such as the thick-fault or stress-glut approaches. The finite difference method is also a popular numerical method for solving kinematic and dynamic problems in seismology. Previous work, however, has focused mostly on staggered-grid methods because of their simplicity and computational efficiency, even though they are less suited than non-staggered methods to describing boundary conditions, especially irregular boundaries or non-planar faults. Zhang and Chen (2006) proposed a high-order MacCormack non-staggered finite difference method based on curved grids to precisely handle irregular boundaries. Building on this non-staggered grid method, we successfully simulate the spontaneous rupture process. The fault plane is a boundary condition which may, of course, be irregular, so the method should be capable of simulating rupture on arbitrarily bending fault planes. We first validate the method in Cartesian coordinates; for bending faults, curvilinear grids are used.

  10. Globally-Gridded Interpolated Night-Time Marine Air Temperatures 1900-2014

    NASA Astrophysics Data System (ADS)

    Junod, R.; Christy, J. R.

    2016-12-01

    Over the past century, climate records have pointed to an increase in global near-surface average temperature. Near-surface air temperature over the oceans is a relatively unused parameter in understanding the current state of climate, but is useful as an independent temperature metric over the oceans and serves as a geographical and physical complement to near-surface air temperature over land. Though versions of this dataset exist (i.e. HadMAT1 and HadNMAT2), it has been strongly recommended that various groups generate climate records independently. This University of Alabama in Huntsville (UAH) study began with the construction of monthly night-time marine air temperature (UAHNMAT) values from the early-twentieth century through to the present era. Data from the International Comprehensive Ocean and Atmosphere Data Set (ICOADS) were used to compile a time series of gridded UAHNMAT, (20S-70N). This time series was homogenized to correct for the many biases such as increasing ship height, solar deck heating, etc. The time series of UAHNMAT, once adjusted to a standard reference height, is gridded to 1.25° pentad grid boxes and interpolated using the kriging interpolation technique. This study will present results which quantify the variability and trends and compare to current trends of other related datasets that include HadNMAT2 and sea-surface temperatures (HadISST & ERSSTv4).

  11. Uncertain Representations of Sub-Grid Pollutant Transport in Chemistry-Transport Models and Impacts on Long-Range Transport and Global Composition

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Zhu, Z.; Ott, L. E.; Molod, A.; Duncan, B. N.; Nielsen, J. E.

    2009-01-01

    Sub-grid transport, by convection and turbulence, is known to play an important role in lofting pollutants from their source regions. Consequently, the long-range transport and climatology of simulated atmospheric composition are impacted. This study uses the Goddard Earth Observing System, Version 5 (GEOS-5) atmospheric model to study pollutant transport. The baseline model uses a Relaxed Arakawa-Schubert (RAS) scheme that represents convection through a sequence of linearly entraining cloud plumes characterized by unique detrainment levels. Thermodynamics, moisture and trace gases are transported in the same manner. Various approximate forms of trace-gas transport are implemented, in which the box-averaged cloud mass fluxes from RAS are used with different numerical approaches. Substantial impacts on forward-model simulations of CO (using a linearized chemistry) are evident. In particular, some aspects of simulations using a diffusive form of sub-grid transport bear more resemblance to space-based CO observations than do the baseline simulations with RAS transport. Implications for transport in the real atmosphere will be discussed. Another issue of importance is that many adjoint/inversion computations use simplified representations of sub-grid transport that may be inconsistent with the forward models: implications will be discussed. Finally, simulations using a complex chemistry model in GEOS-5 (in place of the linearized CO model) are underway: noteworthy results from this simulation will be mentioned.

  12. Implement a Sub-grid Turbulent Orographic Form Drag in WRF and its application to Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Yang, K.; Wang, Y.; Huang, B.

    2017-12-01

    Sub-grid-scale orographic variation exerts turbulent form drag on atmospheric flows. The Weather Research and Forecasting model (WRF) includes a turbulent orographic form drag (TOFD) scheme that adds the stress to the surface layer. In this study, another TOFD scheme, which exerts an exponentially decaying drag on each model layer, has been incorporated into WRF3.7. To investigate the effect of the new scheme, WRF with the old and the new scheme was used to simulate the climate over the complex terrain of the Tibetan Plateau. The two schemes were evaluated in terms of the direct impact (on wind) and the indirect impact (on air temperature, surface pressure and precipitation). In both winter and summer, the new TOFD scheme reduces the mean bias in the surface wind and clearly reduces the root mean square errors (RMSEs) in comparison with station measurements (Figure 1). Meanwhile, the 2-m air temperature and surface pressure are also improved (Figure 2), owing to stronger northward transport of warm air across the southern boundary of the Tibetan Plateau in winter. The 2-m air temperature is hardly improved in summer, but the improvement in precipitation is more obvious, with reduced mean bias and RMSEs. This is due to the weakening of the low-level water vapor flux crossing the Himalayan Mountains from South Asia with the new scheme.

  13. Parallel Cartesian grid refinement for 3D complex flow simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2013-11-01

    A second order accurate method for discretizing the Navier-Stokes equations on 3D unstructured Cartesian grids is presented. Although the grid generator is based on the oct-tree hierarchical method, a fully unstructured data-structure is adopted, enabling robust calculations for incompressible flows and avoiding both the need for synchronization of the solution between different levels of refinement and the use of prolongation/restriction operators. The current solver implements a hybrid staggered/non-staggered grid layout, employing the implicit fractional step method to satisfy the continuity equation. The pressure-Poisson equation is discretized by using a novel second order fully implicit scheme for unstructured Cartesian grids and solved using an efficient Krylov subspace solver. The momentum equation is also discretized with second order accuracy and the high performance Newton-Krylov method is used for integrating it in time. Neumann and Dirichlet conditions are used to validate the Poisson solver against analytical functions, and grid refinement results in a significant reduction of the solution error. The effectiveness of the fractional step method results in the stability of the overall algorithm and enables accurate multi-resolution simulations of real-life flows. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482.

  14. Regional models of the gravity field from terrestrial gravity data of heterogeneous quality and density

    NASA Astrophysics Data System (ADS)

    Talvik, Silja; Oja, Tõnis; Ellmann, Artu; Jürgenson, Harli

    2014-05-01

    Gravity field models on a regional scale are needed for a number of applications, for example national geoid computation, processing of precise levelling data and geological modelling. Thus the methods applied for modelling the gravity field from surveyed gravimetric information need to be considered carefully. The influence of using different gridding methods, the inclusion of unit or realistic weights, and indirect gridding of free air anomalies (FAA) are investigated in this study. Known gridding methods such as kriging (KRIG), least squares collocation (LSCO), continuous curvature (CCUR) and optimal Delaunay triangulation (ODET) are used for production of gridded gravity field surfaces. As the quality of the data collected varies considerably depending on the methods and instruments available or used in surveying, it is important to weight the input data. This puts additional demands on data maintenance, as accuracy information needs to be available for each data point participating in the modelling; this is complicated by older gravity datasets, where the uncertainties of not only the gravity values but also supplementary information, such as survey point position, are not always known very accurately. A number of gravity field applications (e.g. geoid computation) demand an FAA model, the acquisition of which is also investigated. Instead of direct gridding, it could be more appropriate to proceed with indirect FAA modelling using a Bouguer anomaly grid to reduce the effect of topography on the resulting FAA model (e.g. near terraced landforms). The inclusion of different gridding methods, weights and indirect FAA modelling helps to improve gravity field modelling methods. It becomes possible to estimate the impact of varying methodological approaches on the gravity field modelling as statistical output is compared. Such knowledge helps assess the accuracy of gravity field models and their effect on the aforementioned applications.

  15. Online Optimization Method for Operation of Generators in a Micro Grid

    NASA Astrophysics Data System (ADS)

    Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi

    Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and storage batteries, is expected to become one of the new operational frameworks for distributed generation. However, since precipitous load fluctuations occur in a micro grid because of its small capacity compared with a conventional power system, high-accuracy load forecasting and control schemes to balance supply and demand are needed. In other words, the precision of micro grid operation must be improved by observing load fluctuations and correcting the start-stop schedule and output of the generators online. It is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after all unit commitment patterns of each generator satisfying the minimum up-time and minimum down-time constraints are enumerated, the optimal schedule and output of the generators are determined under the other operational constraints using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.

  16. The fundamentals of adaptive grid movement

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1990-01-01

    Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve-by-curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is discussed.

  17. Using Four Downscaling Techniques to Characterize Uncertainty in Updating Intensity-Duration-Frequency Curves Under Climate Change

    NASA Astrophysics Data System (ADS)

    Cook, L. M.; Samaras, C.; McGinnis, S. A.

    2017-12-01

    Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period used for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results of the 6-hour, 10-year return level adjusted with the simple change factor method using four climate model simulations of two different spatial resolutions show that uncertainty is highest in the estimation of the GEV parameters. The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
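
    As a concrete illustration of the simplest of the four approaches, method (1), here is a hedged Python sketch of a change-factor update of an observed return level; it uses plain empirical quantiles of annual maxima rather than fitted GEV distributions, and all variable names are assumptions of the sketch.

    ```python
    import numpy as np

    def change_factor_update(observed_return_level, rcm_hist_annual_max,
                             rcm_future_annual_max, return_period):
        # Scale an observed return level by the ratio of future to historical
        # model return levels, here estimated with simple empirical quantiles.
        p = 1.0 - 1.0 / return_period                  # non-exceedance probability
        hist_level = np.quantile(rcm_hist_annual_max, p)
        future_level = np.quantile(rcm_future_annual_max, p)
        return observed_return_level * future_level / hist_level
    ```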

  18. Mass Conservation of the Unified Continuous and Discontinuous Element-Based Galerkin Methods on Dynamically Adaptive Grids with Application to Atmospheric Simulations

    DTIC Science & Technology

    2015-09-01

    Kopera, Michal A.; Francis X...

    This report examines the mass conservation of unified continuous and discontinuous element-based Galerkin methods on dynamically adaptive grids, with application to atmospheric simulations. The authors regard mass conservation as an important feature for many atmospheric applications and a good metric because, for smooth solutions...

  19. The large discretization step method for time-dependent partial differential equations

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  20. Using a composite grid approach in a complex coastal domain to estimate estuarine residence time

    USGS Publications Warehouse

    Warner, John C.; Geyer, W. Rockwell; Arango, Herman G.

    2010-01-01

    We investigate the processes that influence residence time in a partially mixed estuary using a three-dimensional circulation model. The complex geometry of the study region is not optimal for a structured grid model and so we developed a new method of grid connectivity. This involves a novel approach that allows an unlimited number of individual grids to be combined in an efficient manner to produce a composite grid. We then implemented this new method into the numerical Regional Ocean Modeling System (ROMS) and developed a composite grid of the Hudson River estuary region to investigate the residence time of a passive tracer. Results show that the residence time is a strong function of the time of release (spring vs. neap tide), the along-channel location, and the initial vertical placement. During neap tides there is a maximum in residence time near the bottom of the estuary at the mid-salt intrusion length. During spring tides the residence time is primarily a function of along-channel location and does not exhibit a strong vertical variability. This model study of residence time illustrates the utility of the grid connectivity method for circulation and dispersion studies in regions of complex geometry.

  1. On Bi-Grid Local Mode Analysis of Solution Techniques for 3-D Euler and Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Ibraheem, S. O.; Demuren, A. O.

    1994-01-01

    A procedure is presented for utilizing a bi-grid stability analysis as a practical tool for predicting multigrid performance in a range of numerical methods for solving Euler and Navier-Stokes equations. Model problems based on the convection, diffusion and Burger's equation are used to illustrate the superiority of the bi-grid analysis as a predictive tool for multigrid performance in comparison to the smoothing factor derived from conventional von Neumann analysis. For the Euler equations, bi-grid analysis is presented for three upwind difference based factorizations, namely Spatial, Eigenvalue and Combination splits, and two central difference based factorizations, namely LU and ADI methods. In the former, both the Steger-Warming and van Leer flux-vector splitting methods are considered. For the Navier-Stokes equations, only the Beam-Warming (ADI) central difference scheme is considered. In each case, estimates of multigrid convergence rates from the bi-grid analysis are compared to smoothing factors obtained from single-grid stability analysis. Effects of grid aspect ratio and flow skewness are examined. Both predictions are compared with practical multigrid convergence rates for 2-D Euler and Navier-Stokes solutions based on the Beam-Warming central scheme.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, J.C.

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden et al. The term `hp` refers to the method of spatial refinement (h), in conjunction with the order of the polynomials used as part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  3. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne

    2011-11-01

    We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for subsequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
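
    Two of the building blocks mentioned above, full-weighting restriction and bilinear prolongation, are sketched below in Python for a 2-D field. The grid-size convention (a fine grid of size 2N-1 aligned with a coarse grid of size N) and the restriction to interior points only are assumptions of this sketch, not the authors' implementation.

    ```python
    import numpy as np

    def full_weighting_restrict(fine):
        # Standard 2-D full-weighting restriction onto the interior coarse
        # points; coarse boundary values are usually taken by direct injection
        # and are omitted here for brevity.
        return (0.25 * fine[2:-1:2, 2:-1:2]
                + 0.125 * (fine[1:-2:2, 2:-1:2] + fine[3::2, 2:-1:2]
                           + fine[2:-1:2, 1:-2:2] + fine[2:-1:2, 3::2])
                + 0.0625 * (fine[1:-2:2, 1:-2:2] + fine[1:-2:2, 3::2]
                            + fine[3::2, 1:-2:2] + fine[3::2, 3::2]))

    def bilinear_prolong(coarse, fine_shape):
        # Bilinear interpolation of a coarse-grid solution back onto the fine
        # grid, as used after the coarsened Poisson solve in the CGP approach.
        ny, nx = fine_shape
        yc = np.linspace(0.0, 1.0, coarse.shape[0])
        xc = np.linspace(0.0, 1.0, coarse.shape[1])
        yf = np.linspace(0.0, 1.0, ny)
        xf = np.linspace(0.0, 1.0, nx)
        tmp = np.array([np.interp(xf, xc, row) for row in coarse])            # along x
        return np.array([np.interp(yf, yc, tmp[:, j]) for j in range(nx)]).T  # along y
    ```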

  4. A simplified analysis of the multigrid V-cycle as a fast elliptic solver

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Taasan, Shlomo

    1988-01-01

    For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.

  5. In Search of Grid Converged Solutions

    NASA Technical Reports Server (NTRS)

    Lockard, David P.

    2010-01-01

    Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.

  6. Grid-based precision aim system and method for disrupting suspect objects

    DOEpatents

    Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.

    2014-06-10

    A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.

  7. Numerical Nuclear Second Derivatives on a Computing Grid: Enabling and Accelerating Frequency Calculations on Complex Molecular Systems.

    PubMed

    Yang, Tzuhsiung; Berry, John F

    2018-06-04

    The computation of nuclear second derivatives of energy, or the nuclear Hessian, is an essential routine in quantum chemical investigations of ground and transition states, thermodynamic calculations, and molecular vibrations. Analytic nuclear Hessian computations require the resolution of costly coupled-perturbed self-consistent field (CP-SCF) equations, while numerical differentiation of analytic first derivatives has an unfavorable 6N (N = number of atoms) prefactor. Herein, we present a new method in which grid computing is used to accelerate and/or enable the evaluation of the nuclear Hessian via numerical differentiation: NUMFREQ@Grid. Nuclear Hessians were successfully evaluated by NUMFREQ@Grid at the DFT level as well as using RIJCOSX-ZORA-MP2 or RIJCOSX-ZORA-B2PLYP for a set of linear polyacenes with systematically increasing size. For the larger members of this group, NUMFREQ@Grid was found to outperform the wall clock time of analytic Hessian evaluation; at the MP2 or B2PLYP levels, these Hessians cannot even be evaluated analytically. We also evaluated a 156-atom catalytically relevant open-shell transition metal complex and found that NUMFREQ@Grid is faster (7.7 times shorter wall clock time) and less demanding (4.4 times less memory requirement) than an analytic Hessian. Capitalizing on the capabilities of parallel grid computing, NUMFREQ@Grid can outperform analytic methods in terms of wall time, memory requirements, and treatable system size. The NUMFREQ@Grid method presented herein demonstrates how grid computing can be used to facilitate embarrassingly parallel computational procedures and is a pioneer for future implementations.
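
    To illustrate the numerical-differentiation core of such an approach (not the authors' implementation), here is a minimal Python sketch of a central-difference Hessian built from analytic gradients; `gradient_fn` is a hypothetical stand-in for a quantum-chemistry gradient call, and the serial loop marks the independent tasks that a computing grid would execute in parallel.

    ```python
    import numpy as np

    def numerical_hessian(gradient_fn, coords, step=1e-3):
        # Central-difference Hessian assembled from analytic first derivatives.
        # gradient_fn(coords) is assumed to return the gradient as an array of
        # shape (n_atoms, 3). Each of the 6N displaced-geometry gradients is an
        # independent task; here they are simply evaluated in a loop.
        n_atoms = coords.shape[0]
        dof = 3 * n_atoms
        hessian = np.zeros((dof, dof))
        for k in range(dof):
            atom, axis = divmod(k, 3)
            plus, minus = coords.copy(), coords.copy()
            plus[atom, axis] += step
            minus[atom, axis] -= step
            # row k of the Hessian: d(gradient)/d(coordinate k)
            hessian[k, :] = (gradient_fn(plus) - gradient_fn(minus)).reshape(-1) / (2 * step)
        return 0.5 * (hessian + hessian.T)   # symmetrize to damp numerical noise
    ```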

  8. Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Groves, Curtis; Ilie, Marcel; Schallhorn, Paul

    2014-01-01

    Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. The errors in CFD can be approximated via Richardson's extrapolation, a method based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or other uncertainty methods to approximate errors.
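
    For reference, the Richardson-extrapolation estimate that such a study builds on can be written in a few lines of Python. This sketch assumes scalar (or co-located) solution values from fine, medium and coarse grids with a constant refinement ratio and monotone convergence; the unstructured-grid case additionally requires the interpolation step the paper investigates.

    ```python
    import numpy as np

    def richardson_extrapolate(f1, f2, f3, r):
        # f1, f2, f3: solutions on the fine, medium and coarse grids;
        # r: constant grid refinement ratio between successive grids.
        p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)   # observed order of accuracy
        f_exact = f1 + (f1 - f2) / (r**p - 1.0)          # extrapolated estimate
        return f_exact, p
    ```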

  9. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65 deg delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  10. On automating domain connectivity for overset grids

    NASA Technical Reports Server (NTRS)

    Chiu, Ing-Tsau

    1994-01-01

    An alternative method for domain connectivity among systems of overset grids is presented. Reference uniform Cartesian systems of points are used to achieve highly efficient domain connectivity and form the basis for a future fully automated system. The Cartesian systems are used to approximate body surfaces and to map the computational space of component grids. By exploiting the characteristics of Cartesian systems, Chimera-type hole-cutting and identification of donor elements for intergrid boundary points can be carried out very efficiently. The method is tested for a range of geometrically complex multiple-body overset grid systems.

  11. Solution of Poisson equations for 3-dimensional grid generations. [computations of a flow field over a thin delta wing

    NASA Technical Reports Server (NTRS)

    Fujii, K.

    1983-01-01

    A method for generating three dimensional, finite difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, this indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.

  12. Methods and apparatus of analyzing electrical power grid data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Critchlow, Terence J.; Gibson, Tara D.

    Apparatus and methods of processing large-scale data regarding an electrical power grid are described. According to one aspect, a method of processing large-scale data regarding an electrical power grid includes accessing a large-scale data set comprising information regarding an electrical power grid; processing data of the large-scale data set to identify a filter which is configured to remove erroneous data from the large-scale data set; using the filter, removing erroneous data from the large-scale data set; and after the removing, processing data of the large-scale data set to identify an event detector which is configured to identify events of interest in the large-scale data set.

  13. Study of grid independence of finite element method on MHD free convective casson fluid flow with slip effect

    NASA Astrophysics Data System (ADS)

    Raju, R. Srinivasa; Ramesh, K.

    2018-05-01

    The purpose of this work is to study the grid independence of the finite element method for MHD Casson fluid flow past a vertically inclined plate embedded in a porous medium in the presence of chemical reaction, heat absorption, an external magnetic field and slip effects. For this study of grid independence, a mathematical model is developed and analyzed using an appropriate mathematical technique, the finite element method. The grid study is discussed with the help of numerical values of the velocity, temperature and concentration profiles in tabular form. Favourable comparisons with previously published work on various special cases of the problem are obtained.

  14. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
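
    The pressure stage rests on stochastic collocation: the deterministic model is solved at quadrature nodes of the random input, and statistics are recovered from the quadrature weights. The one-dimensional Gauss-Hermite sketch below illustrates only that idea; the dimension-adaptive Smolyak construction, the nested Kronrod–Patterson–Hermite rules, and the second (saturation) stage are not reproduced, and the "pressure" model is hypothetical.

```python
import numpy as np

def collocate_mean_var(model, order=7):
    """Gauss-Hermite stochastic collocation for a scalar model output driven
    by a single standard-normal random parameter xi."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)  # probabilists' rule
    w = weights / weights.sum()                                 # normalize to a pdf
    samples = np.array([model(xi) for xi in nodes])
    mean = np.sum(w * samples)
    var = np.sum(w * (samples - mean) ** 2)
    return mean, var

# Hypothetical "pressure" response to a log-normal permeability k = exp(0.5 * xi).
mean_p, var_p = collocate_mean_var(lambda xi: 1.0 / np.exp(0.5 * xi))
```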

  15. SU-C-209-03: Anti-Scatter Grid-Line Artifact Minimization for Removing the Grid Lines for Three Different Grids Used with a High Resolution CMOS Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, R; Bednarek, D; Rudin, S

    Purpose: To demonstrate the effectiveness of an anti-scatter grid artifact minimization method by removing the grid-line artifacts for three different grids used with a high-resolution CMOS detector. Method: Three different stationary x-ray grids were used with a high-resolution CMOS x-ray detector (Dexela 1207, 75 µm pixels, sensitive area 11.5 cm × 6.5 cm) to image a simulated artery block phantom (Nuclear Associates, Stenosis/Aneurysm Artery Block 76–705) combined with a frontal head phantom used as the scattering source. The x-ray parameters were 98 kVp, 200 mA, and 16 ms for all grids. With each of the three grids, two images were acquired: the first a scatter-less flat field including the grid, and the second of the object with the grid, which may still have some scatter transmission. Because scatter has a low spatial frequency distribution, it was represented by an estimated constant value as an initial approximation and subtracted from the image of the object with the grid before dividing by an average frame of the grid flat field with no scatter. The constant value was iteratively changed to minimize the residual grid-line artifact. This artifact minimization process was used for all three grids. Results: Anti-scatter grid-line artifacts were successfully eliminated in all three final images taken with the three different grids. The image contrast and CNR were compared before and after the correction, and also compared with those from the image of the object when no grid was used. The corrected images showed an increase in CNR of approximately 28%, 33%, and 25% for the three grids, as compared to the images when no grid at all was used. Conclusion: Anti-scatter grid-artifact minimization works effectively irrespective of the specifications of the grid when it is used with a high-spatial-resolution detector. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
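
    The iterative constant-scatter search can be sketched compactly: subtract a trial constant, divide by the scatter-free grid flat field, and score the residual grid-line content. The scoring below (an FFT magnitude at a known grid-line frequency) is an assumption about how that residual might be measured; the function names and inputs are illustrative.

```python
import numpy as np

def gridline_residual(img, grid_freq):
    """Residual grid-line strength: FFT magnitude of the column-averaged
    profile at the grid-line frequency (cycles per pixel along axis 0)."""
    profile = img.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    k = int(round(grid_freq * len(profile)))
    return spectrum[k]

def scatter_search_correction(obj_img, grid_flat, grid_freq, scatter_values):
    """For each trial constant scatter value: subtract it, divide by the
    scatter-free grid flat field, and keep the value that minimizes the
    residual grid-line artifact."""
    best = None
    for s in scatter_values:
        corrected = (obj_img - s) / grid_flat
        r = gridline_residual(corrected, grid_freq)
        if best is None or r < best[0]:
            best = (r, s, corrected)
    residual, scatter_estimate, corrected_img = best
    return corrected_img, scatter_estimate
```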

  16. A highly parallel multigrid-like method for the solution of the Euler equations

    NASA Technical Reports Server (NTRS)

    Tuminaro, Ray S.

    1989-01-01

    We consider a highly parallel multigrid-like method for the solution of the two-dimensional steady Euler equations. The new method, introduced as filtering multigrid, is similar to a standard multigrid scheme in that convergence on the finest grid is accelerated by iterations on coarser grids. In the filtering method, however, additional fine grid subproblems are processed concurrently with coarse grid computations to further accelerate convergence. These additional problems are obtained by splitting the residual into a smooth and an oscillatory component. The smooth component is then used to form a coarse grid problem (similar to standard multigrid) while the oscillatory component is used for a fine grid subproblem. The primary advantage in the filtering approach is that fewer iterations are required and that most of the additional work per iteration can be performed in parallel with the standard coarse grid computations. We generalize the filtering algorithm to a version suitable for nonlinear problems. We emphasize that this generalization is conceptually straightforward and relatively easy to implement. In particular, no explicit linearization (e.g., formation of Jacobians) needs to be performed (similar to the FAS multigrid approach). We illustrate the nonlinear version by applying it to the Euler equations and presenting numerical results. Finally, a performance evaluation is made based on execution time models and convergence information obtained from numerical experiments.

  17. Design and evaluation of a grid reciprocation scheme for use in digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Patel, Tushita; Sporkin, Helen; Peppard, Heather; Williams, Mark B.

    2016-03-01

    This work describes a methodology for efficient removal of scatter radiation during digital breast tomosynthesis (DBT). The goal of this approach is to enable grid image obscuration without a large increase in radiation dose by minimizing misalignment of the grid focal point (GFP) and x-ray focal spot (XFS) during grid reciprocation. Hardware for the motion scheme was built and tested on the dual modality breast tomosynthesis (DMT) scanner, which combines DBT and molecular breast tomosynthesis (MBT) on a single gantry. The DMT scanner uses fully isocentric rotation of tube and x-ray detector for maintaining a fixed tube-detector alignment during DBT imaging. A cellular focused copper prototype grid with 80 cm focal length, 3.85 mm height, 0.1 mm thick lamellae, and 1.1 mm hole pitch was tested. Primary transmission of the grid at 28 kV tube voltage was on average 74% with the grid stationary and aligned for maximum transmission. It fell to 72% during grid reciprocation by the proposed method. Residual grid line artifacts (GLAs) in projection views and reconstructed DBT images are characterized and methods for reducing the visibility of GLAs in the reconstructed volume through projection image flat-field correction and spatial frequency-based filtering of the DBT slices are described and evaluated. The software correction methods reduce the visibility of these artifacts in the reconstructed volume, making them imperceptible both in the reconstructed DBT images and their Fourier transforms.

  18. Geometric Stitching Method for Double Cameras with Weak Convergence Geometry

    NASA Astrophysics Data System (ADS)

    Zhou, N.; He, H.; Bao, Y.; Yue, C.; Xing, K.; Cao, S.

    2017-05-01

    In this paper, a new geometric stitching method is proposed which utilizes digital elevation model (DEM)-aided block adjustment to solve the relative orientation parameters for a dual camera with weak convergence geometry. A rational function model (RFM) with an affine transformation is chosen as the relative orientation model. To deal with the weak geometry, a reference DEM is used in this method as an additional constraint in the block adjustment, which only calculates the planimetric coordinates of tie points (TPs). The obtained affine transform coefficients are then used to generate a virtual grid and update the rational polynomial coefficients (RPCs) to complete the geometric stitching. Our proposed method was tested on GaoFen-2 (GF-2) dual-camera panchromatic (PAN) images. The test results show that the proposed method can achieve an accuracy of better than 0.5 pixel in planimetry and a seamless visual effect. For regions with small relief, when a global DEM with a 1 km grid, SRTM with a 90 m grid, or ASTER GDEM V2 with a 30 m grid replaced the 1 m DEM as the elevation constraint, there was almost no loss of accuracy. The test results prove the effectiveness and feasibility of the stitching method.

  19. Reduction of a grid moiré pattern by integrating a carbon-interspaced high precision x-ray grid with a digital radiographic detector.

    PubMed

    Yoon, Jai-Woong; Park, Young-Guk; Park, Chun-Joo; Kim, Do-Il; Lee, Jin-Ho; Chung, Nag-Kun; Choe, Bo-Young; Suh, Tae-Suk; Lee, Hyoung-Koo

    2007-11-01

    The stationary grid commonly used with a digital x-ray detector causes a moiré interference pattern due to the inadequate sampling of the grid shadows by the detector pixels. There are limitations with the previous methods used to remove the moiré, such as imperfect electromagnetic interference shielding and the loss of image information. A new method is proposed for removing the moiré pattern by integrating a carbon-interspaced, high-precision x-ray grid having high grid-line uniformity with the detector for frequency matching. The grid was aligned to the detector by translating and rotating the x-ray grid with respect to the detector using a microcontrolled alignment mechanism. The gap between the grid and the detector surface was adjusted with micrometer precision to precisely match the projected grid-line pitch to the detector pixel pitch. Considering the magnification of the grid shadows on the detector plane, the grids were manufactured such that the grid-line frequency was slightly higher than the detector sampling frequency. This study examined the factors that affect the moiré pattern, particularly the line frequency and displacement. The frequency of the moiré pattern was found to be sensitive to the angular displacement of the grid with respect to the detector, while the horizontal translation alters the phase but not the moiré frequency. The frequency of the moiré pattern also decreased with decreasing difference in frequency between the grid and the detector, and a moiré-free image was produced after complete matching for a given source-to-detector distance. The image quality factors, including the contrast, signal-to-noise ratio and uniformity in the images with and without the moiré pattern, were investigated.
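
    The frequency-matching argument can be made concrete with a small calculation: the moiré frequency is the beat between the projected (magnified) grid-line frequency and the detector sampling frequency, and it vanishes when the two match. The numbers below are hypothetical, not the grid or detector of this study.

```python
def projected_grid_pitch(grid_pitch_mm, sdd_mm, gap_mm):
    """Grid-shadow pitch on the detector plane, magnified by the small gap
    between the grid and the detector surface."""
    return grid_pitch_mm * sdd_mm / (sdd_mm - gap_mm)

def moire_frequency(grid_pitch_mm, pixel_pitch_mm, sdd_mm, gap_mm):
    """Beat frequency (cycles/mm) between the projected grid lines and the
    detector sampling; zero means the moiré pattern vanishes."""
    f_grid = 1.0 / projected_grid_pitch(grid_pitch_mm, sdd_mm, gap_mm)
    f_pixel = 1.0 / pixel_pitch_mm
    return abs(f_grid - f_pixel)

# Hypothetical numbers: a grid made slightly finer than the pixel pitch so that
# magnification over a 2 mm gap brings the projected pitch into registration.
print(moire_frequency(grid_pitch_mm=0.1427, pixel_pitch_mm=0.143, sdd_mm=1000.0, gap_mm=2.0))
```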

  20. A new ghost-node method for linking different models and initial investigations of heterogeneity and nonmatching grids

    USGS Publications Warehouse

    Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.

    2007-01-01

    A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.
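
    A sketch of the bilinear option for the child-model head-dependent boundaries is shown below: the ghost-node head is interpolated from the four surrounding parent-model nodes. The Darcy-weighted variant and the iterative parent-child coupling loop are not reproduced here, and the function name is illustrative.

```python
def bilinear_head(x, y, x0, x1, y0, y1, h00, h10, h01, h11):
    """Bilinear interpolation of parent-model heads at the four surrounding
    parent nodes (corners of the cell [x0,x1] x [y0,y1]) to a child-model
    ghost-node location (x, y)."""
    tx = (x - x0) / (x1 - x0)
    ty = (y - y0) / (y1 - y0)
    return ((1 - tx) * (1 - ty) * h00 + tx * (1 - ty) * h10
            + (1 - tx) * ty * h01 + tx * ty * h11)

# Example: ghost node at the cell center takes the average of the four heads.
h = bilinear_head(0.5, 0.5, 0.0, 1.0, 0.0, 1.0, 10.0, 12.0, 11.0, 13.0)  # -> 11.5
```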

  1. New multigrid approach for three-dimensional unstructured, adaptive grids

    NASA Technical Reports Server (NTRS)

    Parthasarathy, Vijayan; Kallinderis, Y.

    1994-01-01

    A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need for the generation of a sequence of independent, nonoverlapping grids as well as the relatively complicated operations that need to be performed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid resulted in a reduction in the number of iterations by a factor of 5.

  2. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring. Such aspects are: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for and broaden the applicability of estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green’s functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake, and depth, and the seismic moment (or, equivalently, the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green’s functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes, CAP+.
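
    The grid-search-with-time-shift idea can be sketched as follows; the synthetic library, the window handling, and the use of a circular shift are assumptions made for brevity, and the moment (amplitude) scaling step is omitted.

```python
import numpy as np

def best_shift_misfit(data, synth, max_shift):
    """Slide the synthetic within +/- max_shift samples (circular shift used
    for brevity) and return the minimum L2 misfit and the best shift."""
    best = (np.inf, 0)
    for s in range(-max_shift, max_shift + 1):
        m = np.sum((data - np.roll(synth, s)) ** 2)
        if m < best[0]:
            best = (m, s)
    return best

def cap_grid_search(data_windows, synth_library, max_shift=20):
    """Grid search over source mechanisms. synth_library is assumed to map a
    (strike, dip, rake, depth) tuple to synthetic windows matching
    data_windows; amplitude/moment scaling is not included in this sketch."""
    best = (np.inf, None)
    for mech, synth_windows in synth_library.items():
        misfit = sum(best_shift_misfit(d, s, max_shift)[0]
                     for d, s in zip(data_windows, synth_windows))
        if misfit < best[0]:
            best = (misfit, mech)
    return best
```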

  3. Deep learning for classification of islanding and grid disturbance based on multi-resolution singular spectrum entropy

    NASA Astrophysics Data System (ADS)

    Li, Tie; He, Xiaoyang; Tang, Junci; Zeng, Hui; Zhou, Chunying; Zhang, Nan; Liu, Hui; Lu, Zhuoxin; Kong, Xiangrui; Yan, Zheng

    2018-02-01

    Because islanding detection is easily confounded by grid disturbances, an island detection device may make misjudgments, with the consequence that photovoltaic systems are taken out of service unnecessarily. The detection device must therefore be able to distinguish islanding from grid disturbance. In this paper, the concept of deep learning is introduced into the classification of islanding and grid disturbance for the first time. A novel deep learning framework is proposed to detect and classify islanding or grid disturbance. The framework is a hybrid of wavelet transformation, multi-resolution singular spectrum entropy, and a deep learning architecture. As a signal processing method applied after the wavelet transformation, multi-resolution singular spectrum entropy combines multi-resolution analysis and spectrum analysis with entropy as the output, from which the intrinsic features distinguishing islanding from grid disturbance can be extracted. With the features extracted, deep learning is utilized to classify islanding and grid disturbance. Simulation results indicate that the method achieves its goal with high accuracy, so mistaken disconnection of the photovoltaic system from the power grid can be avoided.
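
    A sketch of the feature-extraction stage is given below, using PyWavelets for the wavelet decomposition and a Hankel-type trajectory matrix for the singular spectrum entropy of each level; the exact entropy construction, wavelet, window length, and decomposition depth used in the paper are assumptions here.

```python
import numpy as np
import pywt

def singular_spectrum_entropy(signal, window=16):
    """Shannon entropy of the normalized singular values of a trajectory
    (Hankel-like) matrix built from the signal."""
    n = len(signal) - window + 1
    if n < 2:
        return 0.0
    traj = np.array([signal[i:i + window] for i in range(n)])
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def mrsse_features(signal, wavelet="db4", level=4):
    """Multi-resolution singular spectrum entropy: one entropy value per
    wavelet decomposition level (approximation plus detail bands)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return [singular_spectrum_entropy(c) for c in coeffs]
```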

  4. DEM Based Modeling: Grid or TIN? The Answer Depends

    NASA Astrophysics Data System (ADS)

    Ogden, F. L.; Moreno, H. A.

    2015-12-01

    The availability of petascale supercomputing power has enabled process-based hydrological simulations on large watersheds and two-way coupling with mesoscale atmospheric models. Of course, with increasing watershed scale come corresponding increases in watershed complexity, including wide-ranging water management infrastructure and objectives, and ever-increasing demands for forcing data. Simulations of large watersheds using grid-based models apply a fixed resolution over the entire watershed. In large watersheds, this means an enormous number of grid cells, or coarsening of the grid resolution to reduce memory requirements. One alternative to grid-based methods is the triangular irregular network (TIN) approach. TINs provide the flexibility of variable resolution, which allows optimization of computational resources by providing high resolution where necessary and low resolution elsewhere. TINs also increase the required effort in model setup, parameter estimation, and coupling with forcing data, which are often gridded. This presentation discusses the costs and benefits of the use of TINs compared to grid-based methods, in the context of large watershed simulations within the traditional gridded WRF-HYDRO framework and the new TIN-based ADHydro high performance computing watershed simulator.

  5. GENIE(++): A Multi-Block Structured Grid System

    NASA Technical Reports Server (NTRS)

    Williams, Tonya; Nadenthiran, Naren; Thornburg, Hugh; Soni, Bharat K.

    1996-01-01

    The computer code GENIE++ is a continuously evolving grid system containing a multitude of proven geometry/grid techniques. The generation process in GENIE++ is based on an earlier version. The process uses several techniques either separately or in combination to quickly and economically generate sculptured geometry descriptions and grids for arbitrary geometries. The computational mesh is formed by using an appropriate algebraic method. Grid clustering is accomplished with either exponential or hyperbolic tangent routines which allow the user to specify a desired point distribution. Grid smoothing can be accomplished by using an elliptic solver with proper forcing functions. B-spline and Non-Uniform Rational B-splines (NURBS) algorithms are used for surface definition and redistribution. The built-in sculptured geometry definition with desired distribution of points, automatic Bezier curve/surface generation for interior boundaries/surfaces, and surface redistribution is based on NURBS. Weighted Lagrange/Hermite transfinite interpolation methods, interactive geometry/grid manipulation modules, and on-line graphical visualization of the generation process are salient features of this system which result in a significant time savings for a given geometry/grid application.
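
    A generic one-sided hyperbolic-tangent clustering law of the kind mentioned is sketched below; the exact stretching functions and parameters used in GENIE++ are not specified in this record, so this is only an illustration.

```python
import numpy as np

def tanh_cluster(n, beta=2.5):
    """One-sided hyperbolic-tangent point distribution on [0, 1].

    Larger beta packs more points near s = 0; beta -> 0 approaches a uniform
    distribution. This is a generic clustering law, not GENIE++'s exact one.
    """
    eta = np.linspace(0.0, 1.0, n)
    return 1.0 + np.tanh(beta * (eta - 1.0)) / np.tanh(beta)

s = tanh_cluster(11, beta=3.0)   # monotone from 0 to 1, clustered toward 0
```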

  6. Black box multigrid

    NASA Technical Reports Server (NTRS)

    Dendy, J. E., Jr.

    1981-01-01

    The black box multigrid (BOXMG) code, which needs only a specification of the matrix problem to apply the multigrid method, was investigated. It is contended that a major problem with the multigrid method is that each new grid configuration requires a major programming effort to develop a code that specifically handles that grid configuration. The SOR and ICCG methods, in contrast, require only a specification of the matrix problem, no matter what the grid configuration. It is concluded that BOXMG does everything else necessary to set up the auxiliary coarser problems to achieve a multigrid solution.

  7. Conservative Overset Grids for Overflow For The Sonic Wave Atmospheric Propagation Project

    NASA Technical Reports Server (NTRS)

    Onufer, Jeff T.; Cummings, Russell M.

    1999-01-01

    Methods are presented that can be used to make multiple, overset grids communicate in a conservative manner. The methods are developed for use with the Chimera overset method using the PEGSUS code and the OVERFLOW solver.

  8. The eGo grid model: An open-source and open-data based synthetic medium-voltage grid model for distribution power supply systems

    NASA Astrophysics Data System (ADS)

    Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.

    2018-02-01

    The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local search metaheuristic. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.

  9. Assessment of grid optimisation measures for the German transmission grid using open source grid data

    NASA Astrophysics Data System (ADS)

    Böing, F.; Murmann, A.; Pellinger, C.; Bruckmeier, A.; Kern, T.; Mongin, T.

    2018-02-01

    The expansion of capacities in the German transmission grid is a necessity for further integration of renewable energy sources into the electricity sector. In this paper, the grid optimisation measures ‘Overhead Line Monitoring’, ‘Power-to-Heat’ and ‘Demand Response in the Industry’ are evaluated and compared against conventional grid expansion for the year 2030. Initially, the methodological approach of the simulation model is presented and detailed descriptions of the grid model and the grid data used, which partly originate from open-source platforms, are provided. Further, this paper explains how ‘Curtailment’ and ‘Redispatch’ can be reduced by implementing grid optimisation measures and how the depreciation of economic costs can be determined considering construction costs. The developed simulations show that the conventional grid expansion is more efficient and provides more grid-relieving effects than the evaluated grid optimisation measures.

  10. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  11. Noniterative three-dimensional grid generation using parabolic partial differential equations

    NASA Technical Reports Server (NTRS)

    Edwards, T. A.

    1985-01-01

    A new algorithm for generating three-dimensional grids has been developed and implemented which numerically solves a parabolic partial differential equation (PDE). The solution procedure marches outward in two coordinate directions, and requires inversion of a scalar tridiagonal system in the third. Source terms have been introduced to control the spacing and angle of grid lines near the grid boundaries, and to control the outer boundary point distribution. The method has been found to generate grids about 100 times faster than comparable grids generated via solution of elliptic PDEs, and produces smooth grids for finite-difference flow calculations.
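
    The marching scheme needs only the inversion of a scalar tridiagonal system in the third direction; a standard Thomas-algorithm solver of the kind required is sketched below (generic, not the code of this work).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (all length n; a[0], c[-1] unused)."""
    n = len(d)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small check: this symmetric system has the solution [1, 1, 1].
x = thomas(np.array([0., 1., 1.]), np.array([2., 2., 2.]),
           np.array([1., 1., 0.]), np.array([3., 4., 3.]))
```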

  12. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
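
    A one-dimensional sketch of the spring-analogy redistribution is given below: interval weights grow with the local solution gradient so that points migrate toward high-gradient regions, and clipping the weights plays the role of the user-specified spacing limits. The relaxation loop and parameter names are illustrative, and the discrete solution values are simply carried with their points rather than re-evaluated, as they would be in practice.

```python
import numpy as np

def adapt_points_1d(x, u, iterations=50, relax=0.5, w_min=1.0, w_max=20.0):
    """Equidistribution-style point redistribution (a 1D spring analogy).

    Each interior point is pulled toward the weighted average of its
    neighbours; interval weights are based on the local solution gradient,
    and clipping them bounds how strongly the spacing can change."""
    x = x.copy()
    for _ in range(iterations):
        grad = np.abs(np.gradient(u, x))
        w = np.clip(1.0 + grad, w_min, w_max)
        w_edge = 0.5 * (w[:-1] + w[1:])                    # one weight per interval
        target = (w_edge[:-1] * x[:-2] + w_edge[1:] * x[2:]) / (w_edge[:-1] + w_edge[1:])
        x[1:-1] += relax * (target - x[1:-1])
    return x

# Example: points drift toward the steep front of a tanh profile.
x0 = np.linspace(0.0, 1.0, 41)
u0 = np.tanh(20.0 * (x0 - 0.5))
x_adapted = adapt_points_1d(x0, u0)
```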

  13. A Comparative Study of Three Methodologies for Modeling Dynamic Stall

    NASA Technical Reports Server (NTRS)

    Sankar, L.; Rhee, M.; Tung, C.; ZibiBailly, J.; LeBalleur, J. C.; Blaise, D.; Rouzaud, O.

    2002-01-01

    During the past two decades, there has been an increased reliance on the use of computational fluid dynamics methods for modeling rotors in high-speed forward flight. Computational methods are being developed for modeling the shock-induced loads on the advancing side, first-principles-based modeling of the trailing wake evolution, and retreating blade stall. The retreating blade dynamic stall problem has received particular attention, because the large variations in lift and pitching moments encountered in dynamic stall can lead to blade vibrations and pitch link fatigue. Restricting attention to aerodynamics, the numerical prediction of dynamic stall is still a complex and challenging CFD problem that, even in two dimensions at low speed, gathers the major difficulties of aerodynamics, such as the grid resolution requirements for the viscous phenomena at leading-edge bubbles or in mixing layers and the bias of the numerical viscosity, together with the major difficulties of physical modeling, such as the turbulence and transition models, whose determinant influences, already present in static maximum-lift or stall computations, are accentuated by the dynamic nature of the phenomena.

  14. An Off-Grid Turbo Channel Estimation Algorithm for Millimeter Wave Communications.

    PubMed

    Han, Lingyi; Peng, Yuexing; Wang, Peng; Li, Yonghui

    2016-09-22

    The bandwidth shortage has motivated the exploration of the millimeter wave (mmWave) frequency spectrum for future communication networks. To compensate for the severe propagation attenuation in the mmWave band, massive antenna arrays can be adopted at both the transmitter and receiver to provide large array gains via directional beamforming. To achieve such array gains, channel estimation (CE) with high resolution and low latency is of great importance for mmWave communications. However, classic super-resolution subspace CE methods such as multiple signal classification (MUSIC) and estimation of signal parameters via rotation invariant technique (ESPRIT) cannot be applied here due to RF chain constraints. In this paper, an enhanced CE algorithm is developed for the off-grid problem that arises when quantizing the angles of the mmWave channel in the spatial domain; the off-grid problem refers to the scenario in which angles do not lie on the quantization grid, which occurs with high probability and results in power leakage and a severe reduction of the CE performance. A new model is first proposed to formulate the off-grid problem. The new model divides the continuously-distributed angle into a quantized discrete grid part, referred to as the integral grid angle, and an offset part, termed fractional off-grid angle. Accordingly, an iterative off-grid turbo CE (IOTCE) algorithm is proposed to renew and upgrade the CE between the integral grid part and the fractional off-grid part under the Turbo principle. By fully exploiting the sparse structure of mmWave channels, the integral grid part is estimated by a soft-decoding based compressed sensing (CS) method called improved turbo compressed channel sensing (ITCCS). It iteratively updates the soft information between the linear minimum mean square error (LMMSE) estimator and the sparsity combiner. Monte Carlo simulations are presented to evaluate the performance of the proposed method, and the results show that it enhances the angle detection resolution greatly.

  15. Transformation of two and three-dimensional regions by elliptic systems

    NASA Technical Reports Server (NTRS)

    Mastin, C. Wayne

    1994-01-01

    Several reports are attached to this document which contain the results of our research at the end of this contract period. Three of the reports deal with our work on generating surface grids. One is a preprint of a paper which will appear in the journal Applied Mathematics and Computation. Another is the abstract from a dissertation which has been prepared by Ahmed Khamayseh, a graduate student who has been supported by this grant for the last two years. The last report on surface grids is the extended abstract of a paper to be presented at the 14th IMACS World Congress in July. This report contains results on conformal mappings of surfaces, which are closely related to elliptic methods for surface grid generation. A preliminary report is included on new methods for dealing with block interfaces in multiblock grid systems. The development work is complete and the methods will eventually be incorporated into the National Grid Project (NGP) grid generation code. Thus, the attached report contains only a simple grid system which was used to test the algorithms to prove that the concepts are sound. These developments will greatly aid grid control when using elliptic systems and prevent unwanted grid movement. The last report is a brief summary of some timings that were obtained when the multiblock grid generation code was run on the Intel IPSC/860 hypercube. Since most of the data in a grid code is local to a particular block, only a small fraction of the total data must be passed between processors. The data is also distributed among the processors so that the total size of the grid can be increased along with the number of processors. This work is only in a preliminary stage. However, one of the ERC graduate students has taken an interest in the project and is presently extending these results as a part of his master's thesis.

  16. Benchmarking an Unstructured-Grid Model for Tsunami Current Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Yinglong J.; Priest, George; Allan, Jonathan; Stimely, Laura

    2016-12-01

    We present model results derived from a tsunami current benchmarking workshop held by the NTHMP (National Tsunami Hazard Mitigation Program) in February 2015. Modeling was undertaken using our own 3D unstructured-grid model that has been previously certified by the NTHMP for tsunami inundation. Results for two benchmark tests are described here, including: (1) vortex structure in the wake of a submerged shoal and (2) impact of tsunami waves on Hilo Harbor in the 2011 Tohoku event. The modeled current velocities are compared with available lab and field data. We demonstrate that the model is able to accurately capture the velocity field in the two benchmark tests; in particular, the 3D model gives a much more accurate wake structure than the 2D model for the first test, with the root-mean-square error and mean bias no more than 2 cm s⁻¹ and 8 mm s⁻¹, respectively, for the modeled velocity.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griswold, M. E., E-mail: mgriswold@trialphaenergy.com; Korepanov, S.; Thompson, M. C.

    An end loss analyzer system consisting of electrostatic, gridded retarding-potential analyzers and pyroelectric crystal bolometers was developed to characterize the plasma loss along open field lines to the divertors of C-2U. The system measures the current and energy distribution of escaping ions as well as the total power flux to enable calculation of the energy lost per escaping electron/ion pair. Special care was taken in the construction of the analyzer elements so that they can be directly mounted to the divertor electrode. An attenuation plate at the entrance to the gridded retarding-potential analyzer reduces plasma density by a factor of 60 to prevent space charge limitations inside the device, without sacrificing its angular acceptance of ions. In addition, all of the electronics for the measurement are isolated from ground so that they can float to the bias potential of the electrode, 2 kV below ground.

  18. A bulk viscosity approach for shock capturing on unstructured grids

    NASA Astrophysics Data System (ADS)

    Shoeybi, Mohammad; Larsson, Nils Johan; Ham, Frank; Moin, Parviz

    2008-11-01

    The bulk viscosity approach for shock capturing (Cook and Cabot, JCP, 2005) augments the bulk part of the viscous stress tensor. The intention is to capture shock waves without dissipating turbulent structures. The present work extends and modifies this method for unstructured grids. We propose a method that properly scales the bulk viscosity with the grid spacing normal to the shock for unstructured grids, for which the shock is not necessarily aligned with the grid. The magnitude of the strain rate tensor used in the original formulation is replaced with the dilatation, which appears to be more appropriate in the vortical turbulent flow regions (Mani et al., 2008). The original form of the model is found to have an impact on dilatational motions away from the shock wave, which is eliminated by a proposed localization of the bulk viscosity. Finally, to allow for grid adaptation around shock waves, an explicit/implicit time advancement scheme has been developed that adaptively identifies the stiff regions. The full method has been verified with several test cases, including 2D shock-vorticity entropy interaction, homogeneous isotropic turbulence, and turbulent flow over a cylinder.
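
    The scaling described above can be sketched in one dimension: the artificial bulk viscosity grows with the local grid spacing squared and the dilatation magnitude, and is switched off outside compression regions. The coefficient and the simple sign-based localization below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def artificial_bulk_viscosity(rho, u, x, c_beta=1.0):
    """Artificial bulk viscosity beta ~ C * rho * dx**2 * |dilatation|,
    switched on only in compression (negative dilatation) regions so that
    dilatational motions away from shocks are left untouched.

    rho, u, x are 1D numpy arrays of density, velocity, and node positions."""
    dx = np.gradient(x)                       # local grid spacing
    theta = np.gradient(u, x)                 # 1D dilatation du/dx
    beta = c_beta * rho * dx**2 * np.abs(theta)
    beta[theta >= 0.0] = 0.0                  # localize to compressions
    return beta
```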

  19. An Approach for Dynamic Grids

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Liou, Meng-Sing; Hindman, Richard G.

    1994-01-01

    An approach is presented for the generation of two-dimensional, structured, dynamic grids. The grid motion may be due to the motion of the boundaries of the computational domain or to the adaptation of the grid to the transient, physical solution. A time-dependent grid is computed through the time integration of the grid speeds which are computed from a system of grid speed equations. The grid speed equations are derived from the time-differentiation of the grid equations so as to ensure that the dynamic grid maintains the desired qualities of the static grid. The grid equations are the Euler-Lagrange equations derived from a variational statement for the grid. The dynamic grid method is demonstrated for a model problem involving boundary motion, an inviscid flow in a converging-diverging nozzle during startup, and a viscous flow over a flat plate with an impinging shock wave. It is shown that the approach is more accurate for transient flows than an approach in which the grid speeds are computed using a finite difference with respect to time of the grid. However, the approach requires significantly more computational effort.

  20. Comparison of the accuracy of hemoglobin point of care testing using HemoCue and GEM Premier 3000 with automated hematology analyzer in emergency room.

    PubMed

    Zatloukal, Jan; Pouska, Jiri; Kletecka, Jakub; Pradl, Richard; Benes, Jan

    2016-12-01

    Laboratory analysis provides accurate but time-consuming hemoglobin level estimation, especially in the emergency setting. The reliability of time-sparing point-of-care (POCT) devices remains uncertain. We tested the accuracy of two POCT devices (HemoCue® 201+ and GEM® Premier™ 3000) in routine emergency department workflow. Blood samples taken from patients admitted to the emergency department were analyzed for hemoglobin concentration using a laboratory reference Beckman Coulter LH 750 (HbLAB), the HemoCue (HbHC) and the GEM Premier 3000 (HbGEM). Pairwise comparison of each device with HbLAB was performed using correlation and Bland-Altman methods. The reliability of the transfusion decision was assessed using a three-zone error grid. A total of 292 measurements were performed in 99 patients. Mean hemoglobin levels were 115 ± 33, 110 ± 28 and 111 ± 30 g/l for HbHC, HbGEM and HbLAB, respectively. A significant correlation was observed for both devices: HbHC versus HbLAB (r² = 0.93, p < 0.001) and HbGEM versus HbLAB (r² = 0.86, p < 0.001). The Bland-Altman method revealed a bias of -3.7 g/l (limits of agreement -20.9 to 13.5) for HbHC versus HbLAB and 2.5 g/l (-18.6 to 23.5) for HbGEM versus HbLAB, which differed significantly between the POCT devices (p < 0.001). Using the error grid methodology, 94% and 91% of values (HbHC and HbGEM) fell in the zone of acceptable difference (A), whereas 0% and 1% (HbHC and HbGEM) were unacceptable (zone C). The absolute accuracy of the tested POCT devices was low despite a high level of correlation with the laboratory measurement. The results of Morey's error grid were unfavorable for both POCT devices.
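
    The Bland-Altman quantities reported above (bias and limits of agreement) follow directly from the paired differences; a small numpy sketch with hypothetical paired readings is shown below.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    paired measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired hemoglobin readings (g/l): POCT device vs. laboratory.
poct = [118, 102, 131, 95, 140, 88]
lab = [121, 104, 128, 99, 143, 90]
bias, (lo, hi) = bland_altman(poct, lab)
```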

  1. European Forest Cover During the Past 12,000 Years: A Palynological Reconstruction Based on Modern Analogs and Remote Sensing

    PubMed Central

    Zanon, Marco; Davis, Basil A. S.; Marquer, Laurent; Brewer, Simon; Kaplan, Jed O.

    2018-01-01

    Characterization of land cover change in the past is fundamental to understanding the evolution and present state of the Earth system, the amount of carbon and nutrient stocks in terrestrial ecosystems, and the role played by land-atmosphere interactions in influencing climate. The estimation of land cover changes using palynology is a mature field, as thousands of sites in Europe have been investigated over the last century. Nonetheless, a quantitative land cover reconstruction at a continental scale has been largely missing. Here, we present a series of maps detailing the evolution of European forest cover during the last 12,000 years. Our reconstructions are based on the Modern Analog Technique (MAT): a calibration dataset is built by coupling modern pollen samples with the corresponding satellite-based forest-cover data. Fossil reconstructions are then performed by assigning to every fossil sample the average forest cover of its closest modern analogs. The occurrence of fossil pollen assemblages with no counterparts in modern vegetation represents a known limit of analog-based methods. To lessen the influence of no-analog situations, pollen taxa were converted into plant functional types prior to running the MAT algorithm. We then interpolate site-specific reconstructions for each timeslice using a four-dimensional gridding procedure to create continuous gridded maps at a continental scale. The performance of the MAT is compared against methodologically independent forest-cover reconstructions produced using the REVEALS method. MAT and REVEALS estimates are most of the time in good agreement at a trend level, yet MAT regularly underestimates the occurrence of densely forested situations, requiring the application of a bias correction procedure. The calibrated MAT-based maps draw a coherent picture of the establishment of forests in Europe in the Early Holocene with the greatest forest-cover fractions reconstructed between ∼8,500 and 6,000 calibrated years BP. This forest maximum is followed by a general decline in all parts of the continent, likely as a result of anthropogenic deforestation. The continuous spatial and temporal nature of our reconstruction, its continental coverage, and gridded format make it suitable for climate, hydrological, and biogeochemical modeling, among other uses. PMID:29568303
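
    The modern-analog step can be sketched as a nearest-neighbor average: find the modern pollen (or plant-functional-type) spectra closest to the fossil sample and average their satellite-derived forest cover. The squared-chord distance and the number of analogs used below are common choices in palynology but are assumptions here, not necessarily those of the paper.

```python
import numpy as np

def squared_chord(p, q):
    """Squared-chord distance between two proportion vectors."""
    return np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

def mat_forest_cover(fossil, modern_spectra, modern_cover, k=5):
    """Average the satellite-derived forest cover of the k closest modern
    analogs of a fossil pollen (or plant-functional-type) spectrum."""
    d = np.array([squared_chord(fossil, m) for m in modern_spectra])
    nearest = np.argsort(d)[:k]
    return float(np.mean(np.asarray(modern_cover)[nearest]))
```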

  3. Surface Modeling, Grid Generation, and Related Issues in Computational Fluid Dynamic (CFD) Solutions

    NASA Technical Reports Server (NTRS)

    Choo, Yung K. (Compiler)

    1995-01-01

    The NASA Steering Committee for Surface Modeling and Grid Generation (SMAGG) sponsored a workshop on surface modeling, grid generation, and related issues in Computational Fluid Dynamics (CFD) solutions at Lewis Research Center, Cleveland, Ohio, May 9-11, 1995. The workshop provided a forum to identify industry needs, strengths, and weaknesses of the five grid technologies (patched structured, overset structured, Cartesian, unstructured, and hybrid), and to exchange thoughts about where each technology will be in 2 to 5 years. The workshop also provided opportunities for engineers and scientists to present new methods, approaches, and applications in SMAGG for CFD. This Conference Publication (CP) consists of papers on industry overview, NASA overview, five grid technologies, new methods/ approaches/applications, and software systems.

  4. Feasibility of a simple method of hybrid collimation for megavoltage grid therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almendral, Pedro; Mancha, Pedro J.; Roberto, Daniel

    2013-05-15

    Purpose: Megavoltage grid therapy is currently delivered with step-and-shoot multisegment techniques or using a high attenuation block with divergent holes. However, the commercial availability of grid blocks is limited, their construction is difficult, and step-and-shoot techniques require longer treatment times and are not practical with some multileaf collimators. This work studies the feasibility of a hybrid collimation system for grid therapy that does not require multiple segments and can be easily implemented with widely available technical means. Methods: The authors have developed a system to generate a grid of beamlets by the simultaneous use of two perpendicular sets of equally spaced leaves that project stripe patterns in orthogonal directions. One of them is generated with the multileaf collimator integrated in the accelerator and the other with an in-house made collimator constructed with a low melting point alloy commonly available at radiation oncology departments. The characteristics of the grid fields for 6 and 18 MV have been studied with a shielded diode, an unshielded diode, and radiochromic film. Results: The grid obtained with the hybrid collimation is similar to some of the grids used clinically with respect to the beamlet size (about 1 cm) and the percentage of open beam (1/4 of the total field). The grid fields are less penetrating than the open fields of the same energy. Depending on the depth and the direction of the profiles (diagonal or along the principal axes), the measured valley-to-peak dose ratios range from 5% to 16% for 6 MV and from 9% to 20% for 18 MV. All the detectors yield similar results in the measurement of profiles and percent depth dose, but the shielded diode seems to overestimate the output factors. Conclusions: The combination of two stripe pattern collimators in orthogonal directions is a feasible method to obtain two-dimensional arrays of beamlets and has potential usefulness as an efficient way to deliver grid therapy. The implementation of this method is technically simpler than the construction of a conventional grid block.

  5. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  6. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far-field sourcing, and far-field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid and pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  7. Islanding detection technique using wavelet energy in grid-connected PV system

    NASA Astrophysics Data System (ADS)

    Kim, Il Song

    2016-08-01

    This paper proposes a new islanding detection method using wavelet energy in a grid-connected photovoltaic system. The method detects spectral changes in the higher-frequency components of the point-of-common-coupling voltage and obtains wavelet coefficients by multilevel wavelet analysis. The autocorrelation of the wavelet coefficients can clearly identify islanding, even in the presence of variations of the grid voltage harmonics during normal operating conditions. The advantage of the proposed method is that it can detect islanding conditions that the conventional under-voltage/over-voltage/under-frequency/over-frequency methods fail to detect. The theoretical method for obtaining the wavelet energies is developed and verified by experimental results.

  8. 3D magnetospheric parallel hybrid multi-grid method applied to planet–plasma interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, L., E-mail: ludivine.leclercq@latmos.ipsl.fr; Modolo, R., E-mail: ronan.modolo@latmos.ipsl.fr; Leblanc, F.

    2016-03-15

    We present a new method to exploit multiple refinement levels within a 3D parallel hybrid model, developed to study planet–plasma interactions. This model is based on the hybrid formalism: ions are kinetically treated whereas electrons are considered an inertia-less fluid. Generally, ions are represented by numerical particles whose size equals the volume of the cells. Particles that leave a coarse grid and subsequently enter a refined region are split into particles whose volume corresponds to the volume of the refined cells. The number of refined particles created from a coarse particle depends on the grid refinement rate. In order to conserve velocity distribution functions and to avoid calculations of average velocities, particles are not coalesced. Moreover, to ensure the constancy of particles' shape function sizes, the hybrid method is adapted to allow refined particles to move within a coarse region. Another innovation of this approach is the method developed to compute grid moments at interfaces between two refinement levels. Indeed, the hybrid method is adapted to accurately account for the special grid structure at the interfaces, avoiding any overlapping grid considerations. Some fundamental test runs were performed to validate our approach (e.g. quiet plasma flow, Alfven wave propagation). Lastly, we also show a planetary application of the model, simulating the interaction between Jupiter's moon Ganymede and the Jovian plasma.
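
    The particle-splitting step can be sketched as follows: a coarse macro-particle entering a refined region is replaced by ratio**3 children that inherit its velocity and share its statistical weight, so the velocity distribution is preserved without any averaging. The placement of the children at refined sub-cell centers is an assumption for illustration.

```python
import numpy as np

def split_particle(pos, vel, weight, cell_size, ratio=2):
    """Split one coarse macro-particle into ratio**3 refined particles.

    Children inherit the parent velocity (no coalescing, so the velocity
    distribution is untouched) and share the parent's weight equally; here
    they are placed at the centers of the refined sub-cells of the parent."""
    n = ratio ** 3
    sub = cell_size / ratio
    children = []
    for i in range(ratio):
        for j in range(ratio):
            for k in range(ratio):
                offset = (np.array([i, j, k]) + 0.5) * sub - 0.5 * cell_size
                children.append((pos + offset, vel.copy(), weight / n))
    return children

# Example: one coarse particle becomes 8 refined particles of equal weight.
kids = split_particle(np.zeros(3), np.array([400.0, 0.0, 0.0]), 1.0, cell_size=1.0)
```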

  9. A computing method for spatial accessibility based on grid partition

    NASA Astrophysics Data System (ADS)

    Ma, Linbing; Zhang, Xinchang

    2007-06-01

    An accessibility computing method and process based on grid partition is put forward in this paper. As two important factors affecting traffic, the density of the road network and the relative spatial resistance of different land uses are integrated into the computation of the traffic cost in each grid cell. The A* algorithm is introduced to search for the optimum-traffic-cost path across the grid; a detailed search process and the definition of the heuristic evaluation function are described in the paper. As a result, the method can be implemented simply and its data sources are easy to obtain. Moreover, by changing the heuristic search information, more reasonable computing results can be obtained. To confirm our research, a software package was developed in C# under the ArcEngine9 environment. Applying the computing method, a case study on the accessibility of business districts in Guangzhou city was carried out.
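
    A compact A* sketch over a raster of per-cell traffic costs is given below; the construction of those costs from road-network density and land-use resistance is assumed to have been done beforehand, and the admissible heuristic (Manhattan distance times the minimum cell cost) is one reasonable choice rather than the paper's exact evaluation function.

```python
import heapq
import numpy as np

def a_star(cost, start, goal):
    """A* over a 2D array of per-cell traversal costs (4-connected moves).

    Heuristic = Manhattan distance * minimum cell cost, which is admissible,
    so the returned accumulated traffic cost is optimal."""
    h = lambda p: cost.min() * (abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
    best = {start: 0.0}
    frontier = [(h(start), 0.0, start)]
    while frontier:
        f, g, cur = heapq.heappop(frontier)
        if cur == goal:
            return g
        if g > best.get(cur, np.inf):
            continue                                   # stale queue entry
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if 0 <= nxt[0] < cost.shape[0] and 0 <= nxt[1] < cost.shape[1]:
                ng = g + cost[nxt]
                if ng < best.get(nxt, np.inf):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return np.inf

traffic_cost = np.array([[1., 1., 5.], [1., 9., 1.], [1., 1., 1.]])
total = a_star(traffic_cost, (0, 0), (2, 2))
```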

  10. Mapping species distributions with MAXENT using a geographically biased sample of presence data: a performance assessment of methods for correcting sampling bias.

    PubMed

    Fourcade, Yoan; Engler, Jan O; Rödder, Dennis; Secondi, Jean

    2014-01-01

    MAXENT is now a common species distribution modeling (SDM) tool used by conservation practitioners for predicting the distribution of a species from a set of records and environmental predictors. However, datasets of species occurrence used to train the model are often biased in the geographical space because of unequal sampling effort across the study area. This bias may be a source of strong inaccuracy in the resulting model and could lead to incorrect predictions. Although a number of sampling bias correction methods have been proposed, there is no consensual guideline to account for it. We compared here the performance of five methods of bias correction on three datasets of species occurrence: one "virtual" derived from a land cover map, and two actual datasets for a turtle (Chrysemys picta) and a salamander (Plethodon cylindraceus). We subjected these datasets to four types of sampling biases corresponding to potential types of empirical biases. We applied five correction methods to the biased samples and compared the outputs of distribution models to unbiased datasets to assess the overall correction performance of each method. The results revealed that the ability of methods to correct the initial sampling bias varied greatly depending on bias type, bias intensity and species. However, the simple systematic sampling of records consistently ranked among the best performing across the range of conditions tested, whereas other methods performed more poorly in most cases. The strong effect of initial conditions on correction performance highlights the need for further research to develop a step-by-step guideline to account for sampling bias. However, this method seems to be the most efficient in correcting sampling bias and should be advised in most cases.
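
    The best-performing correction, systematic sampling of records, amounts to spatial thinning: occurrences are binned on a regular grid and one record is kept per occupied cell, which evens out geographically uneven sampling effort. The cell size and the random tie-breaking below are illustrative assumptions.

```python
import numpy as np

def systematic_sample(lon, lat, cell_deg=0.5, seed=0):
    """Keep one occurrence record per grid cell (spatial thinning /
    systematic sampling). Returns the indices of the retained records."""
    rng = np.random.default_rng(seed)
    kept = {}
    for i in rng.permutation(len(lon)):        # random pick within each cell
        key = (int(np.floor(lon[i] / cell_deg)), int(np.floor(lat[i] / cell_deg)))
        kept.setdefault(key, i)
    return sorted(kept.values())

# Example: three records in the same cell collapse to a single record.
idx = systematic_sample([10.1, 10.2, 10.3, 42.0], [50.1, 50.2, 50.3, 7.0])
```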

  12. Interactive grid adaption

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Everton, Eric L.

    1990-01-01

    An interactive grid adaption method is developed, discussed and applied to the unsteady flow about an oscillating airfoil. The user is allowed to have direct interaction with the adaption of the grid as well as the solution procedure. Grid points are allowed to adapt simultaneously to several variables. In addition to the theory and results, the hardware and software requirements are discussed.

  13. Two and three dimensional grid generation by an algebraic homotopy procedure

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1990-01-01

    An algebraic method for generating two- and three-dimensional grid systems for aerospace vehicles is presented. The method is based on algebraic procedures derived from homotopic relations for blending between inner and outer boundaries of any given configuration. Stable properties of homotopic maps have been exploited to provide near-orthogonality and specified constant spacing at the inner boundary. The method has been successfully applied to analytically generated blended wing-body configurations as well as discretely defined geometries such as the High-Speed Civil Transport Aircraft. Grid examples representative of the capabilities of the method are presented.
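
    A minimal sketch of the kind of algebraic blending involved, assuming matching point counts on the two boundaries: each grid surface is a straight-line homotopy between the inner and outer boundary, with a stretching exponent used to cluster surfaces near the body; the names and the particular blend are illustrative, not the paper's formulation.

        import numpy as np

        def blend_grid(inner, outer, n_layers, stretch=1.0):
            """Build a structured grid by blending between two boundary curves.

            inner, outer : (n_points, 2) arrays sampling the inner and outer boundaries
            n_layers     : number of grid surfaces from inner to outer (>= 2)
            stretch      : values > 1 cluster surfaces near the inner boundary
            """
            inner, outer = np.asarray(inner, float), np.asarray(outer, float)
            grid = np.empty((n_layers, inner.shape[0], 2))
            for k in range(n_layers):
                t = (k / (n_layers - 1)) ** stretch      # homotopy parameter in [0, 1]
                grid[k] = (1.0 - t) * inner + t * outer  # straight-line homotopy H(x, t)
            return grid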

  14. PULSE HEIGHT ANALYZER

    DOEpatents

    Johnstone, C.W.

    1958-01-21

    An anticoincidence device is described for a pair of adjacent channels of a multi-channel pulse height analyzer for preventing the lower channel from generating a count pulse in response to an input pulse when the input pulse has sufficient magnitude to reach the upper level channel. The anticoincidence circuit comprises a window amplifier, upper and lower level discriminators, and a biased-off amplifier. The output of the window amplifier is coupled to the inputs of the discriminators, the output of the upper level discriminator is connected to the resistance end of a series R-C network, the output of the lower level discriminator is coupled to the capacitance end of the R-C network, and the grid of the biased-off amplifier is coupled to the junction of the R-C network. In operation each discriminator produces a negative pulse output when the input pulse traverses its voltage setting. As a result of the connections to the R-C network, a trigger pulse will be sent to the biased-off amplifier when the incoming pulse level is sufficient to trigger only the lower level discriminator.
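
    As a software analogue of the anticoincidence behaviour described above (the lower channel registers a pulse only when it crosses the lower discriminator setting but not the upper one), here is a brief sketch; the names and the digitized pulse-height representation are assumptions for illustration only.

        def lower_channel_counts(pulse_heights, lower_level, upper_level):
            """Count pulses falling only within the lower channel's window."""
            return sum(1 for h in pulse_heights if lower_level <= h < upper_level)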

  15. Dependence of the source performance on plasma parameters at the BATMAN test facility

    NASA Astrophysics Data System (ADS)

    Wimmer, C.; Fantz, U.

    2015-04-01

    Investigating how the source performance (high jH-, low je) under optimum Cs conditions depends on the plasma parameters at the BATMAN (Bavarian Test MAchine for Negative hydrogen ions) test facility is desirable in order to identify key parameters for source operation as well as to deepen the physical understanding. The most relevant source physics takes place in the extended boundary layer, the plasma layer several cm thick in front of the plasma grid: the production of H-, its transport through the plasma and its extraction, inevitably accompanied by the co-extraction of electrons. Hence, the source performance is expected to be linked to the plasma parameters in the extended boundary layer. To characterize the electron and negative hydrogen ion fluxes in this layer, Cavity Ring-Down Spectroscopy and Langmuir probes have been applied to measure the H- density and to determine the plasma density, the plasma potential and the electron temperature, respectively. The plasma potential is of particular importance as it determines the sheath potential profile at the plasma grid: depending on the plasma grid bias relative to the plasma potential, the plasma sheath changes from electron-repelling to electron-attracting, strongly influencing the electron fraction of the bias current and thus the amount of co-extracted electrons. Dependencies of the source performance on the measured plasma parameters are presented for two source pressures (0.6 Pa and 0.45 Pa) in hydrogen operation. The higher pressure of 0.6 Pa is a standard operating point at BATMAN with external magnets, whereas the lower pressure of 0.45 Pa is closer to the ITER requirement (p ≤ 0.3 Pa).

  16. System and method for islanding detection and prevention in distributed generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhowmik, Shibashis; Mazhari, Iman; Parkhideh, Babak

    Various examples are directed to systems and methods for detecting an islanding condition at an inverter configured to couple a distributed generation system to an electrical grid network. A controller may determine a command frequency and a command frequency variation. The controller may determine that the command frequency variation indicates a potential islanding condition and send to the inverter an instruction to disconnect the distributed generation system from the electrical grid network. When the distributed generation system is disconnected from the electrical grid network, the controller may determine whether the grid network is valid.
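
    A minimal sketch of the decision logic described above, assuming the controller keeps a short window of command-frequency samples and treats excessive drift or spread as a potential islanding condition; the thresholds and names are illustrative assumptions, not the patented algorithm.

        def suggests_islanding(freq_samples_hz, nominal_hz=60.0, max_drift_hz=0.5, max_spread_hz=0.3):
            """Return True if the command-frequency behaviour indicates possible islanding."""
            drift = max(abs(f - nominal_hz) for f in freq_samples_hz)
            spread = max(freq_samples_hz) - min(freq_samples_hz)
            return drift > max_drift_hz or spread > max_spread_hz

        # Illustrative controller use (disconnect() stands in for the actual command):
        # if suggests_islanding(recent_frequencies):
        #     inverter.disconnect()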

  17. Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, G. M.

    2002-01-01

    We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is being created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique for ALE codes. The method is based on the Brackbill-Saltzman approach but is directly linked to the OREO detector and moves the grid automatically to minimize the error.
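
    The sketch below is a generic refinement indicator, not the OREO detector itself: it flags cells where a simple truncation-error proxy (the second difference of the solution) is large, which is the kind of information an AMR or ALE code would use to refine or move the grid.

        import numpy as np

        def flag_cells_for_refinement(u, rel_threshold=0.1):
            """Flag 1D cells whose estimated local truncation error is large."""
            u = np.asarray(u, float)
            err = np.zeros(len(u))
            err[1:-1] = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])   # ~ h^2 * |u''|
            return err > rel_threshold * (err.max() + 1e-30)     # boolean refinement flags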

  18. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
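
    To make the stencil idea concrete, here is a sketch of a standard fourth-order compact (Pade-type) scheme for the one-dimensional Helmholtz equation u'' + k^2 u = f on a uniform grid with Dirichlet boundaries; it is a textbook illustration of the class of schemes discussed, not the paper's own discretization.

        import numpy as np

        def helmholtz_compact_1d(k, f, h, u_left, u_right):
            """Solve u'' + k^2 u = f with the fourth-order compact three-point stencil.

            Interior equation:
                (u[j-1] - 2 u[j] + u[j+1]) / h**2 + k**2 * (u[j-1] + 10 u[j] + u[j+1]) / 12
                    = (f[j-1] + 10 f[j] + f[j+1]) / 12
            """
            f = np.asarray(f, float)
            n = len(f)
            A = np.zeros((n, n))
            b = np.zeros(n)
            A[0, 0] = A[-1, -1] = 1.0                    # Dirichlet boundary rows
            b[0], b[-1] = u_left, u_right
            off = 1.0 / h**2 + k**2 / 12.0               # coefficient of u[j-1] and u[j+1]
            diag = -2.0 / h**2 + 10.0 * k**2 / 12.0      # coefficient of u[j]
            for j in range(1, n - 1):
                A[j, j - 1], A[j, j], A[j, j + 1] = off, diag, off
                b[j] = (f[j - 1] + 10.0 * f[j] + f[j + 1]) / 12.0
            return np.linalg.solve(A, b)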

  19. Collar grids for intersecting geometric components within the Chimera overlapped grid scheme

    NASA Technical Reports Server (NTRS)

    Parks, Steven J.; Buning, Pieter G.; Chan, William M.; Steger, Joseph L.

    1991-01-01

    A method for overcoming problems with using the Chimera overset grid scheme in the region of intersecting geometry components is presented. A 'collar grid' resolves the intersection region and provides communication between the component grids. This approach is validated by comparing computed and experimental data for a flow about a wing/body configuration. Application of the collar grid scheme to the Orbiter fuselage and vertical tail intersection in a computation of the full Space Shuttle launch vehicle demonstrates its usefulness for simulation of flow about complex aerospace vehicles.

  20. Evaluation of Statistical Downscaling Skill at Reproducing Extreme Events

    NASA Astrophysics Data System (ADS)

    McGinnis, S. A.; Tye, M. R.; Nychka, D. W.; Mearns, L. O.

    2015-12-01

    Climate model outputs usually have much coarser spatial resolution than is needed by impacts models. Although higher resolution can be achieved using regional climate models for dynamical downscaling, further downscaling is often required. The final resolution gap is often closed with a combination of spatial interpolation and bias correction, which constitutes a form of statistical downscaling. We use this technique to downscale regional climate model data and evaluate its skill in reproducing extreme events. We downscale output from the North American Regional Climate Change Assessment Program (NARCCAP) dataset from its native 50-km spatial resolution to the 4-km resolution of the University of Idaho's METDATA gridded surface meteorological dataset, which derives from the PRISM and NLDAS-2 observational datasets. We operate on the major variables used in impacts analysis at a daily timescale: daily minimum and maximum temperature, precipitation, humidity, pressure, solar radiation, and winds. To interpolate the data, we use the patch recovery method from the Earth System Modeling Framework (ESMF) regridding package. We then bias correct the data using Kernel Density Distribution Mapping (KDDM), which has been shown to exhibit superior overall performance across multiple metrics. Finally, we evaluate the skill of this technique in reproducing extreme events by comparing raw and downscaled output with meteorological station data in different bioclimatic regions, using the skill scores defined by Perkins et al. in 2013 for the evaluation of AR4 climate models. We also investigate techniques for improving bias correction of values in the tails of the distributions. These techniques include binned kernel density estimation, logspline kernel density estimation, and transfer functions constructed by fitting the tails with a generalized Pareto distribution.
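
    As a simplified stand-in for the bias-correction step (empirical quantile mapping rather than the kernel density distribution mapping named above), the sketch below maps each model value to the observed value at the same quantile of the historical model distribution; the names and the behaviour outside the historical range are assumptions of this sketch.

        import numpy as np

        def quantile_map(model_hist, obs_hist, model_values):
            """Bias-correct model output by empirical quantile mapping against observations."""
            model_hist = np.sort(np.asarray(model_hist, float))
            obs_hist = np.sort(np.asarray(obs_hist, float))
            model_q = np.linspace(0.0, 1.0, len(model_hist))
            obs_q = np.linspace(0.0, 1.0, len(obs_hist))
            # Quantile of each value within the historical model distribution ...
            q = np.interp(model_values, model_hist, model_q)
            # ... mapped onto the observed distribution at the same quantile.
            return np.interp(q, obs_q, obs_hist)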
