Sample records for study unit grid

  1. Unit Planning Grids for Music: Grade 9-12 Advanced.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    This unit planning grid outlines the expectations of Delaware high school students for advanced music studies. The grid identifies nine standards for music: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently and with others, a varied repertoire of music; (3)…

  2. Unit Planning Grids for Music: Grade 9-12 Basic.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    This unit planning grid outlines the expectations of Delaware high school students for basic music studies. The grid identifies nine standards for music: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently and with others, a varied repertoire of music; (3)…

  3. Comparison of Standards and Technical Requirements of Grid-Connected Wind Power Plants in China and the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian

    The rapid deployment of wind power has made grid integration and operational issues focal points in industry discussions and research. Compliance with grid connection standards for wind power plants (WPPs) is crucial to ensuring the reliable and stable operation of the electric power grid. This report compares the standards for grid-connected WPPs in China to those in the United States to facilitate further improvements in wind power standards and enhance the development of wind power equipment. Detailed analyses of power quality, low-voltage ride-through capability, active power control, reactive power control, voltage control, and wind power forecasting are provided to enhance the understanding of grid codes in the two largest markets of wind power. This study compares WPP interconnection standards and technical requirements in China to those in the United States.

  4. Distribution Grid Integration Unit Cost Database | Solar Research | NREL

    Science.gov Websites

    NREL's Distribution Grid Integration Unit Cost Database contains unit cost information for different components that may be associated with PV. It includes information from the California utility unit cost guides on traditional…

  5. 78 FR 11954 - Revised Pricing Grid for Gold and Platinum Products

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-20

    ... DEPARTMENT OF THE TREASURY United States Mint Revised Pricing Grid for Gold and Platinum Products AGENCY: United States Mint, Department of the Treasury. ACTION: Notice. SUMMARY: The United States Mint is announcing a revised pricing grid for 2013 gold and platinum products. Please see the grid...

  6. National Offshore Wind Energy Grid Interconnection Study Full Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, John P.; Liu, Shu; Ibanez, Eduardo

    2014-07-30

    The National Offshore Wind Energy Grid Interconnection Study (NOWEGIS) considers the availability and potential impacts of interconnecting large amounts of offshore wind energy into the transmission system of the lower 48 contiguous United States.

  7. Matching soil grid unit resolutions with polygon unit scales for DNDC modelling of regional SOC pool

    NASA Astrophysics Data System (ADS)

    Zhang, H. D.; Yu, D. S.; Ni, Y. L.; Zhang, L. M.; Shi, X. Z.

    2015-03-01

    Matching soil grid unit resolution with polygon unit map scale is important for minimizing the uncertainty of regional soil organic carbon (SOC) pool simulation, because both strongly influence that uncertainty. A series of soil grid units at varying cell sizes was derived from soil polygon units at six map scales, 1:50 000 (C5), 1:200 000 (D2), 1:500 000 (P5), 1:1 000 000 (N1), 1:4 000 000 (N4) and 1:14 000 000 (N14), in the Tai Lake region of China. Both formats of soil units were used for regional SOC pool simulation with the process-based DeNitrification-DeComposition (DNDC) model, which was run over the period 1982 to 2000 at each of the six map scales. Four indices, namely soil type number (STN), area (AREA), average SOC density (ASOCD) and total SOC stocks (SOCS) of surface paddy soils simulated with the DNDC, were attributed from the soil polygon and grid units. Relative to the four index values (IV) from the parent polygon units, the variation of an index value (VIV, %) from the grid units was used to assess dataset accuracy and redundancy, which reflects uncertainty in the simulation of SOC. Optimal soil grid unit resolutions, matching the soil polygon unit map scales, were generated and suggested for DNDC simulation of the regional SOC pool. At the optimal raster resolution, the soil grid unit dataset holds the same accuracy as its parent polygon unit dataset without redundancy, when VIV < 1% for all four indices is taken as the assessment criterion. A quadratic regression model y = -8.0 × 10⁻⁶x² + 0.228x + 0.211 (R² = 0.9994, p < 0.05) was obtained, describing the relationship between optimal soil grid unit resolution (y, km) and soil polygon unit map scale (1:x). This knowledge may serve grid partitioning of regions in investigations and simulations of SOC pool dynamics at a given map scale.
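    The fitted quadratic can be evaluated directly to pick a raster resolution for a given polygon map scale. A minimal sketch follows; the coefficients are taken from the abstract, the function name is illustrative, and the abstract does not spell out how x is scaled, so treating the argument as the raw scale denominator is an assumption that should be checked against the original paper.

      def optimal_grid_resolution_km(x: float) -> float:
          """Evaluate the quadratic fit reported in the abstract,
          y = -8.0e-6 * x**2 + 0.228 * x + 0.211,
          where 1:x is the soil polygon unit map scale and y is the optimal
          soil grid unit resolution in km. Illustrative only: the scaling of
          x (raw denominator vs. a rescaled value) follows the original paper."""
          return -8.0e-6 * x**2 + 0.228 * x + 0.211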

  8. Enabling Efficient, Responsive, and Resilient Buildings: Collaboration Between the United States and India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basu, Chandrayee; Ghatikar, Girish

    The United States and India have among the largest economies in the world, and they continue to work together to address current and future challenges in reliable electricity supply. The acceleration to efficient, grid-responsive, resilient buildings represents a key energy security objective for federal and state agencies in both countries. The weaknesses in the Indian grid system were manifest in 2012, in the country’s worst blackout, which jeopardized the lives of half of India’s 1.2 billion people. While both countries are investing significantly in power sector reform, India, by virtue of its colossal growth rate in commercial energy intensity and commercial floor space, is better placed than the United States to integrate and test state-of-the-art Smart Grid technologies in its future grid-responsive commercial buildings. This paper presents a roadmap of technical collaboration between the research organizations and public-private stakeholders in both countries to accelerate building-to-grid integration through pilot studies in India.

  9. NREL and iUnit Open the Door to Grid-Integrated, Multifamily Construction

    Science.gov Websites

    News, May 11, 2017: NREL and iUnit open the door to grid-integrated, multifamily construction. The grid-integrated apartment was created by Denver developer iUnit, and open house participants were able to tour the unit in person.

  10. Research and application of thermal power unit’s load dynamic adjustment based on extraction steam

    NASA Astrophysics Data System (ADS)

    Li, Jun; Li, Huicong; Li, Weiwei

    2018-02-01

    The rapid development of heat and power generation in large power plants has imposed tremendous constraints on the load adjustment of power grids and power plants. By introducing the thermodynamic system of a thermal power unit, the relationship between extraction steam and the unit's load is analyzed and calculated. The practical application results show that the power capability of the unit is affected by extraction, which is not conducive to adjusting the grid frequency. By monitoring the load adjustment capacity of thermal power units, especially combined heat and power generating units, the upper and lower limits of the unit load can be dynamically adjusted by the operator on the grid side. The grid regulation and control departments can then effectively control the adjustable load intervals of the operating units and provide a reliable basis for the cooperative action of the power grid and power plants, ensuring the safety and stability of the power grid.
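    As a rough illustration of the dynamic limit adjustment described above, the sketch below assumes the unit's available maximum electrical output falls approximately linearly with extraction steam flow; the derating coefficient, names and units are illustrative placeholders, not values from the paper.

      def adjusted_load_limits(p_max_mw, p_min_mw, extraction_tph,
                               derate_mw_per_tph=0.2):
          """Shrink the dispatchable band of a combined heat and power unit as
          heating extraction rises. Illustrative only: a real implementation
          would use the unit's thermodynamic model rather than a fixed linear
          derating coefficient (MW lost per t/h of extraction steam)."""
          derate = derate_mw_per_tph * extraction_tph
          upper = max(p_max_mw - derate, p_min_mw)
          return p_min_mw, upper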

  11. Smart Grid Legislative and Regulatory Policies and Case Studies

    EIA Publications

    2011-01-01

    In recent years, a number of U.S. states have adopted or are considering smart grid related laws, regulations, and voluntary or mandatory requirements. At the same time, the number of smart grid pilot projects has been increasing rapidly. The Energy Information Administration (EIA) commissioned SAIC to research the development of smart grid in the United States and abroad. The research produced several documents that will help guide EIA as it considers how best to track smart grid developments.

  12. Study on optimal configuration of the grid-connected wind-solar-battery hybrid power system

    NASA Astrophysics Data System (ADS)

    Ma, Gang; Xu, Guchao; Ju, Rong; Wu, Tiantian

    2017-08-01

    The capacity allocation of each energy unit in a grid-connected wind-solar-battery hybrid power system is a significant part of system design. In this paper, taking power grid dispatching into account, the research priorities are as follows: (1) we establish the mathematical models of each energy unit in the hybrid power system; (2) based on dispatching of the power grid, energy surplus rate, system energy volatility and total cost, we establish an evaluation system for the wind-solar-battery power system and use the numbers of the different devices as constraint conditions; (3) based on an improved genetic algorithm, we put forward a multi-objective optimisation algorithm to solve the optimal configuration problem of the hybrid power system, so as to achieve high efficiency and economy for the grid-connected hybrid power system. The simulation results show that the grid-connected wind-solar-battery hybrid power system has a higher comprehensive performance and that the method of optimal configuration in this paper is useful and reasonable.
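    As a toy illustration of configuration search of this kind, the sketch below uses a very small mutation-only genetic algorithm to pick device counts; it collapses the multiple objectives into a single weighted penalty rather than reproducing the paper's multi-objective formulation, and every cost, yield and weight in it is a made-up placeholder.

      import random

      UNIT_COST = {"wind": 1200.0, "pv": 800.0, "batt": 300.0}   # cost per unit (made up)
      UNIT_ENERGY = {"wind": 2.0, "pv": 1.2}                     # avg MWh/day per unit (made up)
      DEMAND_MWH = 50.0                                          # daily demand to cover (made up)

      def fitness(cfg):
          """Lower is better: total cost plus penalties for energy shortfall
          and for surplus that exceeds the battery count (toy surrogate for
          energy surplus rate and volatility)."""
          n_wind, n_pv, n_batt = cfg
          cost = (n_wind * UNIT_COST["wind"] + n_pv * UNIT_COST["pv"]
                  + n_batt * UNIT_COST["batt"])
          energy = n_wind * UNIT_ENERGY["wind"] + n_pv * UNIT_ENERGY["pv"]
          surplus = energy - DEMAND_MWH
          penalty = 1e4 * max(-surplus, 0.0) + 50.0 * max(surplus - n_batt, 0.0)
          return cost + penalty

      def mutate(cfg):
          return tuple(max(0, g + random.choice((-1, 0, 1))) for g in cfg)

      def evolve(pop_size=30, generations=200):
          pop = [(random.randint(0, 40), random.randint(0, 60), random.randint(0, 40))
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness)
              parents = pop[: pop_size // 2]
              pop = parents + [mutate(random.choice(parents)) for _ in parents]
          return min(pop, key=fitness)

      if __name__ == "__main__":
          best = evolve()
          print("best (wind, pv, batt):", best, "fitness:", round(fitness(best), 1))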

  13. [Appropriateness of direct admissions to acute care geriatric unit for nursing home patients: an adaptation of the AEPf GRID].

    PubMed

    Abdoulhadi, Dalia; Chevalet, Pascal; Moret, Leila; Fix, Marie-Hélène; Gégu, Marine; Jaulin, Philippe; Berrut, Gilles; de Decker, Laure

    2015-03-01

    The patient population staying in nursing homes is increasingly vulnerable and dependent and should benefit from direct access to an acute care geriatric unit. Nevertheless, the easy access by a simple phone call from the general practitioner to the geriatrician, as well as the lack of orientation of these patients by emergency units, might lead to inappropriate admissions. This work studied the appropriateness of the direct admissions of 40 patients living in nursing homes to an acute care geriatric unit. Based on the AEPf assessment grid, 82.5% of these admissions were considered appropriate (52.5%) or justified (30%, based on an expert panel decision), and 17.5% were inappropriate. In conclusion, the process of direct admission does not seem to increase the rate of inappropriate admissions. Some actions could decrease this rate: implementation of geriatric or psychogeriatric mobile teams intervening in nursing homes, a better and more adapted use of ambulatory structures, and better information for general practitioners. In order to reduce the need for intervention by the panel of experts, an adaptation of the AEPf assessment grid to these geriatric patients has been proposed. The "AEPg" assessment grid should benefit from a validation study.

  14. Critical Infrastructure Protection: EMP Impacts on the U.S. Electric Grid

    NASA Astrophysics Data System (ADS)

    Boston, Edwin J., Jr.

    The purpose of this research is to identify the vulnerabilities of the United States electric grid infrastructure to electromagnetic pulse attacks and the cyber-based impacts of those vulnerabilities on the electric grid. Additionally, the research identifies multiple defensive strategies designed to harden the electric grid against electromagnetic pulse attack, including prevention, mitigation and recovery postures. Research results confirm the importance of the electric grid to the United States critical infrastructure system and that an electromagnetic pulse attack against the electric grid could result in electric grid degradation, damage to critical infrastructure(s), and the potential for societal collapse. The conclusions of this research indicate that while an electromagnetic pulse attack against the United States electric grid could have catastrophic impacts on American society, there are currently many defensive strategies under consideration designed to prevent, mitigate, and/or recover from an electromagnetic pulse attack. However, additional research is essential to further identify future target hardening opportunities, efficient implementation strategies, and funding resources.

  15. Comparative Analysis and Considerations for PV Interconnection Standards in the United States and China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian

    The main objectives of this report are to evaluate China's photovoltaic (PV) interconnection standards and the U.S. counterparts and to propose recommendations for future revisions to these standards. This report references the 2013 report Comparative Study of Standards for Grid-Connected PV System in China, the U.S. and European Countries, which compares U.S., European, and China's PV grid interconnection standards; reviews various metrics for the characterization of distribution network with PV; and suggests modifications to China's PV interconnection standards and requirements. The recommendations are accompanied by assessments of four high-penetration PV grid interconnection cases in the United States to illustrate solutions implemented to resolve issues encountered at different sites. PV penetration in China and in the United States has significantly increased during the past several years, presenting comparable challenges depending on the conditions of the grid at the point of interconnection; solutions are generally unique to each interconnected PV installation or PV plant.

  16. Fuel Cell Backup Power System for Grid Service and Micro-Grid in Telecommunication Applications: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Zhiwen; Eichman, Joshua D; Kurtz, Jennifer M

    This paper presents the feasibility and economics of using fuel cell backup power systems in telecommunication cell towers to provide grid services (e.g., ancillary services, demand response). The fuel cells are able to provide power for the cell tower during emergency conditions. This study evaluates the strategic integration of clean, efficient, and reliable fuel cell systems with the grid for improved economic benefits. The backup systems have potential as enhanced capability through information exchanges with the power grid to add value as grid services that depend on location and time. The economic analysis has been focused on the potential revenue for distributed telecommunications fuel cell backup units to provide value-added power supply. This paper shows case studies on current fuel cell backup power locations and regional grid service programs. The grid service benefits and system configurations for different operation modes provide opportunities for expanding backup fuel cell applications responsive to grid needs.

  17. Groundwater-quality data for the Sierra Nevada study unit, 2008: Results from the California GAMA program

    USGS Publications Warehouse

    Shelton, Jennifer L.; Fram, Miranda S.; Munday, Cathy M.; Belitz, Kenneth

    2010-01-01

    Groundwater quality in the approximately 25,500-square-mile Sierra Nevada study unit was investigated in June through October 2008, as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Sierra Nevada study was designed to provide statistically robust assessments of untreated groundwater quality within the primary aquifer systems in the study unit, and to facilitate statistically consistent comparisons of groundwater quality throughout California. The primary aquifer systems (hereinafter, primary aquifers) are defined by the depth of the screened or open intervals of the wells listed in the California Department of Public Health (CDPH) database of wells used for public and community drinking-water supplies. The quality of groundwater in shallower or deeper water-bearing zones may differ from that in the primary aquifers; shallow groundwater may be more vulnerable to contamination from the surface. In the Sierra Nevada study unit, groundwater samples were collected from 84 wells (and springs) in Lassen, Plumas, Butte, Sierra, Yuba, Nevada, Placer, El Dorado, Amador, Alpine, Calaveras, Tuolumne, Madera, Mariposa, Fresno, Inyo, Tulare, and Kern Counties. The wells were selected on two overlapping networks by using a spatially-distributed, randomized, grid-based approach. The primary grid-well network consisted of 30 wells, one well per grid cell in the study unit, and was designed to provide statistical representation of groundwater quality throughout the entire study unit. The lithologic grid-well network is a secondary grid that consisted of the wells in the primary grid-well network plus 53 additional wells and was designed to provide statistical representation of groundwater quality in each of the four major lithologic units in the Sierra Nevada study unit: granitic, metamorphic, sedimentary, and volcanic rocks. One natural spring that is not used for drinking water was sampled for comparison with a nearby primary grid well in the same cell. Groundwater samples were analyzed for organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (N-nitrosodimethylamine [NDMA] and perchlorate), naturally occurring inorganic constituents (nutrients, major ions, total dissolved solids, and trace elements), and radioactive constituents (radium isotopes, radon-222, gross alpha and gross beta particle activities, and uranium isotopes). Naturally occurring isotopes and geochemical tracers (stable isotopes of hydrogen and oxygen in water, stable isotopes of carbon, carbon-14, strontium isotopes, and tritium), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. Three types of quality-control samples (blanks, replicates, and samples for matrix spikes) each were collected at approximately 10 percent of the wells sampled for each analysis, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection, handling, and analytical procedures was not a significant source of bias in the data for the groundwater samples. 
Differences between replicate samples were within acceptable ranges, with few exceptions. Matrix-spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, groundwater typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory benchmarks apply to finished drinking water that is served to the consumer, not to untreated groundwater.
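    As a rough illustration of the spatially distributed, randomized, grid-based selection approach mentioned above, the sketch below picks one candidate well at random from each occupied grid cell; the cell size, coordinate fields and eligibility handling are illustrative and do not reproduce the GAMA protocol documented in the USGS reports.

      import random
      from collections import defaultdict

      def select_grid_wells(wells, cell_km=25.0, seed=0):
          """Pick one candidate well at random from each occupied grid cell.
          `wells` is a list of dicts with 'id', 'x_km' and 'y_km' (projected
          coordinates). Illustrative only; the actual cell geometry and
          well-eligibility rules are described in the GAMA reports."""
          rng = random.Random(seed)
          cells = defaultdict(list)
          for w in wells:
              key = (int(w["x_km"] // cell_km), int(w["y_km"] // cell_km))
              cells[key].append(w)
          return [rng.choice(candidates) for candidates in cells.values()]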

  18. Research on wind power grid-connected operation and dispatching strategies of Liaoning power grid

    NASA Astrophysics Data System (ADS)

    Han, Qiu; Qu, Zhi; Zhou, Zhi; He, Xiaoyang; Li, Tie; Jin, Xiaoming; Li, Jinze; Ling, Zhaowei

    2018-02-01

    As a kind of clean energy, wind power has gained rapid development in recent years. Liaoning Province has abundant wind resources, and its total installed wind power capacity is at the forefront. With large-scale grid-connected operation of wind power, the contradiction between wind power utilization and peak load regulation of the power grid has become more prominent. To address this, starting from the power structure and installed capacity of the Liaoning power grid, the distribution and spatiotemporal output characteristics of wind farms, the prediction accuracy, and the curtailment and off-grid situation of wind power are analyzed. Based on a deep analysis of the seasonal characteristics of power network load, the composition and distribution of the main load are presented. Aiming at the problem of balancing wind power acceptance with power grid adjustment, dispatching strategies are given, including unit maintenance scheduling, spinning reserve, and energy storage settings, based on analysis of the operating characteristics and response times of thermal power units and hydroelectric units. These strategies can meet the demand for wind power acceptance and provide a solution to improve the level of power grid dispatching.

  19. A Qualitative Meta-Analysis of the Diffusion of Mandated and Subsidized Technology: United States Energy Security and Independence

    ERIC Educational Resources Information Center

    Noah, Philip D., Jr.

    2013-01-01

    The purpose of this research project was to explore what the core factors are that play a role in the development of the smart-grid. This research study examined The Energy Independence and Security Act (EISA) of 2007 as it pertains to the smart-grid, the economic and security effects of the smart grid, and key factors for its success. The…

  20. Regional photochemical air quality modeling in the Mexico-US border area

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendoza, A.; Russell, A.G.; Mejia, G.M.

    1998-12-31

    The Mexico-United States border area has become an increasingly important region due to its commercial, industrial and urban growth. As a result, environmental concerns have risen. Treaties like the North American Free Trade Agreement (NAFTA) have further motivated the development of environmental impact assessment in the area. Of particular concern are air quality and how the activities on both sides of the border contribute to its degradation. This paper presents results of applying a three-dimensional photochemical airshed model to study air pollution dynamics along the Mexico-United States border. In addition, studies were conducted to assess how size resolution impacts the model performance. The model performed within acceptable statistical limits using 12.5 x 12.5 km² grid cells, and the benefits of using finer grids were limited. Results were further used to assess the influence of grid-cell size on the modeling of control strategies, where coarser grids lead to significant loss of information.

  1. Unit Planning Grids for Music--Grade 7.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    These unit planning grids for grade 7 music education in Delaware public schools outline nine standards for students to attain in music. Standards cited in the grids are: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently, and with others, a varied repertoire…

  2. Unit Planning Grids for Music--Grade 5.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    These unit planning grids for grade 5 music education in Delaware public schools outline nine standards for students to attain in music. Standards cited in the grids are: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently, and with others, a varied repertoire…

  3. Unit Planning Grids for Music--Grade 3.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    These unit planning grids for grade 3 music education in Delaware public schools outline nine standards for students to attain in music. Standards cited in the grids are: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently, and with others, a varied repertoire…

  4. Unit Planning Grids for Music--Grade 4.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    These unit planning grids for grade 4 music education in Delaware public schools outline nine standards for students to attain in music. Standards cited in the grids are: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently, and with others, a varied repertoire…

  5. Unit Planning Grids for Music--Grade 6.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    These unit planning grids for grade 6 music education in Delaware public schools outline nine standards for students to attain in music. Standards cited in the grids are: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently, and with others, a varied repertoire…

  6. Unit Planning Grids for Music--Grade 8.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    These unit planning grids for grade 8 music education in Delaware public schools outline nine standards for students to attain in music. Standards cited in the grids are: (1) students will sing, independently and with others, a varied repertoire of music; (2) students will perform on instruments, independently, and with others, a varied repertoire…

  7. Grid impacts of wind power: a summary of recent studies in the United States

    NASA Astrophysics Data System (ADS)

    Parsons, Brian; Milligan, Michael; Zavadil, Bob; Brooks, Daniel; Kirby, Brendan; Dragoon, Ken; Caldwell, Jim

    2004-04-01

    Several detailed technical investigations of grid ancillary service impacts of wind power plants in the United States have recently been performed. These studies were applied to Xcel Energy (in Minnesota) and PacifiCorp and the Bonneville Power Administration (both in the northwestern United States). Although the approaches vary, three utility time frames appear to be most at issue: regulation, load following and unit commitment. This article describes and compares the analytic frameworks from recent analysis and discusses the implications and cost estimates of wind integration. The findings of these studies indicate that relatively large-scale wind generation will have an impact on power system operation and costs, but these impacts and costs are relatively low at penetration rates that are expected over the next several years. Published in 2004 by John Wiley & Sons, Ltd.

  8. Groundwater-quality data in the Santa Barbara study unit, 2011: results from the California GAMA Program

    USGS Publications Warehouse

    Davis, Tracy A.; Kulongoski, Justin T.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 48-square-mile Santa Barbara study unit was investigated by the U.S. Geological Survey (USGS) from January to February 2011, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The Santa Barbara study unit was the thirty-fourth study unit to be sampled as part of the GAMA-PBP. The GAMA Santa Barbara study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as those parts of the aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the Santa Barbara study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the Santa Barbara study unit, located in Santa Barbara and Ventura Counties, groundwater samples were collected from 24 wells. Eighteen of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and six wells were selected to aid in evaluation of water-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds); constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]); naturally occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and arsenic, chromium, and iron species); and radioactive constituents (radon-222 and gross alpha and gross beta radioactivity). Naturally occurring isotopes (stable isotopes of hydrogen and oxygen in water, stable isotopes of inorganic carbon and boron dissolved in water, isotope ratios of dissolved strontium, tritium activities, and carbon-14 abundances) and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 281 constituents and water-quality indicators were measured. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 12 percent of the wells in the Santa Barbara study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 82 percent of the compounds.
This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 18 grid wells in the Santa Barbara study unit were detected at concentrations less than drinking-water benchmarks. Of the 220 organic and special-interest constituents sampled for at the 18 grid wells, 13 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and non-regulatory health-based benchmarks. In total, VOCs were detected in 61 percent of the 18 grid wells sampled, pesticides and pesticide degradates were detected in 11 percent, and perchlorate was detected in 67 percent. Polar pesticides and their degradates, pharmaceutical compounds, and NDMA were not detected in any of the grid wells sampled in the Santa Barbara study unit. Eighteen grid wells were sampled for trace elements, major and minor ions, nutrients, and radioactive constituents; most detected concentrations were less than health-based benchmarks. Exceptions are one detection of boron greater than the CDPH notification level (NL-CA) of 1,000 micrograms per liter (μg/L) and one detection of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L). Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in three grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in seven grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in four grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in eight grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 17 grid wells, and concentrations in six of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  9. A water balance based, spatiotemporal evaluation of terrestrial evapotranspiration products across the contiguous United States

    USDA-ARS?s Scientific Manuscript database

    Accurate gridded estimates of evapotranspiration (ET) are essential to the analysis of terrestrial water budgets. In this study, ET estimates from three gridded energy-balance based products (ETEB) with independent model formations and data forcings are evaluated for their ability to capture long te...

  10. Construction method of pre assembled unit of bolt sphere grid

    NASA Astrophysics Data System (ADS)

    Hu, L. W.; Guo, F. L.; Wang, J. L.; Bu, F. M.

    2018-03-01

    The traditional construction of bolt sphere grids has many disadvantages, such as high cost, a large amount of work at high altitude, and a long construction period. To make up for these shortcomings, this paper explores a new and practical construction method: local scaffolding is set up, the starting frame of the bolt sphere grid is installed on the local scaffolding, the pre-assembled units of the bolt sphere grid are assembled on the ground, and small hoisting equipment is used to lift the pre-assembled units to height for installation. Compared with the traditional installation method, this construction method is highly practical and economical, and has achieved good social and economic benefits.

  11. Uncertainty in gridded CO2 emissions estimates

    DOE PAGES

    Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...

    2016-05-19

    We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent with uncertainty increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
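    As a rough illustration of how per-cell uncertainty components of this kind might be combined, the sketch below sums independent relative uncertainties in quadrature; the component names and magnitudes in the example are made up and are not the paper's estimates.

      import math

      def cell_uncertainty_pct(components_pct):
          """Combine independent relative (1-sigma, percent) uncertainty
          components for one grid cell in quadrature. Illustrative only;
          the paper develops its own component estimates for national
          totals, point sources, non-point sources and the proxy."""
          return math.sqrt(sum(c ** 2 for c in components_pct))

      # example with made-up component magnitudes (%): the proxy term dominates
      print(round(cell_uncertainty_pct([5.0, 30.0, 120.0, 80.0]), 1))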

  12. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.

    Purpose: Medical linear accelerator mounted cone beam CT (CBCT) scanners provide useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASG) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing grids with higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction, which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t_cup for nonuniformity and contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. Results: The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved compared to no corrections on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding additional software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid was used. Conclusions: The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.

  13. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid.

    PubMed

    Stankovic, Uros; van Herk, Marcel; Ploeger, Lennert S; Sonke, Jan-Jakob

    2014-06-01

    Medical linear accelerator mounted cone beam CT (CBCT) scanners provide useful soft tissue contrast for purposes of image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on soft tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASG) are used in the field of diagnostic radiography to mitigate the scatter. They usually do increase the contrast of the scan, but simultaneously increase the noise. Therefore, and considering other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing for a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing grids with higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction, which was adapted for the different acquisition scenarios. Parameters used in the phantom study were t(cup) for nonuniformity and contrast-to-noise ratio (CNR) for soft tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated soft tissue visibility and uniformity of scans with and without the grid. The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved compared to no corrections on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding additional software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both soft tissue visibility and nonuniformity of scans when the grid was used. The evaluated fiber-interspaced grid improved the image quality of the CBCT system for a broad range of imaging conditions. Clinical scans show significant improvement in soft tissue visibility and uniformity without the need to increase the imaging dose.
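    As a rough illustration of an angle-dependent gain correction of this kind, the sketch below bins flood-field (air) frames by gantry angle, builds per-angle gain maps, and divides each raw projection by the map of its nearest bin; the bin size, normalisation and interpolation details of the published algorithm are not reproduced here.

      import numpy as np

      def build_angle_gain_maps(flood_frames, angles_deg, n_bins=360):
          """Average flood-field (air) frames into per-gantry-angle gain maps.
          flood_frames: array of shape (N, H, W); angles_deg: length-N angles.
          Empty bins fall back to a flat gain of 1. Illustrative only."""
          flood_frames = np.asarray(flood_frames)
          bins = (np.asarray(angles_deg) % 360 // (360 / n_bins)).astype(int)
          maps = np.ones((n_bins,) + flood_frames.shape[1:])
          for b in range(n_bins):
              sel = flood_frames[bins == b]
              if len(sel):
                  maps[b] = sel.mean(axis=0)
          return maps

      def gain_correct(projection, angle_deg, gain_maps):
          """Divide a raw projection by the gain map of its nearest angle bin,
          suppressing angle-dependent gridline (ring) artifacts."""
          n_bins = gain_maps.shape[0]
          b = int(angle_deg % 360 // (360 / n_bins)) % n_bins
          return projection / np.maximum(gain_maps[b], 1e-6)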

  14. Controllable Grid Interface Test System | Energy Systems Integration Facility | NREL

    Science.gov Websites

    NREL's controllable grid interface (CGI) test system can reduce certification testing time and costs. The controllable grid interface is the first test facility in the United States that has fault simulation capabilities.

  15. Comparative Study of Standards for Grid-Connected Wind Power Plant in China and the U.S.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Wenzhong; Tian, Tian; Muljadi, Eduard

    2015-10-06

    The rapid deployment of wind power has made grid integration and operational issues focal points in industry discussions and research. Compliance with grid connection standards for wind power plants (WPP) is crucial to ensuring the safe and stable operation of the electric power grid. The standards for grid-connected WPPs in China and the United States are compared in this paper to facilitate further improvements to the standards and enhance the development of wind power equipment. Detailed analyses in power quality, low-voltage ride-through capability, active power control, reactive power control, voltage control, and wind power forecasting are provided to enhance the understanding of grid codes in the two largest markets of wind power.

  16. Grid-Level Application of Electrical Energy Storage: Example Use Cases in the United States and China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yingchen; Gevorgian, Vahan; Wang, Caixia

    Electrical energy storage (EES) systems are expected to play an increasing role in helping the United States and China, the world's largest economies with the two largest power systems, meet the challenges of integrating more variable renewable resources and enhancing the reliability of power systems by improving the operating capabilities of the electric grid. EES systems are becoming integral components of a resilient and efficient grid through a diverse set of applications that include energy management, load shifting, frequency regulation, grid stabilization, and voltage support.

  17. Precision grid survey apparatus and method for the mapping of hidden ferromagnetic structures

    DOEpatents

    von Wimmerspeg, Udo

    2004-11-16

    The present invention is for a precision grid surveyor having a stationary unit and a roving unit. The stationary unit has a light source unit that emits a light beam and a rotator to project the light beam toward detectors on a roving unit. The roving unit moves over an area to be surveyed. Further the invention is for a method of mapping details of hidden underground iron pipelines, and more particularly the location of bell joints.

  18. Self-similar grid patterns in free-space shuffle-exchange networks

    NASA Astrophysics Data System (ADS)

    Haney, Michael W.

    1993-12-01

    Self-similar grid patterns are proposed as an alternative to rectangular grid arrays for the optoelectronic sources and detectors of smart pixels. For shuffle-based multistage interconnection networks, it is suggested that smart pixels should not be arrayed on a rectangular grid and that the smart pixel unit cell should be the kernel of a self-similar grid pattern.

  19. Optimal management of stationary lithium-ion battery system in electricity distribution grids

    NASA Astrophysics Data System (ADS)

    Purvins, Arturs; Sumner, Mark

    2013-11-01

    The present article proposes an optimal battery system management model in distribution grids for stationary applications. The main purpose of the management model is to maximise the utilisation of distributed renewable energy resources in distribution grids, preventing situations of reverse power flow in the distribution transformer. Secondly, battery management ensures efficient battery utilisation: charging at off-peak prices and discharging at peak prices when possible. This gives the battery system a shorter payback time. Management of the system requires predictions of residual distribution grid demand (i.e. demand minus renewable energy generation) and electricity price curves (e.g. for 24 h in advance). Results of a hypothetical study in Great Britain in 2020 show that the battery can contribute significantly to storing renewable energy surplus in distribution grids while being highly utilised. In a distribution grid with 25 households and an installed 8.9 kW wind turbine, a battery system with rated power of 8.9 kW and battery capacity of 100 kWh can store 7 MWh of 8 MWh wind energy surplus annually. Annual battery utilisation reaches 235 cycles in per unit values, where one unit is a full charge-depleting cycle depth of a new battery (80% of 100 kWh).
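    As a rough illustration of the rule-based dispatch the abstract describes, the sketch below first absorbs renewable surplus that would otherwise flow back through the distribution transformer, then charges at off-peak prices and discharges at peak prices; the thresholds, names and single-step interface are illustrative, loosely matching the 8.9 kW / 100 kWh case, and do not reproduce the paper's optimisation model.

      def dispatch_step(residual_kw, price, soc_kwh,
                        cap_kwh=100.0, p_rated_kw=8.9, dt_h=1.0,
                        off_peak=0.05, peak=0.15):
          """Return battery power for one time step (+ = charge, - = discharge).
          residual_kw is distribution demand minus local renewable generation;
          a negative residual is surplus that would otherwise reverse-flow
          through the transformer. All thresholds and sizes are illustrative."""
          headroom_kw = (cap_kwh - soc_kwh) / dt_h   # charging limit from free capacity
          available_kw = soc_kwh / dt_h              # discharging limit from stored energy
          if residual_kw < 0:                        # renewable surplus: store it
              return min(-residual_kw, p_rated_kw, headroom_kw)
          if price <= off_peak:                      # cheap energy: top the battery up
              return min(p_rated_kw, headroom_kw)
          if price >= peak:                          # expensive energy: serve local demand
              return -min(residual_kw, p_rated_kw, available_kw)
          return 0.0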

  20. Groundwater-quality data in the Cascade Range and Modoc Plateau study unit, 2010-Results from the California GAMA Program

    USGS Publications Warehouse

    Shelton, Jennifer L.; Fram, Miranda S.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 39,000-square-kilometer Cascade Range and Modoc Plateau (CAMP) study unit was investigated by the U.S. Geological Survey (USGS) from July through October 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CAMP study unit is the thirty-second study unit to be sampled as part of the GAMA PBP. The GAMA CAMP study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined as that part of the aquifer corresponding to the open or screened intervals of wells listed in the California Department of Public Health (CDPH) database for the CAMP study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifer system; shallow groundwater may be more vulnerable to surficial contamination. In the CAMP study unit, groundwater samples were collected from 90 wells and springs in 6 study areas (Sacramento Valley Eastside, Honey Lake Valley, Cascade Range and Modoc Plateau Low Use Basins, Shasta Valley and Mount Shasta Volcanic Area, Quaternary Volcanic Areas, and Tertiary Volcanic Areas) in Butte, Lassen, Modoc, Plumas, Shasta, Siskiyou, and Tehama Counties. Wells and springs were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). Groundwater samples were analyzed for field water-quality indicators, organic constituents, perchlorate, inorganic constituents, radioactive constituents, and microbial indicators. Naturally occurring isotopes and dissolved noble gases also were measured to provide a dataset that will be used to help interpret the sources and ages of the sampled groundwater in subsequent reports. In total, 221 constituents were investigated for this study. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at approximately 10 percent of the wells in the CAMP study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 90 percent of the compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is served to the consumer, not to untreated groundwater. 
However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All organic constituents and most inorganic constituents that were detected in groundwater samples from the 90 grid wells in the CAMP study unit were detected at concentrations less than drinking-water benchmarks. Of the 148 organic constituents analyzed, 27 were detected in groundwater samples; concentrations of all detected constituents were less than regulatory and nonregulatory health-based benchmarks, and all were less than 1/10 of benchmark levels. One or more organic constituents were detected in 52 percent of the grid wells in the CAMP study unit: VOCs were detected in 30 percent, and pesticides and pesticide degradates were detected in 31 percent. Trace elements, major ions, nutrients, and radioactive constituents were sampled for at 90 grid wells in the CAMP study unit, and most detected concentrations were less than health-based benchmarks. Exceptions include three detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (µg/L), two detections of boron greater than the CDPH notification level (NL-CA) of 1,000 µg/L, two detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 µg/L, two detections of vanadium greater than the CDPH notification level (NL-CA) of 50 µg/L, one detection of nitrate, as nitrogen, greater than the MCL-US of 10 milligrams per liter (mg/L), two detections of uranium greater than the MCL-US of 30 µg/L and the MCL-CA of 20 picocuries per liter (pCi/L), one detection of radon-222 greater than the proposed MCL-US of 4,000 pCi/L, and two detections of gross alpha particle activity greater than the MCL-US of 15 pCi/L. Results for inorganic constituents with non-regulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 µg/L were detected in four grid wells. Manganese concentrations greater than the SMCL-CA of 50 µg/L were detected in nine grid wells. Chloride and TDS were detected at concentrations greater than the upper SMCL-CA benchmarks of 500 mg/L and 1,000 mg/L, respectively, in one grid well. Microbial indicators (total coliform and Escherichia coli [E. coli]) were detected in 11 percent of the 83 grid wells sampled for these analyses in the CAMP study unit. The presence of total coliform was detected in nine grid wells, and the presence of E. coli was detected in one of these same grid wells.

  1. Development of a reference Phasor Measurement Unit (PMU) for the monitoring and control of grid stability and quality

    NASA Astrophysics Data System (ADS)

    Ndilimabaka, Hervé; Blanc, Isabelle

    2014-08-01

    This paper discusses the details of the development of a Phasor Measurement Unit with respect to the steady-state requirements of the IEEE C37.118-2005 synchrophasor standard for grid monitoring and control. This phasor measurement unit is intended to be used for field tests in the near future.
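    As a rough illustration of the core operation inside a PMU, the sketch below estimates a synchrophasor (RMS amplitude and phase) from one window of waveform samples with a single-bin DFT; the window length, scaling convention and names are illustrative, and a reference PMU compliant with IEEE C37.118 additionally needs filtering, GPS time tagging and compliance testing.

      import cmath
      import math

      def estimate_phasor(samples, fs_hz, f0_hz=50.0):
          """Single-bin DFT estimate of the fundamental component of one
          window of samples: returns (RMS amplitude, phase in radians).
          Illustrative only; assumes the window spans whole cycles of f0."""
          n = len(samples)
          acc = sum(x * cmath.exp(-2j * math.pi * f0_hz * k / fs_hz)
                    for k, x in enumerate(samples))
          phasor = 2.0 * acc / n                  # complex peak amplitude
          return abs(phasor) / math.sqrt(2), cmath.phase(phasor)

      # usage: one cycle of a 50 Hz, 230 V RMS sine sampled at 10 kHz
      fs = 10_000.0
      wave = [230 * math.sqrt(2) * math.sin(2 * math.pi * 50 * k / fs + 0.3)
              for k in range(200)]
      print(estimate_phasor(wave, fs))            # ~ (230.0, 0.3 - pi/2)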

  2. Selforganization of modular activity of grid cells

    PubMed Central

    Urdapilleta, Eugenio; Si, Bailu

    2017-01-01

    A unique topographical representation of space is found in the concerted activity of grid cells in the rodent medial entorhinal cortex. Many among the principal cells in this region exhibit a hexagonal firing pattern, in which each cell expresses its own set of place fields (spatial phases) at the vertices of a triangular grid, the spacing and orientation of which are typically shared with neighboring cells. Grid spacing, in particular, has been found to increase along the dorso‐ventral axis of the entorhinal cortex but in discrete steps, that is, with a modular structure. In this study, we show that such a modular activity may result from the self‐organization of interacting units, which individually would not show discrete but rather continuously varying grid spacing. Within our “adaptation” network model, the effect of a continuously varying time constant, which determines grid spacing in the isolated cell model, is modulated by recurrent collateral connections, which tend to produce a few subnetworks, akin to magnetic domains, each with its own grid spacing. In agreement with experimental evidence, the modular structure is tightly defined by grid spacing, but also involves grid orientation and distortion, due to interactions across modules. Thus, our study sheds light onto a possible mechanism, other than simply assuming separate networks a priori, underlying the formation of modular grid representations. PMID:28768062

  3. Auction-based distributed efficient economic operations of microgrid systems

    NASA Astrophysics Data System (ADS)

    Zou, Suli; Ma, Zhongjing; Liu, Xiangdong

    2014-12-01

    This paper studies the economic operations of the microgrid in a distributed way such that the operational schedule of each of the units, like generators, load units, storage units, etc., in a microgrid system, is implemented by autonomous agents. We apply and generalise the progressive second price (PSP) auction mechanism which was proposed by Lazar and Semret to efficiently allocate the divisible network resources. Considering the economic operation for the microgrid systems, the generators play as sellers to supply energy and the load units play as the buyers to consume energy, while a storage unit, like battery, super capacitor, etc., may transit between buyer and seller, such that it is a buyer when it charges and becomes a seller when it discharges. Furthermore in a connected mode, each individual unit competes against not only the other individual units in the microgrid but also the exogenous main grid possessing fixed electricity price and infinite trade capacity; that is to say, the auctioneer assigns the electricity among all individual units and the main grid with respect to the submitted bid strategies of all individual units in the microgrid in an economic way. Due to these distinct characteristics, the underlying auction games are distinct from those studied in the literature. We show that under mild conditions, the efficient economic operation strategy is a Nash equilibrium (NE) for the PSP auction games, and propose a distributed algorithm under which the system can converge to an NE. We also show that the performance of worst NE can be bounded with respect to the system parameters, say the energy trading price with the main grid, and based upon that, the implemented NE is unique and efficient under some conditions.
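
    The PSP mechanism itself is beyond the scope of a short example, but the efficient outcome it targets resembles a merit-order dispatch in which demand is served from the cheapest offers first and the main grid acts as a seller with a fixed price and effectively unlimited capacity. The sketch below implements only that simplified merit-order allocation (not the PSP auction or its bid dynamics); all unit names, prices, and quantities are illustrative.

```python
# Simplified merit-order allocation among microgrid sellers plus the main grid.
# This is a stand-in for the efficient dispatch the auction converges toward,
# not the progressive second price mechanism itself.
def allocate(demand_kwh, offers, grid_price):
    """offers: list of (name, price_per_kwh, capacity_kwh) for local sellers."""
    book = sorted(offers + [("main_grid", grid_price, float("inf"))], key=lambda o: o[1])
    remaining, schedule = demand_kwh, []
    for name, price, capacity in book:
        take = min(capacity, remaining)
        if take > 0:
            schedule.append((name, take, price))
            remaining -= take
        if remaining <= 0:
            break
    return schedule

sellers = [("battery", 0.12, 5.0), ("diesel_gen", 0.25, 20.0), ("pv", 0.05, 8.0)]
print(allocate(30.0, sellers, grid_price=0.18))
# PV (8 kWh) and the battery (5 kWh) are dispatched first; the remaining 17 kWh is
# bought from the main grid because its price undercuts the diesel offer.
```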

  4. Performance evaluation of cognitive radio in advanced metering infrastructure communication

    NASA Astrophysics Data System (ADS)

    Hiew, Yik-Kuan; Mohd Aripin, Norazizah; Din, Norashidah Md

    2016-03-01

    A smart grid is an intelligent electricity grid system. A reliable two-way communication system is required to transmit both critical and non-critical smart grid data, but it is difficult to find a large block of dedicated spectrum for smart grid communications; hence, cognitive radio based communication is applied. Cognitive radio allows smart grid users to access licensed spectrum opportunistically, subject to the constraint of not causing harmful interference to licensed users. In this paper, a cognitive radio based smart grid communication framework is proposed. The smart grid framework consists of the Home Area Network (HAN) and the Advanced Metering Infrastructure (AMI), where the AMI is made up of the Neighborhood Area Network (NAN) and the Wide Area Network (WAN). The authors report findings for AMI communication only. The AMI is the smart grid domain that comprises smart meters, a data aggregator unit, and the billing center. Meter data are collected by smart meters and transmitted to the data aggregator unit using a cognitive 802.11 technique; the data aggregator unit then relays the data to the billing center using cognitive WiMAX and TV white space. The performance of cognitive radio in AMI communication is investigated using Network Simulator 2. Simulation results show that cognitive radio improves the latency and throughput performance of AMI. Cognitive radio also improves the spectrum utilization efficiency of the WiMAX band from 5.92% to 9.24% and the duty cycle of the TV band from 6.6% to 10.77%.
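
    Opportunistic access of this kind rests on the secondary user sensing whether the licensed channel is free before transmitting. The paper does not specify the sensing method, so the sketch below uses plain energy detection as a common stand-in: compare the received signal energy against a threshold derived from the noise power and a target false-alarm probability. All signal parameters are illustrative.

```python
import numpy as np

def channel_is_idle(samples, noise_power, z_99th=2.326):
    """Energy-detection spectrum sensing with roughly a 1% false-alarm rate.

    The threshold follows the Gaussian approximation of the energy statistic for
    real-valued noise samples; z_99th is the 99th percentile of the standard normal.
    """
    n = len(samples)
    energy = np.sum(samples ** 2) / n
    threshold = noise_power * (1.0 + z_99th * np.sqrt(2.0 / n))
    return energy < threshold

rng = np.random.default_rng(0)
noise_only = rng.normal(0.0, 1.0, 1000)                          # idle licensed channel
busy = noise_only + np.sin(2 * np.pi * 0.1 * np.arange(1000))    # licensed signal present
print(channel_is_idle(noise_only, noise_power=1.0))   # expected: True (channel usable)
print(channel_is_idle(busy, noise_power=1.0))         # expected: False (defer to licensed user)
```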

  5. Groundwater-quality data in the Monterey–Salinas shallow aquifer study unit, 2013: Results from the California GAMA Program

    USGS Publications Warehouse

    Goldrath, Dara A.; Kulongoski, Justin T.; Davis, Tracy A.

    2016-09-01

    Groundwater quality in the 3,016-square-mile Monterey–Salinas Shallow Aquifer study unit was investigated by the U.S. Geological Survey (USGS) from October 2012 to May 2013 as part of the California State Water Resources Control Board Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project. The GAMA Monterey–Salinas Shallow Aquifer study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the shallow-aquifer systems in parts of Monterey and San Luis Obispo Counties and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The shallow-aquifer system in the Monterey–Salinas Shallow Aquifer study unit was defined as those parts of the aquifer system shallower than the perforated depth intervals of public-supply wells, which generally corresponds to the part of the aquifer system used by domestic wells. Groundwater quality in the shallow aquifers can differ from the quality in the deeper water-bearing zones; shallow groundwater can be more vulnerable to surficial contamination. Samples were collected from 170 sites that were selected by using a spatially distributed, randomized grid-based method. The study unit was divided into 4 study areas, each study area was divided into grid cells, and 1 well was sampled in each of the 100 grid cells (grid wells). The grid wells were domestic wells or wells with screen depths similar to those in nearby domestic wells. A greater spatial density of data was achieved in 2 of the study areas by dividing grid cells in those study areas into subcells, and in 70 subcells, samples were collected from exterior faucets at sites where there were domestic wells or wells with screen depths similar to those in nearby domestic wells (shallow-well tap sites). Field water-quality indicators (dissolved oxygen, water temperature, pH, and specific conductance) were measured, and samples for analysis of inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids, and alkalinity) were collected at all 170 sites. In addition to these constituents, the samples from grid wells were analyzed for organic constituents (volatile organic compounds, pesticides and pesticide degradates), constituents of special interest (perchlorate and N-nitrosodimethylamine, or NDMA), radioactive constituents (radon-222 and gross-alpha and gross-beta radioactivity), and geochemical and age-dating tracers (stable isotopes of carbon in dissolved inorganic carbon, carbon-14 abundances, stable isotopes of hydrogen and oxygen in water, and tritium activities). Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 11 percent of the wells in the Monterey–Salinas Shallow Aquifer study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. With the exception of trace elements, blanks rarely contained detectable concentrations of any constituent, indicating that contamination from sample-collection procedures was not a significant source of bias in the data for the groundwater samples. Low concentrations of some trace elements were detected in blanks; therefore, the data were re-censored at higher reporting levels. Replicate samples generally were within the limits of acceptable analytical reproducibility.
The median values of matrix-spike recoveries were within the acceptable range (70 to 130 percent) for the volatile organic compounds (VOCs) and N-nitrosodimethylamine (NDMA), but were only approximately 64 percent for pesticides and pesticide degradates. The sample-collection protocols used in this study were designed to obtain representative samples of groundwater. The quality of groundwater can differ from the quality of drinking water because water chemistry can change as a result of contact with plumbing systems or the atmosphere; because of treatment, disinfection, or blending with water from other sources; or some combination of these. Water quality in domestic wells is not regulated in California; however, to provide context for the water-quality data presented in this report, results were compared to benchmarks established for drinking-water quality. The primary comparison benchmarks were maximum contaminant levels established by the U.S. Environmental Protection Agency and the State of California (MCL-US and MCL-CA, respectively). Non-regulatory benchmarks were used for constituents without maximum contaminant levels (MCLs), including Health Based Screening Levels (HBSLs) developed by the USGS and State of California secondary maximum contaminant levels (SMCL-CA) and notification levels. Most constituents detected in samples from the Monterey–Salinas Shallow Aquifer study unit had concentrations less than their respective benchmarks. Of the 148 organic constituents analyzed in the 100 grid-well samples, 38 were detected, and all concentrations were less than the benchmarks. Volatile organic compounds were detected in 26 of the grid wells, and pesticides and pesticide degradates were detected in 28 grid wells. The special-interest constituent NDMA was detected above the HBSL in three samples, one of which also had a perchlorate concentration greater than the MCL-CA. Of the inorganic constituents, 6 were detected at concentrations above their respective MCL benchmarks in grid-well samples: arsenic (5 grid wells above the MCL of 10 micrograms per liter, μg/L), selenium (3 grid wells, MCL of 50 μg/L), uranium (4 grid wells, MCL of 30 μg/L), nitrate (16 grid wells, MCL of 10 milligrams per liter, mg/L), adjusted gross alpha particle activity (10 grid wells, MCL of 15 picocuries per liter, pCi/L), and gross beta particle activity (1 grid well, MCL of 50 pCi/L). An additional 4 inorganic constituents were detected at concentrations above their respective HBSL benchmarks in grid-well samples: boron (1 grid well above the HBSL of 6,000 μg/L), manganese (8 grid wells, HBSL of 300 μg/L), molybdenum (6 grid wells, HBSL of 40 μg/L), and strontium (6 grid wells, HBSL of 4,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health based SMCL benchmarks in grid-well samples: iron (9 grid wells above the SMCL of 300 μg/L), chloride (7 grid wells, SMCL of 500 mg/L), sulfate (14 grid wells, SMCL of 500 mg/L), and total dissolved solids (27 grid wells, SMCL of 1,000 mg/L). Of the inorganic constituents analyzed in the 70 shallow-well tap sites, 10 were detected at concentrations above the benchmarks. Of the inorganic constituents, 3 were detected at concentrations above their respective MCL benchmarks in shallow-well tap sites: arsenic (2 shallow-well tap sites above the MCL of 10 μg/L), uranium (2 shallow-well tap sites, MCL of 30 μg/L), and nitrate (24 shallow-well tap sites, MCL of 10 mg/L).
An additional 3 inorganic constituents were detected above their respective HBSL benchmarks in shallow-well tap sites: manganese (4 shallow-well tap sites above the HBSL of 300 μg/L), molybdenum (4 shallow-well tap sites, HBSL of 40 μg/L), and zinc (2 shallow-well tap sites, HBSL of 2,000 μg/L). Of the inorganic constituents, 4 were detected at concentrations above their non-health based SMCL benchmarks in shallow-well tap sites: iron (6 shallow-well tap sites above the SMCL of 300 μg/L), chloride (1 shallow-well tap site, SMCL of 500 mg/L), sulfate (9 shallow-well tap sites, SMCL of 500 mg/L), and total dissolved solids (15 shallow-well tap sites, SMCL of 1,000 mg/L).
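
    For readers working with these data releases, the benchmark comparisons above amount to counting, per constituent, the wells whose concentrations exceed a threshold. The sketch below shows that tally in Python; the well names, concentrations, and the small benchmark table are illustrative placeholders, not values from this study.

```python
# Hypothetical sketch of how benchmark exceedances like those summarized above could be
# tallied from a table of well concentrations. Well names and values are illustrative;
# the benchmarks shown match the MCL/SMCL values cited in the text.
benchmarks_ug_per_L = {"arsenic": 10, "uranium": 30, "iron": 300}

wells = {
    "grid_well_01": {"arsenic": 12.0, "uranium": 4.1, "iron": 20.0},
    "grid_well_02": {"arsenic": 2.3,  "uranium": 35.0, "iron": 450.0},
    "grid_well_03": {"arsenic": 0.8,  "uranium": 1.0,  "iron": 15.0},
}

exceedances = {c: sum(1 for w in wells.values() if w.get(c, 0.0) > limit)
               for c, limit in benchmarks_ug_per_L.items()}
print(exceedances)   # {'arsenic': 1, 'uranium': 1, 'iron': 1} for this toy table
```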

  6. A decision modeling for phasor measurement unit location selection in smart grid systems

    NASA Astrophysics Data System (ADS)

    Lee, Seung Yup

    As a key technology for enhancing the smart grid, the Phasor Measurement Unit (PMU) provides synchronized phasor measurements of voltages and currents across a wide-area electric power grid. Although its application offers various benefits, one of the critical issues in utilizing PMUs is the optimal selection of installation sites. The main aim of this research is to develop a decision support system that can be used in resource allocation tasks for smart grid system analysis. In an effort to suggest a robust decision model and standardize the decision modeling process, a harmonized modeling framework that considers the operational circumstances of components is proposed in connection with a deterministic approach utilizing integer programming. Based on the results obtained from the optimal PMU placement problem, the advantages and potential of the harmonized modeling process are assessed and discussed.
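
    The classical optimal PMU placement problem can be stated as a small covering integer program: place the fewest PMUs so that every bus is observable, where a bus is observed if it or one of its neighbors hosts a PMU. The sketch below solves that covering problem by exhaustive search on a hypothetical seven-bus network; real studies use standard test systems and an integer-programming solver rather than brute force.

```python
from itertools import combinations

# Toy optimal PMU placement: choose the fewest buses for PMUs so that every bus
# is observed (a bus is observed if it, or one of its neighbors, hosts a PMU).
# The small network below is hypothetical.
adjacency = {
    1: {2, 5}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3, 5, 7},
    5: {1, 2, 4, 6}, 6: {5}, 7: {4},
}

def observed(pmu_buses):
    covered = set()
    for b in pmu_buses:
        covered.add(b)
        covered.update(adjacency[b])
    return covered == set(adjacency)

def min_pmu_placement():
    buses = sorted(adjacency)
    for k in range(1, len(buses) + 1):          # try the smallest placements first
        for combo in combinations(buses, k):
            if observed(combo):
                return combo
    return tuple(buses)

print(min_pmu_placement())   # (4, 5) for this toy network
```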

  7. Integrating Renewable Generation into Grid Operations: Four International Experiences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weimar, Mark R.; Mylrea, Michael E.; Levin, Todd

    International experiences with power sector restructuring and the resultant impacts on bulk power grid operations and planning may provide insight into policy questions for the evolving United States power grid as resource mixes are changing in response to fuel prices, an aging generation fleet, and climate goals. Australia, Germany, Japan and the UK were selected to represent a range in the level and attributes of electricity industry liberalization in order to draw comparisons across a variety of regions in the United States such as California, ERCOT, the Southwest Power Pool and the Southeast Reliability Region. The study draws conclusions through a literature review of the four case study countries with regard to the changing resource mix and the electricity industry sector structure and their impact on grid operations and planning. This paper derives lessons learned and synthesizes implications for the United States based on answers to the above questions and the challenges faced by the four selected countries. Each country was examined to determine the challenges to its bulk power sector based on its changing resource mix, market structure, policies driving the changing resource mix, and policies driving restructuring. Each country’s approach to addressing those challenges was examined, as well as how each country’s market structure either exacerbated or mitigated the approaches to solving the challenges to its bulk power grid operations and planning. All countries’ policies encourage renewable energy generation. One significant finding included the low- to zero-marginal cost of intermittent renewables and its potential negative impact on long-term resource adequacy. No dominant solution has emerged, although a capacity market was introduced in the UK and is being contemplated in Japan. Germany has proposed the Energy Market 2.0 to encourage flexible generation investment. The grid operator in Australia proposed several approaches to maintaining synchronous generation. Interconnections to other regions provide added opportunities for balancing that would not be available otherwise and, at this point, have allowed for integration of renewables.

  8. Increasing the resilience and security of the United States' power infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    2015-08-01

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power infrastructure control and distribution paradigms by utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Understanding how these systems behave in real-world conditions will lead to new ways to make our power infrastructure more resilient and secure. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security, and the aging networks protecting them are becoming easier to attack.

  9. Groundwater-quality data in the Santa Cruz, San Gabriel, and Peninsular Ranges Hard Rock Aquifers study unit, 2011-2012: results from the California GAMA program

    USGS Publications Warehouse

    Davis, Tracy A.; Shelton, Jennifer L.

    2014-01-01

    Results for constituents with nonregulatory benchmarks set for aesthetic concerns showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in samples from 19 grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 27 grid wells. Chloride was detected at a concentration greater than the SMCL-CA upper benchmark of 500 mg/L in one grid well. TDS concentrations in three grid wells were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  10. Multidisciplinary Simulation Acceleration using Multiple Shared-Memory Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Kemal, Jonathan Yashar

    For purposes of optimizing and analyzing turbomachinery and other designs, the unsteady Favre-averaged flow-field differential equations for an ideal compressible gas can be solved in conjunction with the heat conduction equation. We solve all equations using the finite-volume multiple-grid numerical technique, with the dual time-step scheme used for unsteady simulations. Our numerical solver code targets CUDA-capable Graphical Processing Units (GPUs) produced by NVIDIA. Making use of MPI, our solver can run across networked compute nodes, where each MPI process can use either a GPU or a Central Processing Unit (CPU) core for primary solver calculations. We use NVIDIA Tesla C2050/C2070 GPUs based on the Fermi architecture, and compare our resulting performance against Intel Xeon X5690 CPUs. Solver routines converted to CUDA typically run about 10 times faster on a GPU for sufficiently dense computational grids. We used a conjugate cylinder computational grid and ran a turbulent steady flow simulation using 4 increasingly dense computational grids. Our densest computational grid is divided into 13 blocks, each containing 1033x1033 grid points, for a total of 13.87 million grid points or 1.07 million grid points per domain block. To obtain overall speedups, we compare the execution time of the solver's iteration loop, including all resource-intensive GPU-related memory copies. Comparing the performance of 8 GPUs to that of 8 CPUs, we obtain an overall speedup of about 6.0 when using our densest computational grid. This amounts to an 8-GPU simulation running about 39.5 times faster than a single-CPU simulation.

  11. Groundwater-quality data in the Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts study unit, 2008-2010--Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Wright, Michael T.; Beuttel, Brandon S.; Belitz, Kenneth

    2012-01-01

    Groundwater quality in the 12,103-square-mile Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts (CLUB) study unit was investigated by the U.S. Geological Survey (USGS) from December 2008 to March 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The CLUB study unit was the twenty-eighth study unit to be sampled as part of the GAMA-PBP. The GAMA CLUB study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer systems, and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer systems (hereinafter referred to as primary aquifers) are defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the CLUB study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from the quality of groundwater in the primary aquifers; shallow groundwater may be more vulnerable to surficial contamination. In the CLUB study unit, groundwater samples were collected from 52 wells in 3 study areas (Borrego Valley, Central Desert, and Low-Use Basins of the Mojave and Sonoran Deserts) in San Bernardino, Riverside, Kern, San Diego, and Imperial Counties. Forty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and three wells were selected to aid in evaluation of water-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally-occurring inorganic constituents (trace elements, nutrients, major and minor ions, silica, total dissolved solids [TDS], alkalinity, and species of inorganic chromium), and radioactive constituents (radon-222, radium isotopes, and gross alpha and gross beta radioactivity). Naturally-occurring isotopes (stable isotopes of hydrogen, oxygen, boron, and strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance) and dissolved noble gases also were measured to help identify the sources and ages of sampled groundwater. In total, 223 constituents and 12 water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and matrix spikes) were collected at up to 10 percent of the wells in the CLUB study unit, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples generally were within the limits of acceptable analytical reproducibility. Median matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 85 percent of the compounds. 
This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 49 grid wells were detected at concentrations less than drinking-water benchmarks. In addition, all detections of organic constituents from the CLUB study-unit grid-well samples were less than health-based benchmarks. In total, VOCs were detected in 17 of the 49 grid wells sampled (approximately 35 percent), pesticides and pesticide degradates were detected in 5 of the 47 grid wells sampled (approximately 11 percent), and perchlorate was detected in 41 of 49 grid wells sampled (approximately 84 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells, and radioactive constituents were sampled for at 23 grid wells; most detected concentrations were less than health-based benchmarks. Exceptions in the grid-well samples include seven detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L); four detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L; six detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L; two detections of uranium greater than the MCL-US of 30 μg/L; nine detections of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter (mg/L); one detection of nitrite plus nitrate (NO2-+NO3-), as nitrogen, greater than the MCL-US of 10 mg/L; and four detections of gross alpha radioactivity (72-hour count), and one detection of gross alpha radioactivity (30-day count), greater than the MCL-US of 15 picocuries per liter. Results for constituents with non-regulatory benchmarks set for aesthetic concerns showed that a manganese concentration greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 50 μg/L was detected in one grid well. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were detected in three grid wells, and one of these wells also had a concentration that was greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in six grid wells. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 20 grid wells, and concentrations in 2 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  12. Emissions & Generation Resource Integrated Database (eGRID) Questions and Answers

    EPA Pesticide Factsheets

    eGRID is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. eGRID is based on available plant-specific data for all U.S. electricity generating plants that report data.

  13. Method of constructing dished ion thruster grids to provide hole array spacing compensation

    NASA Technical Reports Server (NTRS)

    Banks, B. A. (Inventor)

    1976-01-01

    The center-to-center spacings of a photoresist pattern for an array of holes applied to a thin metal sheet are increased by uniformly stretching the thin metal sheet in all directions along the plane of the sheet. The uniform stretching is provided by securely clamping the periphery of the sheet and applying an annular force against the face of the sheet, within the periphery of the sheet and around the photoresist pattern. The technique is used in the construction of ion thruster grid units where the outer or downstream grid is subjected to uniform stretching prior to convex molding. The technique provides alignment of the holes of grid pairs so as to direct the ion beamlets in a direction parallel to the axis of the grid unit and thereby provide optimization of the available thrust.

  14. Residential scene classification for gridded population sampling in developing countries using deep convolutional neural networks on satellite imagery.

    PubMed

    Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark

    2018-05-09

    Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks: it is labor-intensive and prone to human error, and it creates the need for simplifying assumptions during calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model to predict whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost. Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce annotation burden with comparable quality to human analysts.
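
    As a concrete illustration of the kind of model involved, the sketch below defines a small convolutional network in PyTorch that maps an aerial image chip to a residential/nonresidential label. It is not the authors' architecture (the paper's model and training setup are not reproduced here); the layer sizes, chip size, and label encoding are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Minimal binary image classifier (residential vs. nonresidential). This is only a
# sketch of the type of convolutional model that could be trained on gridded aerial
# image chips, not the model described in the paper.
class ResidentialClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)   # logits for {nonresidential, residential}

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ResidentialClassifier()
chips = torch.randn(4, 3, 64, 64)     # a batch of 64x64 RGB image chips (random placeholders)
predicted = model(chips).argmax(dim=1)  # 0 = nonresidential, 1 = residential
print(predicted.shape)                  # torch.Size([4])
```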

  15. Micro-electro-fluidic grids for nematodes: a lens-less, image-sensor-less approach for on-chip tracking of nematode locomotion.

    PubMed

    Liu, Peng; Martin, Richard J; Dong, Liang

    2013-02-21

    This paper reports on the development of a lens-less and image-sensor-less micro-electro-fluidic (MEF) approach for real-time monitoring of the locomotion of microscopic nematodes. The technology showed promise for overcoming the constraint of the limited field of view of conventional optical microscopy, with relatively low cost, good spatial resolution, and high portability. The core of the device was microelectrode grids formed by orthogonally arranging two identical arrays of microelectrode lines. The two microelectrode arrays were spaced by a microfluidic chamber containing a liquid medium of interest. As a nematode (e.g., Caenorhabditis elegans) moved inside the chamber, the invasion of part of its body into some intersection regions between the microelectrodes caused changes in the electrical resistance of these intersection regions. The worm's presence at, or absence from, a detection unit was determined by a comparison between the measured resistance variation of this unit and a pre-defined threshold resistance variation. An electronic readout circuit was designed to address all the detection units and read out their individual electrical resistances. By this means, it was possible to obtain the electrical resistance profile of the whole MEF grid, and thus, the physical pattern of the swimming nematode. We studied the influence of a worm's body on the resistance of an addressed unit. We also investigated how the full-frame scanning and readout rates of the electronic circuit and the dimensions of a detection unit affected the spatial resolution of the reconstructed images of the nematode. Other important issues, such as the manufacturing-induced initial non-uniformity of the grids and the electrotaxic behaviour of nematodes, were also studied. A drug resistance screening experiment was conducted by using the grids with a good resolution of 30 × 30 μm². The phenotypic differences in the locomotion behaviours (e.g., moving speed and oscillation frequency extracted from the reconstructed images with the help of software) between the wild-type (N2) and mutant (lev-8) C. elegans worms in response to different doses of the anthelmintic drug, levamisole, were investigated. The locomotive parameters obtained by the MEF grids agreed well with those obtained by optical microscopy. Therefore, this technology will benefit whole-animal assays by providing a structurally simple, potentially cost-effective device capable of tracking the movement and phenotypes of important nematodes in various microenvironments.
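
    The read-out idea described above reduces to comparing each intersection's resistance change against a pre-defined threshold to build a binary occupancy map of the grid. The sketch below illustrates that step with synthetic numbers; the grid size, baseline resistance, noise level, and threshold are all illustrative assumptions rather than values from the paper.

```python
import numpy as np

# Threshold a grid of resistance changes to locate the worm's body.
rng = np.random.default_rng(1)
baseline = np.full((16, 16), 1000.0)                 # ohms, uniform baseline for simplicity
measured = baseline + rng.normal(0, 5, (16, 16))     # electrical noise on each detection unit
measured[7, 3:12] += 200.0                           # worm body spanning several units (synthetic)

delta = measured - baseline
threshold = 50.0                                     # pre-defined resistance change, ohms
worm_map = delta > threshold                         # True where the worm is detected

print(np.argwhere(worm_map))                         # coordinates of occupied detection units
```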

  16. Renewables-Friendly Grid Development Strategies. Experience in the United States, Potential Lessons for China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurlbut, David; Zhou, Ella; Porter, Kevin

    2015-10-01

    This report aims to help China's reform effort by providing a concise summary of experience in the United States with "renewables-friendly" grid management, focusing on experiences that might be applicable to China. It focuses on utility-scale renewables and sets aside issues related to distributed generation.

  17. On-site fuel cell field test support program

    NASA Astrophysics Data System (ADS)

    Staniunas, J. W.; Merten, G. P.

    1982-01-01

    In order to assess the impact of grid connection on the potential market for fuel cell service, applications studies were conducted to identify the fuel cell operating modes and corresponding fuel cell sizing criteria which offer the most potential for initial commercial service. The market for grid-connected fuel cell service was quantified using United's market analysis program and computerized building data base. Electric and gas consumption data for 268 buildings was added to our surveyed building data file, bringing the total to 407 buildings. These buildings were analyzed for grid-isolated and grid-connected fuel cell service. The results of the analyses indicated that the nursing home, restaurant and health club building sectors offer significant potential for fuel cell service.

  18. Mapping Mars' northern plains: origins, evolution and response to climate change - an overview of the grid mapping method.

    NASA Astrophysics Data System (ADS)

    Ramsdale, Jason; Balme, Matthew; Conway, Susan

    2015-04-01

    An International Space Science Institute (ISSI) team project has been convened to study the northern plains of Mars. The northern plains are younger and at lower elevation than the majority of the martian surface and are thought to be the remnants of an ancient ocean. Understanding the surface geology and geomorphology of the northern plains is complex because the surface has been subtly modified many times, making traditional unit boundaries hard to define. Our ISSI team project aims to answer the following questions: 1) "What is the distribution of ice-related landforms in the northern plains, and can it be related to distinct latitude bands or different geological or geomorphological units?" 2) "What is the relationship between the latitude dependent mantle (LDM; a draping unit believed to comprise ice and dust and thought to be deposited during periods of high axial obliquity) and (i) landforms indicative of ground ice, and (ii) other geological units in the northern plains?" 3) "What are the distributions and associations of recent landforms indicative of thaw of ice or snow?" With increasing coverage of high-resolution images of the surface of Mars, we are able to identify increasing numbers and varieties of small-scale landforms. Many such landforms are too small to represent on regional maps, yet determining their presence or absence across large areas can form the observational basis for developing hypotheses on the nature and history of an area. The combination of improved spatial resolution with near-continuous coverage increases the time required to analyse the data. This becomes problematic when attempting regional or global-scale studies of metre-scale landforms. Here, we describe an approach to mapping small features across large areas. Rather than traditional mapping with points, lines and polygons, we used a grid "tick box" approach to locate specific landforms. Each study area's mapping strip was divided into a 15×150 grid of squares, each approximately 20×20 km. Orbital images at 6-15 m/pix were then viewed systematically for each grid square, and the presence or absence of each landform in the basic suite was recorded. The landforms were recorded as being either "present", "dominant", "possible", or "absent" in each grid square. The result is a series of coarse-resolution "rasters" showing the distribution of the different types of landforms across the strip. We have found this approach to be efficient, scalable and appropriate for teams of people mapping remotely. It is easily scalable because carrying the "absent" values forward from the larger grids to finer grids means that only areas with positive values for that landform need to be examined to increase the resolution for the whole strip. As each sub-grid only requires ascertaining the presence or absence of a landform, the method removes an individual's decision as to where to draw boundaries, making it efficient and repeatable.
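
    In data terms, the method amounts to one coarse categorical raster per landform, with each grid square holding one of the four recorded states. The sketch below shows one way such records could be held and summarized in Python; the landform names and the example observation are illustrative, while the 15×150 grid dimensions follow the text.

```python
import numpy as np

# One 15 x 150 raster of state codes per landform, matching the grid "tick box" scheme.
STATES = {"absent": 0, "possible": 1, "present": 2, "dominant": 3}
landforms = ["polygonised_ground", "scalloped_depressions", "gullies"]   # illustrative names

rasters = {name: np.zeros((15, 150), dtype=np.uint8) for name in landforms}

# Recording an observation for one ~20 x 20 km grid square (row 4, column 87).
rasters["gullies"][4, 87] = STATES["present"]

# Coarse distribution summary: fraction of squares where the landform is present or dominant.
for name, grid in rasters.items():
    frac = np.mean(grid >= STATES["present"])
    print(f"{name}: {frac:.3f} of grid squares")
```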

  19. Improved Grid-Array Millimeter-Wave Amplifier

    NASA Technical Reports Server (NTRS)

    Rosenberg, James J.; Rutledge, David B.; Smith, R. Peter; Weikle, Robert

    1993-01-01

    Improved grid-array amplifiers operating at millimeter and submillimeter wavelengths developed for use in communications and radar. Feedback suppressed by making input polarizations orthogonal to output polarizations. Amplifier made to oscillate by introducing some feedback. Several grid-array amplifiers concatenated to form high-gain beam-amplifying unit.

  20. Groundwater-quality data in the Western San Joaquin Valley study unit, 2010 - Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Landon, Matthew K.; Shelton, Jennifer L.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the approximately 2,170-square-mile Western San Joaquin Valley (WSJV) study unit was investigated by the U.S. Geological Survey (USGS) from March to July 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program's Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The WSJV study unit was the twenty-ninth study unit to be sampled as part of the GAMA-PBP. The GAMA Western San Joaquin Valley study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system, and to facilitate statistically consistent comparisons of untreated groundwater quality throughout California. The primary aquifer system is defined as parts of aquifers corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the WSJV study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the WSJV study unit, groundwater samples were collected from 58 wells in 2 study areas (Delta-Mendota subbasin and Westside subbasin) in Stanislaus, Merced, Madera, Fresno, and Kings Counties. Thirty-nine of the wells were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells), and 19 wells were selected to aid in the understanding of aquifer-system flow and related groundwater-quality issues (understanding wells). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], low-level fumigants, and pesticides and pesticide degradates), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and naturally occurring inorganic constituents (trace elements, nutrients, dissolved organic carbon [DOC], major and minor ions, silica, total dissolved solids [TDS], alkalinity, total arsenic and iron [unfiltered] and arsenic, chromium, and iron species [filtered]). Isotopic tracers (stable isotopes of hydrogen, oxygen, and boron in water, stable isotopes of nitrogen and oxygen in dissolved nitrate, stable isotopes of sulfur in dissolved sulfate, isotopic ratios of strontium in water, stable isotopes of carbon in dissolved inorganic carbon, activities of tritium, and carbon-14 abundance), dissolved standard gases (methane, carbon dioxide, nitrogen, oxygen, and argon), and dissolved noble gases (argon, helium-4, krypton, neon, and xenon) were measured to help identify sources and ages of sampled groundwater. In total, 245 constituents and 8 water-quality indicators were measured. Quality-control samples (blanks, replicates, or matrix spikes) were collected at 16 percent of the wells in the WSJV study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. Blanks rarely contained detectable concentrations of any constituent, suggesting that contamination from sample collection procedures was not a significant source of bias in the data for the groundwater samples. Replicate samples all were within acceptable limits of variability. 
Matrix-spike recoveries were within the acceptable range (70 to 130 percent) for approximately 87 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-regulatory benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. Most inorganic constituents detected in groundwater samples from the 39 grid wells were detected at concentrations less than health-based benchmarks. Detections of organic and special-interest constituents from grid wells sampled in the WSJV study unit also were less than health-based benchmarks. In total, VOCs were detected in 12 of the 39 grid wells sampled (approximately 31 percent), pesticides and pesticide degradates were detected in 9 grid wells (approximately 23 percent), and perchlorate was detected in 15 grid wells (approximately 38 percent). Trace elements, major and minor ions, and nutrients were sampled for at 39 grid wells; most concentrations were less than health-based benchmarks. Exceptions include two detections of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L), 20 detections of boron greater than the CDPH notification level (NL-CA) of 1,000 μg/L, 2 detections of molybdenum greater than the USEPA lifetime health advisory level (HAL-US) of 40 μg/L, 1 detection of selenium greater than the MCL-US of 50 μg/L, 2 detections of strontium greater than the HAL-US of 4,000 μg/L, and 3 detections of nitrate greater than the MCL-US of 10 milligrams per liter (mg/L). Results for inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, sulfate, and TDS) showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in five grid wells. Manganese concentrations greater than the SMCL-CA of 50 μg/L were detected in 16 grid wells. Chloride concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were detected in 14 grid wells, and concentrations in 5 of these wells also were greater than the upper SMCL-CA benchmark of 500 mg/L. Sulfate concentrations greater than the recommended SMCL-CA benchmark of 250 mg/L were measured in 21 grid wells, and concentrations in 13 of these wells also were greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 36 grid wells, and concentrations in 20 of these wells also were greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  1. Hybrid PV/Wind Power Systems Incorporating Battery Storage and Considering the Stochastic Nature of Renewable Resources

    NASA Astrophysics Data System (ADS)

    Barnawi, Abdulwasa Bakr

    Hybrid power generation systems and distributed generation technology are attracting more investment due to the growing demand for energy and increasing awareness of emissions and their environmental impacts, such as global warming and pollution. The price fluctuation of crude oil is an additional reason for the leading oil-producing countries to consider renewable resources as an alternative. Saudi Arabia, the top oil-exporting country in the world, announced "Saudi Arabia Vision 2030," which targets the generation of 9.5 GW of electricity from renewable resources. Two of the most promising renewable technologies are wind turbines (WT) and photovoltaic cells (PV). The integration or hybridization of photovoltaics and wind turbines with battery storage leads to higher adequacy and redundancy for both autonomous and grid-connected systems. This study presents a method for optimal generation unit planning by installing a proper number of solar cells, wind turbines, and batteries in such a way that the net present value (NPV) is minimized while the overall system redundancy and adequacy are maximized. A new renewable fraction technique (RFT) is used to perform the generation unit planning. RFT was tested and validated with particle swarm optimization and HOMER Pro under the same conditions and environment. The randomness and uncertainty of renewable resources and load are considered. Both autonomous and grid-connected system designs were adopted in the optimal generation unit planning process. An uncertainty factor was designed and incorporated in both the autonomous and grid-connected system designs. In the autonomous hybrid system design model, a strategy that includes an additional amount of operating reserve, as a percentage of the hourly load, was considered to deal with resource uncertainty, since the battery storage system is the only backup. In the grid-connected hybrid system design model, demand response was incorporated, in addition to the designed uncertainty factor, to overcome the impact of uncertainty and to perform energy trading between the hybrid grid utility and the main grid utility. After the generation unit planning was carried out and component sizing was determined, adequacy evaluation was conducted by calculating the loss-of-load-expectation adequacy index for different contingency criteria, considering the probability of equipment failure. Finally, microgrid planning was conducted by finding the proper size and location for installing distributed generation units in a radial distribution network.
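
    The adequacy index mentioned above, loss of load expectation (LOLE), is essentially a count of expected hours in which load exceeds available capacity. The sketch below shows that calculation for an hourly series; the load and capacity numbers are illustrative, and a full study would enumerate capacity-outage states with forced-outage rates rather than use a fixed series.

```python
# Minimal loss-of-load-expectation (LOLE) sketch for an adequacy check of the kind
# described above. All figures are illustrative placeholders.
def lole_hours_per_year(hourly_load_kw, hourly_available_capacity_kw):
    """Count the hours in a year in which load exceeds available capacity."""
    return sum(1 for load, cap in zip(hourly_load_kw, hourly_available_capacity_kw) if load > cap)

# Toy example: a flat 80 kW load against a capacity series that occasionally dips.
load = [80.0] * 8760
capacity = [120.0] * 8760
for h in (100, 101, 4000):          # a few hypothetical outage hours
    capacity[h] = 60.0

print(lole_hours_per_year(load, capacity), "hours/year")   # 3 hours/year in this toy case
```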

  2. Fourth International Workshop on Grid Simulator Testing of Wind Turbine

    Science.gov Websites

    Agenda excerpts recovered from the workshop webpage: Smart Reconfiguration and Protection in Advanced Electric Distribution Grids; Capabilities in Kinectrics (Nicolas Wrathall, Kinectrics, Canada); Discussion. Day 2, April 26, 2017, Advanced Grid Emulation Methods: Advanced PHIL Interface for Multi-MW Scale Inverter Testing.

  3. Sensor Transmission Power Schedule for Smart Grids

    NASA Astrophysics Data System (ADS)

    Gao, C.; Huang, Y. H.; Li, J.; Liu, X. D.

    2017-11-01

    Smart grids have attracted much attention owing to the requirements of new renewable-energy generation. Real-time state estimation, aided by phasor measurement units, plays an important role in keeping the smart grid stable and efficient. However, related work does not consider the limitations of the communication channel. Considering the limited on-board batteries of wireless sensors in the smart grid, a transmission power schedule is designed in this paper that minimizes energy consumption subject to a constraint on extended Kalman filter (EKF) performance. Based on event-triggered estimation theory, a filtering algorithm is also provided that utilizes the information contained in the power schedule. Finally, its feasibility and performance are demonstrated using the standard IEEE 39-bus system with phasor measurement units (PMUs).
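
    A hedged sketch of the event-triggered idea the abstract alludes to: a sensor transmits a measurement only when its innovation (the difference from the estimator's prediction) is large, which saves transmission energy while the estimator simply propagates its prediction otherwise. The scalar model, gain, and trigger threshold below are illustrative, not the paper's design.

```python
import numpy as np

def should_transmit(measurement, predicted_measurement, threshold):
    return abs(measurement - predicted_measurement) > threshold

rng = np.random.default_rng(2)
x_hat, a, threshold = 1.0, 0.95, 0.1     # scalar state estimate, dynamics, trigger level
transmissions = 0
for _ in range(100):
    x_hat_pred = a * x_hat                           # one-step prediction
    z = x_hat_pred + rng.normal(0, 0.05)             # noisy measurement (toy model)
    if should_transmit(z, x_hat_pred, threshold):
        x_hat = x_hat_pred + 0.5 * (z - x_hat_pred)  # update with the received measurement
        transmissions += 1
    else:
        x_hat = x_hat_pred                           # no transmission: prediction only
print(f"{transmissions} of 100 measurements transmitted")   # only a few, saving energy
```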

  4. Grid parity analysis of stand-alone hybrid microgrids: A comparative study of Germany, Pakistan, South Africa and the United States

    NASA Astrophysics Data System (ADS)

    Siddiqui, Jawad M.

    Grid parity for alternative energy resources occurs when the cost of electricity generated from the source is lower than or equal to the purchasing price of power from the electricity grid. This thesis aims to quantitatively analyze the evolution of hybrid stand-alone microgrids in the US, Germany, Pakistan and South Africa to determine grid parity for a solar PV/Diesel/Battery hybrid system. The Energy System Model (ESM) and NREL's Hybrid Optimization of Multiple Energy Resources (HOMER) software are used to simulate the microgrid operation and determine a Levelized Cost of Electricity (LCOE) figure for each location. This cost per kWh is then compared with two distinct estimates of future retail electricity prices at each location to determine grid parity points. Analysis results reveal that future estimates of LCOE for such hybrid stand-alone microgrids range within 35-55 cents/kWh over the 25-year study period. Grid parity occurs earlier in locations with higher power prices or unreliable grids. For Pakistan, grid parity has already been reached, while Germany reaches parity between 2023 and 2029. Results for South Africa suggest parity between 2040 and 2045. In the US, places with low grid prices do not reach parity during the study period. Sensitivity analysis results reveal the significant impact of financing and the cost of capital on these grid parity points, particularly in the developing markets of Pakistan and South Africa. Overall, the study helps conclude that variations in energy markets may determine the fate of emerging energy technologies like microgrids. However, policy interventions have a significant impact on the final outcome, such as grid parity in this case. Measures such as eliminating uncertainty in policies and improving financing can help these grids overcome barriers in developing economies, where they may find greater use much earlier.
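
    The underlying comparison is a standard one: compute the levelized cost of electricity as discounted lifetime costs over discounted lifetime energy, then find the first year in which the projected retail tariff crosses it. The sketch below shows that calculation; all figures (costs, output, tariff, escalation, discount rate) are illustrative placeholders, not the thesis's inputs or results.

```python
# Minimal LCOE and grid-parity-year sketch under stated assumptions.
def lcoe(capex, annual_opex, annual_kwh, discount_rate, years):
    """Levelized cost of electricity: discounted costs divided by discounted energy."""
    disc_costs = capex + sum(annual_opex / (1 + discount_rate) ** t for t in range(1, years + 1))
    disc_energy = sum(annual_kwh / (1 + discount_rate) ** t for t in range(1, years + 1))
    return disc_costs / disc_energy

system_lcoe = lcoe(capex=60_000.0, annual_opex=2_500.0, annual_kwh=18_000.0,
                   discount_rate=0.08, years=25)

# Grid parity: first year in which an escalating retail tariff reaches the system LCOE.
tariff0, escalation = 0.18, 0.05            # $/kWh today, assumed annual growth
parity_year = next((t for t in range(1, 26) if tariff0 * (1 + escalation) ** t >= system_lcoe), None)
print(f"LCOE = {system_lcoe:.2f} $/kWh, parity in year {parity_year}")
```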

  5. Groundwater-quality data in the North San Francisco Bay Shallow Aquifer study unit, 2012: results from the California GAMA Program

    USGS Publications Warehouse

    Bennett, George L.; Fram, Miranda S.

    2014-01-01

    Results for constituents with non-regulatory benchmarks set for aesthetic concerns from the grid wells showed that iron concentrations greater than the CDPH secondary maximum contaminant level (SMCL-CA) of 300 μg/L were detected in 13 grid wells. Chloride was detected at a concentration greater than the SMCL-CA recommended benchmark of 250 mg/L in two grid wells. Sulfate concentrations greater than the SMCL-CA recommended benchmark of 250 mg/L were measured in two grid wells, and the concentration in one of these wells was also greater than the SMCL-CA upper benchmark of 500 mg/L. TDS concentrations greater than the SMCL-CA recommended benchmark of 500 mg/L were measured in 15 grid wells, and concentrations in 4 of these wells were also greater than the SMCL-CA upper benchmark of 1,000 mg/L.

  6. A study using a Monte Carlo method of the optimal configuration of a distribution network in terms of power loss sensing.

    PubMed

    Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul

    2011-01-01

    Recently there have been many studies of power systems with a focus on "New and Renewable Energy" as part of the "New Growth Engine Industry" promoted by the Korean government. "New and Renewable Energy," especially wind energy, solar energy, and fuel cells that will replace conventional fossil fuels, is part of the Power-IT Sector, which is the basis of the SmartGrid. A SmartGrid is a form of highly efficient, intelligent electricity network that allows interactivity (two-way communications) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution, and consumption. The New and Renewable Energy Program has been driven, through intensive studies by public and private institutions, with the goal of developing and disseminating new and renewable energy sources which, unlike conventional systems, are operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI's IntelliGrid research program. The European Union (EU), which represents Europe's Smart Grid policy, has focused on an expansion of distributed generation (decentralized generation) and power trade between countries with improved environmental protection. Thus, there is current emphasis on the need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, calculations of the best obtainable profits were made by a Monte Carlo simulation. Monte Carlo simulations, which rely on repeated random sampling to compute their results, take into account the cost of electricity production, daily loads, and the cost of sales, and they generate a result faster than closed-form mathematical computation. In addition, we suggest an optimal design that considers the distribution losses associated with power distribution systems, with a focus on the sensing aspect, and distributed power generation.
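
    A hedged Monte Carlo sketch of the kind of profit estimate described above: sample uncertain daily load and generation cost, compute daily profit, and average over many trials. The distributions, prices, and sizes below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_daily_profit(n_trials=100_000, sale_price=0.15):
    """Repeated random sampling of uncertain load and generation cost (toy model)."""
    load_kwh = rng.normal(500.0, 60.0, n_trials).clip(min=0)     # uncertain daily demand
    unit_cost = rng.normal(0.09, 0.01, n_trials).clip(min=0)     # uncertain generation cost, $/kWh
    revenue = sale_price * load_kwh
    cost = unit_cost * load_kwh
    return revenue - cost

profits = simulate_daily_profit()
print(f"expected daily profit: {profits.mean():.1f} $, "
      f"5th percentile: {np.percentile(profits, 5):.1f} $")
```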

  7. Monthly fractional green vegetation cover associated with land cover classes of the conterminous USA

    USGS Publications Warehouse

    Gallo, Kevin P.; Tarpley, Dan; Mitchell, Ken; Csiszar, Ivan; Owen, Timothy W.; Reed, Bradley C.

    2001-01-01

    The land cover classes developed under the coordination of the International Geosphere-Biosphere Programme Data and Information System (IGBP-DIS) have been analyzed for a study area that includes the Conterminous United States and portions of Mexico and Canada. The 1-km resolution data have been analyzed to produce a gridded data set that includes within each 20-km grid cell: 1) the three most dominant land cover classes, 2) the fractional area associated with each of the three dominant classes, and 3) the fractional area covered by water. Additionally, the monthly fraction of green vegetation cover (fgreen) associated with each of the three dominant land cover classes per grid cell was derived from a 5-year climatology of 1-km resolution NOAA-AVHRR data. The variables derived in this study provide a potential improvement over the use of monthly fgreen linked to a single land cover class per model grid cell.
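
    A sketch of the aggregation step described above: within each 20-km grid cell (a 20 × 20 block of 1-km pixels), identify the dominant land cover classes and their fractional areas. The class codes and the synthetic classification below are illustrative placeholders, not the IGBP-DIS data.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
landcover_1km = rng.integers(1, 6, size=(200, 200))    # fake 1-km class map, classes 1..5

def summarize_cell(block, n_dominant=3):
    """Return the n_dominant most common classes in a block and their area fractions."""
    counts = Counter(block.ravel().tolist())
    total = block.size
    return [(cls, cnt / total) for cls, cnt in counts.most_common(n_dominant)]

# Example: dominant classes and fractions for the 20-km cell in block row 0, column 0.
cell = landcover_1km[0:20, 0:20]
print(summarize_cell(cell))    # e.g. [(3, 0.23), (1, 0.21), (5, 0.20)]
```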

  8. A grid amplifier

    NASA Technical Reports Server (NTRS)

    Kim, Moonil; Weikle, Robert M., II; Hacker, Jonathan B.; Delisio, Michael P.; Rutledge, David B.; Rosenberg, James J.; Smith, R. P.

    1991-01-01

    A 50-MESFET grid amplifier is reported that has a gain of 11 dB at 3.3 GHz. The grid isolates the input from the output by using vertical polarization for the input beam and horizontal polarization for the transmitted output beam. The grid unit cell is a two-MESFET differential amplifier. A simple calibration procedure allows the gain to be calculated from a relative power measurement. This grid is a hybrid circuit, but the structure is suitable for fabrication as a monolithic wafer-scale integrated circuit, particularly at millimeter wavelengths.

  9. Decentralized control of units in smart grids for the support of renewable energy supply

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnenschein, Michael, E-mail: Michael.Sonnenschein@Uni-Oldenburg.DE; Lünsdorf, Ontje, E-mail: Ontje.Luensdorf@OFFIS.DE; Bremer, Jörg, E-mail: Joerg.Bremer@Uni-Oldenburg.DE

    Due to the significant environmental impact of power production from fossil fuels and nuclear fission, future energy systems will increasingly rely on distributed and renewable energy sources (RES). The electrical feed-in from photovoltaic (PV) systems and wind energy converters (WEC) varies greatly both over short and long time periods (from minutes to seasons), and (not only) by this effect the supply of electrical power from RES and the demand for electrical power are not per se matching. In addition, with a growing share of generation capacity especially in distribution grids, the top-down paradigm of electricity distribution is gradually replaced by a bottom-up power supply. This altogether leads to new problems regarding the safe and reliable operation of power grids. In order to address these challenges, the notion of Smart Grids has been introduced. The inherent flexibilities, i.e. the set of feasible power schedules, of distributed power units have to be controlled in order to support demand–supply matching as well as stable grid operation. Controllable power units are e.g. combined heat and power plants, power storage systems such as batteries, and flexible power consumers such as heat pumps. By controlling the flexibilities of these units we are particularly able to optimize the local utilization of RES feed-in in a given power grid by integrating both supply and demand management measures with special respect to the electrical infrastructure. In this context, decentralized systems, autonomous agents and the concept of self-organizing systems will become key elements of the ICT based control of power units. In this contribution, we first show how a decentralized load management system for battery charging/discharging of electrical vehicles (EVs) can increase the locally used share of supply from PV systems in a low voltage grid. For a reliable demand side management of large sets of appliances, dynamic clustering of these appliances into uniformly controlled appliance sets is necessary. We introduce a method for self-organized clustering for this purpose and show how control of such clusters can affect load peaks in distribution grids. Subsequently, we give a short overview on how we are going to expand the idea of self-organized clusters of units into creating a virtual control center for dynamic virtual power plants (DVPP) offering products at a power market. For an efficient organization of DVPPs, the flexibilities of units have to be represented in a compact and easy to use manner. We give an introduction to how the problem of representing a set of possibly 10^100 feasible schedules can be solved by a machine-learning approach. In summary, this article provides an overall impression of how we use agent based control techniques and methods of self-organization to support the further integration of distributed and renewable energy sources into power grids and energy markets. - Highlights: • Distributed load management for electrical vehicles supports local supply from PV. • Appliances can self-organize into so-called virtual appliances for load control. • Dynamic VPPs can be controlled by extensively decentralized control centers. • Flexibilities of units can efficiently be represented by support-vector descriptions.
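
    The final highlight above, representing unit flexibilities by support-vector descriptions, can be illustrated very loosely by learning a one-class boundary around sampled feasible schedules and using it as a compact membership test. The sketch below is not the authors' method; the battery limits, kernel, and parameters are all assumptions.

```python
# Loose illustration of the idea of a support-vector description of a unit's
# flexibility: sample feasible schedules, learn a one-class boundary, and use
# it as a compact membership test. All parameters below are assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)
T, P_MAX, E_MAX = 4, 5.0, 10.0   # horizon steps, power limit (kW), energy limit (kWh)

def sample_feasible(n):
    """Rejection-sample charging schedules within the power and energy limits."""
    out = []
    while len(out) < n:
        s = rng.uniform(0.0, P_MAX, size=T)
        if s.sum() <= E_MAX:          # simple energy constraint
            out.append(s)
    return np.array(out)

schedules = sample_feasible(500)
model = OneClassSVM(kernel="rbf", gamma=0.2, nu=0.05).fit(schedules)

candidate_ok = np.array([[1.0, 2.0, 2.0, 1.5]])    # respects both limits
candidate_bad = np.array([[5.0, 5.0, 5.0, 5.0]])   # violates the energy limit
print(model.predict(candidate_ok), model.predict(candidate_bad))  # +1 inside, -1 outside
```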

  10. Maps and grids of hydrogeologic information created from standardized water-well drillers’ records of the glaciated United States

    USGS Publications Warehouse

    Bayless, E. Randall; Arihood, Leslie D.; Reeves, Howard W.; Sperl, Benjamin J.S.; Qi, Sharon L.; Stipe, Valerie E.; Bunch, Aubrey R.

    2017-01-18

    As part of the National Water Availability and Use Program established by the U.S. Geological Survey (USGS) in 2005, this study took advantage of about 14 million records from State-managed collections of water-well drillers’ records and created a database of hydrogeologic properties for the glaciated United States. The water-well drillers’ records were standardized to be relatively complete and error-free and to provide consistent variables and naming conventions that span all State boundaries. Maps and geospatial grids were developed for (1) total thickness of glacial deposits, (2) total thickness of coarse-grained deposits, (3) specific-capacity based transmissivity and hydraulic conductivity, and (4) texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity. The information included in these maps and grids is required for most assessments of groundwater availability, in addition to having applications to studies of groundwater flow and transport. The texture-based estimated equivalent horizontal and vertical hydraulic conductivity and transmissivity were based on an assumed range of hydraulic conductivity values for coarse- and fine-grained deposits and should only be used with complete awareness of the methods used to create them. However, the maps and grids of texture-based estimated equivalent hydraulic conductivity and transmissivity may be useful for application to areas where a range of measured values is available for re-scaling. Maps of hydrogeologic information for some States are presented as examples in this report, but maps and grids for all States are available electronically at the project Web site (USGS Glacial Aquifer System Groundwater Availability Study, http://mi.water.usgs.gov/projects/WaterSmart/Map-SIR2015-5105.html) and the Science Base Web site, https://www.sciencebase.gov/catalog/item/58756c7ee4b0a829a3276352.
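
    The texture-based equivalent conductivities described above are commonly computed as a thickness-weighted arithmetic mean (horizontal) and a harmonic mean (vertical) over the logged layers. The sketch below illustrates that arithmetic under assumed conductivity values for coarse- and fine-grained textures; the values are placeholders, not those used in the report.

```python
# Hedged sketch of texture-based equivalent hydraulic properties for a layered
# well log: thickness-weighted arithmetic mean for horizontal K, harmonic mean
# for vertical K, and transmissivity as Kh times saturated thickness.
# The K values assigned to "coarse" and "fine" textures are assumptions.
K_ASSUMED = {"coarse": 30.0, "fine": 0.3}   # assumed hydraulic conductivity, m/day

def equivalent_properties(layers):
    """layers: list of (texture, thickness_m) tuples from a driller's log."""
    b_total = sum(b for _, b in layers)
    kh = sum(K_ASSUMED[t] * b for t, b in layers) / b_total          # arithmetic mean
    kv = b_total / sum(b / K_ASSUMED[t] for t, b in layers)          # harmonic mean
    return kh, kv, kh * b_total                                      # Kh, Kv, T (m^2/day)

log = [("fine", 10.0), ("coarse", 5.0), ("fine", 20.0), ("coarse", 3.0)]
kh, kv, transmissivity = equivalent_properties(log)
print(round(kh, 2), round(kv, 3), round(transmissivity, 1))
```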

  11. Asbestos Air Monitoring Results at Eleven Family Housing Areas throughout the United States.

    DTIC Science & Technology

    1991-05-23

    limits varied depending on sampling volumes and grid openings scanned. Therefore, the detection limits presented in the results summary tables vary... [(1 f/10 grid squares) × (855 mm²) × (1 liter)] / [(0.005 f/cc) × (0.0056 mm²) × (1000 cc)] = 3054 liters. Where: * 1 f/10 grid squares (the maximum recommended... diameter filter. * 0.0056 mm² is the area of each grid square (75 µm per side) in a 200-mesh electron microscope grid. This value will vary from 0.0056
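
    The detection-limit arithmetic quoted above can be checked directly; the sketch below reproduces the 3054-liter figure from the stated filter area, grid-opening area, counting rule, and target analytical sensitivity.

```python
# Worked check of the detection-limit arithmetic quoted above (values taken
# from the record; the counting rule is 1 fibre over 10 grid openings).
fibers = 1.0            # 1 fibre counted
grid_squares = 10       # over 10 grid openings
grid_area = 0.0056      # mm^2 per grid opening (75 um per side, 200 mesh)
filter_area = 855.0     # mm^2 effective filter area
target_conc = 0.005     # f/cc analytical sensitivity

fibers_on_filter = fibers / grid_squares / grid_area * filter_area
volume_cc = fibers_on_filter / target_conc
print(round(volume_cc / 1000))   # -> 3054 liters
```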

  12. 76 FR 67793 - Notification of Expanded Pricing Grid for Precious Metals Products Containing Platinum and Gold...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-02

    ... DEPARTMENT OF THE TREASURY United States Mint Notification of Expanded Pricing Grid for Precious... in the Federal Register on January 6, 2009, outlining the new pricing methodology for numismatic... considerably, and is approaching the upper bracket of the pricing grid. As a result, it is necessary to expand...

  13. Development of a pressure based multigrid solution method for complex fluid flows

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1991-01-01

    In order to reduce the computational difficulty associated with a single grid (SG) solution procedure, the multigrid (MG) technique was identified as a useful means for improving the convergence rate of iterative methods. A full MG full approximation storage (FMG/FAS) algorithm is used to solve the incompressible recirculating flow problems in complex geometries. The algorithm is implemented in conjunction with a pressure correction staggered grid type of technique using the curvilinear coordinates. In order to show the performance of the method, two flow configurations, one a square cavity and the other a channel, are used as test problems. Comparisons are made between the iterations, equivalent work units, and CPU time. Besides showing that the MG method can yield substantial speed-up with wide variations in Reynolds number, grid distributions, and geometry, issues such as the convergence characteristics of different grid levels, the choice of convection schemes, and the effectiveness of the basic iteration smoothers are studied. An adaptive grid scheme is also combined with the MG procedure to explore the effects of grid resolution on the MG convergence rate as well as the numerical accuracy.
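
    To make the multigrid idea concrete, the sketch below applies a two-grid correction cycle (pre-smoothing, coarse-grid correction, post-smoothing) to a 1-D Poisson problem. It is only an illustration of the MG principle; the pressure-correction FMG/FAS solver described above is considerably more elaborate.

```python
# Illustrative two-grid correction cycle for -u'' = f on the unit interval,
# meant only to show the multigrid idea of smoothing plus coarse-grid
# correction. All choices (damped Jacobi, injection, linear interpolation)
# are textbook defaults, not the paper's scheme.
import numpy as np

def jacobi(u, f, h, iters, omega=2/3):
    for _ in range(iters):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def two_grid(u, f, h, nu=3):
    u = jacobi(u, f, h, nu)                                      # pre-smoothing
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / (h*h)     # residual
    rc = r[::2].copy()                                           # restriction (injection)
    ec = jacobi(np.zeros_like(rc), rc, 2*h, 50)                  # approximate coarse solve
    e = np.zeros_like(u)
    e[::2] = ec                                                  # prolongation
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])                         # (linear interpolation)
    return jacobi(u + e, f, h, nu)                               # post-smoothing

n = 65
x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)                                 # exact solution u = sin(pi x)
u = np.zeros(n)
for _ in range(20):
    u = two_grid(u, f, h)
print(np.max(np.abs(u - np.sin(np.pi * x))))                     # small, discretization-level error
```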

  14. Groundwater-quality data in the Bear Valley and Selected Hard Rock Areas study unit, 2010: Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Belitz, Kenneth

    2013-01-01

    Groundwater quality in the 112-square-mile Bear Valley and Selected Hard Rock Areas (BEAR) study unit was investigated by the U.S. Geological Survey (USGS) from April to August 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The BEAR study unit was the thirty-first study unit to be sampled as part of the GAMA-PBP. The GAMA Bear Valley and Selected Hard Rock Areas study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system and to facilitate statistically consistent comparisons of untreated groundwater quality throughout California. The primary aquifer system is defined as the zones corresponding to the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the BEAR study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallow or deep water-bearing zones; shallow groundwater may be more vulnerable to surficial contamination. In the BEAR study unit, groundwater samples were collected from two study areas (Bear Valley and Selected Hard Rock Areas) in San Bernardino County. Of the 38 sampling sites, 27 were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the primary aquifer system in the study unit (grid sites), and the remaining 11 sites were selected to aid in the understanding of the potential groundwater-quality issues associated with septic tank use and with ski areas in the study unit (understanding sites). The groundwater samples were analyzed for organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, pharmaceutical compounds, and wastewater indicator compounds [WICs]), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), and inorganic constituents (trace elements, nutrients, dissolved organic carbon [DOC], major and minor ions, silica, total dissolved solids [TDS], alkalinity, and arsenic and iron species), and uranium and other radioactive constituents (radon-222 and activities of tritium and carbon-14). Isotopic tracers (of hydrogen and oxygen in water, of nitrogen and oxygen in dissolved nitrate, of dissolved boron, isotopic ratios of strontium in water, and of carbon in dissolved inorganic carbon) and dissolved noble gases (argon, helium-4, krypton, neon, and xenon) were measured to help identify the sources and ages of sampled groundwater. In total, groundwater samples were analyzed for 289 unique constituents and 8 water-quality indicators in the BEAR study unit. Quality-control samples (blanks, replicate pairs, or matrix spikes) were collected at 13 percent of the sites in the BEAR study unit, and the results for these samples were used to evaluate the quality of the data from the groundwater samples. Blank samples rarely contained detectable concentrations of any constituent, indicating that contamination from sample collection or analysis was not a significant source of bias in the data for the groundwater samples. Replicate pair samples all were within acceptable limits of variability. 
Matrix-spike sample recoveries were within the acceptable range (70 to 130 percent) for approximately 84 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-health-based benchmarks established for aesthetic concerns by CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All concentrations of organic and special-interest constituents from grid sites sampled in the BEAR study unit were less than health-based benchmarks. In total, VOCs were detected in 17 of the 27 grid sites sampled (approximately 63 percent), pesticides and pesticide degradates were detected in 4 grid sites (approximately 15 percent), and perchlorate was detected in 21 grid sites (approximately 78 percent). Inorganic constituents (trace elements, major and minor ions, nutrients, and uranium and other radioactive constituents) were sampled for at 27 grid sites; most concentrations were less than health-based benchmarks. Exceptions include one detection of arsenic greater than the USEPA maximum contaminant level (MCL-US) of 10 micrograms per liter (μg/L), three detections of uranium greater than the MCL-US of 30 μg/L, nine detections of radon-222 greater than the proposed MCL-US of 4,000 picocuries per liter (pCi/L), and one detection of fluoride greater than the CDPH maximum contaminant level (MCL-CA) of 2 milligrams per liter. Concentrations of inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, and TDS) were less than the CDPH secondary maximum contaminant level (SMCL-CA) in most grid sites. Exceptions include two detections of iron greater than the SMCL-CA of 300 μg/L and one detection of manganese greater than the SMCL-CA of 50 μg/L.
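
    The spatially distributed, randomized grid-based site selection mentioned above can be sketched as: overlay an equal-area grid on the study unit and draw one candidate well at random from each occupied cell. The cell size and candidate list below are illustrative, not the GAMA-PBP design values.

```python
# Hedged sketch of a spatially distributed, randomized grid-based selection:
# lay an equal-area grid over the study unit and draw one candidate well at
# random from each cell. Cell size and well coordinates are made up.
import random
random.seed(42)

CELL_KM = 5.0   # assumed cell size

def grid_based_selection(wells):
    """wells: list of (well_id, x_km, y_km); returns one random well per occupied cell."""
    cells = {}
    for well in wells:
        _, x, y = well
        cells.setdefault((int(x // CELL_KM), int(y // CELL_KM)), []).append(well)
    return [random.choice(candidates) for candidates in cells.values()]

wells = [(f"W{i}", random.uniform(0, 20), random.uniform(0, 15)) for i in range(200)]
grid_sites = grid_based_selection(wells)
print(len(grid_sites), "grid sites selected from", len(wells), "candidate wells")
```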

  15. Measurement of neutron dose equivalent outside and inside of the treatment vault of GRID therapy.

    PubMed

    Wang, Xudong; Charlton, Michael A; Esquivel, Carlos; Eng, Tony Y; Li, Ying; Papanikolaou, Nikos

    2013-09-01

    The purpose of this work was to evaluate the neutron and photon dose equivalent rates at the treatment vault entrance (Hn,D and HG) and to study the secondary radiation to the patient in GRID therapy. The radiation activation on the grid was also studied. A Varian Clinac 23EX accelerator was operated in 18 MV mode with a grid manufactured by .decimal, Inc. The Hn,D and HG were measured using an Andersson-Braun neutron REM meter and a Geiger Müller counter. The radiation activation on the grid was measured after irradiation with an ion-chamber γ-ray survey meter. The secondary radiation dose equivalent to the patient was evaluated by etched track detectors and OSL detectors on a RANDO(®) phantom. Within the measurement uncertainty, there is no significant difference between the Hn,D and HG with and without a grid. However, the neutron dose equivalent to the patient with the grid is, on average, 35.3% lower than that without the grid when using the same field size and the same number of monitor units. The photon dose equivalent to the patient with the grid is, on average, 44.9% lower. The measured average half-life of the radiation activation in the grid is 12.0 (± 0.9) min. The activation can be categorized into a fast decay component and a slow decay component with half-lives of 3.4 (± 1.6) min and 15.3 (± 4.0) min, respectively. There was no detectable radioactive contamination found on the surface of the grid through a wipe test. This work indicates that there is no significant change in the Hn,D and HG in GRID therapy compared with conventional external beam therapy. However, the neutron and scattered photon dose equivalent to the patient decrease dramatically with the grid and can be clinically irrelevant. Meanwhile, the users of a grid should be aware of the possible high dose to the radiation worker from the radiation activation on the surface of the grid. A delay in handling the grid after the beam delivery is suggested.
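
    Separating the activation decay into fast and slow components, as described above, amounts to fitting a two-term exponential. The sketch below fits such a model to synthetic data generated from the reported half-lives; the amplitudes and noise level are assumptions.

```python
# Illustrative fit of a two-component exponential decay (fast + slow); the
# data below are synthetic, generated from the half-lives in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a_fast, t_fast, a_slow, t_slow):
    return a_fast * np.exp(-np.log(2) * t / t_fast) + a_slow * np.exp(-np.log(2) * t / t_slow)

t = np.linspace(0, 60, 61)                              # minutes
rng = np.random.default_rng(3)
signal = two_exp(t, 40.0, 3.4, 20.0, 15.3) * (1 + 0.02 * rng.standard_normal(t.size))

popt, _ = curve_fit(two_exp, t, signal, p0=[30, 2, 10, 20])
print("fitted half-lives (min):", round(popt[1], 1), round(popt[3], 1))
```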

  16. Assimilation of Gridded GRACE Terrestrial Water Storage Estimates in the North American Land Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Zaitchik, Benjamin F.; Peters-Lidard, Christa D.; Rodell, Matthew; Reichle, Rolf; Li, Bailing; Jasinski, Michael; Mocko, David; Getirana, Augusto; De Lannoy, Gabrielle

    2016-01-01

    The objective of the North American Land Data Assimilation System (NLDAS) is to provide best available estimates of near-surface meteorological conditions and soil hydrological status for the continental United States. To support the ongoing efforts to develop data assimilation (DA) capabilities for NLDAS, the results of Gravity Recovery and Climate Experiment (GRACE) DA implemented in a manner consistent with NLDAS development are presented. Following previous work, GRACE terrestrial water storage (TWS) anomaly estimates are assimilated into the NASA Catchment land surface model using an ensemble smoother. In contrast to many earlier GRACE DA studies, a gridded GRACE TWS product is assimilated, spatially distributed GRACE error estimates are accounted for, and the impact that GRACE scaling factors have on assimilation is evaluated. Comparisons with quality-controlled in situ observations indicate that GRACE DA has a positive impact on the simulation of unconfined groundwater variability across the majority of the eastern United States and on the simulation of surface and root zone soil moisture across the country. Smaller improvements are seen in the simulation of snow depth, and the impact of GRACE DA on simulated river discharge and evapotranspiration is regionally variable. The use of GRACE scaling factors during assimilation improved DA results in the western United States but led to small degradations in the eastern United States. The study also found comparable performance between the use of gridded and basin averaged GRACE observations in assimilation. Finally, the evaluations presented in the paper indicate that GRACE DA can be helpful in improving the representation of droughts.

  17. 'Renewables-Friendly' Grid Development Strategies: Experience in the United States, Potential Lessons for China (Chinese Translation) (in Chinese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurlbut, David; Zhou, Ella; Porter, Kevin

    2015-10-03

    This is a Chinese translation of NREL/TP-6A20-64940. This report aims to help China's reform effort by providing a concise summary of experience in the United States with 'renewables-friendly' grid management, focusing on experiences that might be applicable to China. It focuses on utility-scale renewables and sets aside issues related to distributed generation.

  18. The importance of topography controlled sub-grid process heterogeneity in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.

    2015-12-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affect the partitioning of water and energy. However, it remains unclear to which extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge based model constraints reduces model uncertainty; and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both, the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as overall measure for model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 % respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. Besides, it was shown that suitable semi-quantitative prior constraints in combination with the transfer function based regularization approach of mHM, can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  19. The importance of topography-controlled sub-grid process heterogeneity and semi-quantitative prior constraints in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-03-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.
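
    The overall performance measure used above, the Euclidean distance to the optimal model across a set of signatures, can be written compactly as shown below. The signature names and scores are invented for illustration; only the distance calculation reflects the idea in the abstract.

```python
# Sketch of the evaluation idea: express each hydrological signature as a
# normalized performance in [0, 1] (1 = perfect) and summarize a model run by
# its Euclidean distance to the ideal point (all ones).
import numpy as np

def euclidean_distance_to_optimum(performance):
    """performance: dict of signature -> normalized score in [0, 1]."""
    scores = np.array(list(performance.values()), dtype=float)
    return float(np.sqrt(np.sum((1.0 - scores) ** 2)))

run_a = {"NSE_Q": 0.82, "low_flow_bias": 0.70, "runoff_ratio": 0.91, "flashiness": 0.65}
run_b = {"NSE_Q": 0.85, "low_flow_bias": 0.88, "runoff_ratio": 0.90, "flashiness": 0.72}
print(euclidean_distance_to_optimum(run_a), euclidean_distance_to_optimum(run_b))
```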

  20. Emissions & Generation Resource Integrated Database (eGRID), eGRID2002 (with years 1996 - 2000 data)

    EPA Pesticide Factsheets

    The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions of nitrogen oxides, sulfur dioxide, carbon dioxide, methane, nitrous oxide, and mercury; emissions rates; net generation; resource mix; and many other attributes. eGRID2002 (years 1996 through 2000 data) contains 16 Excel spreadsheets and the Technical Support Document, as well as the eGRID Data Browser, User's Manual, and Readme file. Archived eGRID data can be viewed as spreadsheets or by using the eGRID Data Browser. The eGRID spreadsheets can be manipulated by data users and enable users to view all the data underlying eGRID. The eGRID Data Browser enables users to view key data using powerful search features. Note that the eGRID Data Browser will not run on a Mac-based machine without Windows emulation.

  1. Simulation of Etching in Chlorine Discharges Using an Integrated Feature Evolution-Plasma Model

    NASA Technical Reports Server (NTRS)

    Hwang, Helen H.; Bose, Deepak; Govindan, T. R.; Meyyappan, M.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    To better utilize its vast collection of heterogeneous resources that are geographically distributed across the United States, NASA is constructing a computational grid called the Information Power Grid (IPG). This paper describes various tools and techniques that we are developing to measure and improve the performance of a broad class of NASA applications when run on the IPG. In particular, we are investigating the areas of grid benchmarking, grid monitoring, user-level application scheduling, and decentralized system-level scheduling.

  2. Impacts of P-f & Q-V Droop Control on MicroGrids Transient Stability

    NASA Astrophysics Data System (ADS)

    Zhao-xia, Xiao; Hong-wei, Fang

    The impacts of P-f & Q-V droop control on MicroGrid transient stability were investigated for a MicroGrid containing an asynchronous-generator wind unit. System frequency stability was explored when the motor load starts, when its load power changes, and when faults of different types occur at different locations. The simulations were performed in PSCAD/EMTDC.
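
    For readers unfamiliar with the control law being stressed here, a minimal sketch of P-f and Q-V droop is given below: the frequency reference sags with active power output and the voltage reference sags with reactive power output. The gains and set points are illustrative assumptions, not values from the study.

```python
# Minimal sketch of P-f and Q-V droop laws for an inverter-interfaced unit.
F_NOM, V_NOM = 50.0, 1.0        # Hz, per-unit voltage
KP, KQ = 0.01, 0.05             # assumed droop gains: Hz per kW, pu volt per kvar

def droop_setpoints(p_kw, q_kvar, p0=0.0, q0=0.0):
    """Return (frequency, voltage) references from measured P and Q."""
    freq = F_NOM - KP * (p_kw - p0)
    volt = V_NOM - KQ * (q_kvar - q0)
    return freq, volt

for p, q in [(0.0, 0.0), (10.0, 2.0), (20.0, 4.0)]:
    f, v = droop_setpoints(p, q)
    print(f"P={p:4.1f} kW  Q={q:3.1f} kvar ->  f={f:.2f} Hz  V={v:.2f} pu")
```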

  3. A Study Using a Monte Carlo Method of the Optimal Configuration of a Distribution Network in Terms of Power Loss Sensing

    PubMed Central

    Moon, Hyun Ho; Lee, Jong Joo; Choi, Sang Yule; Cha, Jae Sang; Kang, Jang Mook; Kim, Jong Tae; Shin, Myong Chul

    2011-01-01

    Recently there have been many studies of power systems with a focus on “New and Renewable Energy” as part of the “New Growth Engine Industry” promoted by the Korean government. “New and Renewable Energy”—especially wind energy, solar energy, and fuel cells that will replace conventional fossil fuels—is part of the Power-IT Sector, which is the basis of the SmartGrid. A SmartGrid is a highly efficient, intelligent electricity network that allows interactivity (two-way communication) between suppliers and consumers by utilizing information technology in electricity production, transmission, distribution, and consumption. The New and Renewable Energy Program has been driven with the goal of developing and disseminating, through intensive studies by public and private institutions, new and renewable energy sources that, unlike conventional systems, are operated through connections with various kinds of distributed power generation systems. Considerable research on smart grids has been pursued in the United States and Europe. In the United States, a variety of research activities on the smart power grid have been conducted within EPRI’s IntelliGrid research program. The European Union (EU), which represents Europe’s Smart Grid policy, has focused on an expansion of distributed generation (decentralized generation) and power trade between countries with improved environmental protection. Thus, there is a current emphasis on the need for studies that assess the economic efficiency of such distributed generation systems. In this paper, based on the cost of distributed power generation capacity, the best obtainable profits were calculated by a Monte Carlo simulation. Monte Carlo simulations, which rely on repeated random sampling to compute their results, take into account the cost of electricity production, daily loads, and the cost of sales, and they generate a result faster than direct mathematical computation. In addition, we suggest an optimal design that considers the distribution losses associated with power distribution systems, with a focus on the sensing aspect and on distributed power generation. PMID:22164047
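
    A minimal sketch of the kind of Monte Carlo profit estimate described above: sample uncertain daily load and sale price, compute revenue minus production cost for a candidate distributed-generation capacity, and average over many trials. All numbers are illustrative assumptions, not the paper's data.

```python
# Hedged Monte Carlo sketch: mean daily profit of a candidate distributed
# generation capacity under uncertain load and price. All figures are invented.
import random
random.seed(7)

def simulate_daily_profit(capacity_kw, trials=10_000):
    total = 0.0
    for _ in range(trials):
        load_kwh = random.gauss(mu=900.0, sigma=150.0)          # uncertain daily load
        price = random.uniform(0.10, 0.18)                      # uncertain sale price, $/kWh
        generated = min(capacity_kw * 24 * 0.35, max(load_kwh, 0.0))  # assumed 35% capacity factor
        production_cost = 0.07 * generated                      # assumed $/kWh generation cost
        total += price * generated - production_cost
    return total / trials

for cap in (20, 40, 60, 80):
    print(cap, "kW ->", round(simulate_daily_profit(cap), 2), "$/day (mean)")
```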

  4. Groundwater-quality data in the Klamath Mountains study unit, 2010: results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Belitz, Kenneth

    2014-01-01

    Groundwater quality in the 8,806-square-mile Klamath Mountains (KLAM) study unit was investigated by the U.S. Geological Survey (USGS) from October to December 2010, as part of the California State Water Resources Control Board (SWRCB) Groundwater Ambient Monitoring and Assessment (GAMA) Program’s Priority Basin Project (PBP). The GAMA-PBP was developed in response to the California Groundwater Quality Monitoring Act of 2001 and is being conducted in collaboration with the SWRCB and Lawrence Livermore National Laboratory (LLNL). The KLAM study unit was the thirty-third study unit to be sampled as part of the GAMA-PBP. The GAMA Klamath Mountains study was designed to provide a spatially unbiased assessment of untreated-groundwater quality in the primary aquifer system and to facilitate statistically consistent comparisons of untreated-groundwater quality throughout California. The primary aquifer system is defined by the perforation intervals of wells listed in the California Department of Public Health (CDPH) database for the KLAM study unit. Groundwater quality in the primary aquifer system may differ from the quality in the shallower or deeper water-bearing zones; shallower groundwater may be more vulnerable to surficial contamination. In the KLAM study unit, groundwater samples were collected from sites in Del Norte, Siskiyou, Humboldt, Trinity, Tehama, and Shasta Counties, California. Of the 39 sites sampled, 38 were selected by using a spatially distributed, randomized grid-based method to provide statistical representation of the primary aquifer system in the study unit (grid sites), and the remaining site was non-randomized (understanding site). The groundwater samples were analyzed for basic field parameters, organic constituents (volatile organic compounds [VOCs] and pesticides and pesticide degradates), inorganic constituents (trace elements, nutrients, major and minor ions, total dissolved solids [TDS]), radon-222, gross alpha and gross beta radioactivity, and microbial indicators (total coliform and Escherichia coli [E. coli]). Isotopic tracers (stable isotopes of hydrogen and oxygen in water, isotopic ratios of dissolved strontium in water, and stable isotopes of carbon in dissolved inorganic carbon), dissolved noble gases, and age-dating tracers (tritium and carbon-14) were measured to help identify sources and ages of sampled groundwater. Quality-control samples (field blanks, replicate sample pairs, and matrix spikes) were collected at 13 percent of the sites in the KLAM study unit, and the results were used to evaluate the quality of the data from the groundwater samples. Field blank samples rarely contained detectable concentrations of any constituent, indicating that contamination from sample collection or analysis was not a significant source of bias in the data for the groundwater samples. More than 99 percent of the replicate pair samples were within acceptable limits of variability. Matrix-spike sample recoveries were within the acceptable range (70 to 130 percent) for approximately 91 percent of the compounds. This study did not evaluate the quality of water delivered to consumers. After withdrawal, groundwater typically is treated, disinfected, and (or) blended with other waters to maintain water quality. Regulatory benchmarks apply to water that is delivered to the consumer, not to untreated groundwater. 
However, to provide some context for the results, concentrations of constituents measured in the untreated groundwater were compared with regulatory and non-regulatory health-based benchmarks established by the U.S. Environmental Protection Agency (USEPA) and CDPH, and to non-health-based benchmarks established for aesthetic concerns by the CDPH. Comparisons between data collected for this study and benchmarks for drinking water are for illustrative purposes only and are not indicative of compliance or non-compliance with those benchmarks. All concentrations of organic constituents from grid sites sampled in the KLAM study unit were less than health-based benchmarks. In total, VOCs were detected in 16 of the 38 grid sites sampled (approximately 42 percent), pesticides and pesticide degradates were detected in 8 grid sites (about 21 percent), and microbial indicators were detected in 14 grid sites (approximately 37 percent). Inorganic constituents (trace elements, major and minor ions, nutrients, and uranium and other radioactive constituents) and microbial indicators were sampled for at 38 grid sites, and all concentrations were less than health-based benchmarks, with the exception of one detection of boron greater than the CDPH notification level of 1,000 micrograms per liter (μg/L). Generally, concentrations of inorganic constituents with non-health-based benchmarks (iron, manganese, chloride, and TDS) were less than the CDPH secondary maximum contaminant level (SMCL-CA). Exceptions include three detections of iron greater than the SMCL-CA of 300 μg/L, four detections of manganese greater than the SMCL-CA of 50 μg/L, one detection of chloride greater than the recommended SMCL-CA of 250 mg/L, and one detection of TDS greater than the recommended SMCL-CA of 500 mg/L.

  5. Determining and representing width of soil boundaries using electrical conductivity and MultiGrid

    NASA Astrophysics Data System (ADS)

    Greve, Mogens Humlekrog; Greve, Mette Balslev

    2004-07-01

    In classical soil mapping, map unit boundaries are considered crisp even though all experienced survey personnel are aware of the fact that soil boundaries really are transition zones of varying width. However, classification of transition zone width on site is difficult in a practical survey. The objective of this study is to present a method for determining soil boundary width and a way of representing continuous soil boundaries in GIS. A survey was performed using the non-contact conductivity meter EM38 from Geonics Inc., which measures the bulk Soil Electromagnetic Conductivity (SEC). The EM38 provides an opportunity to classify the width of transition zones in an unbiased manner. By calculating the spatial rate of change in the interpolated EM38 map across the crisp map unit delineations from classical soil mapping, a measure of transition zone width can be extracted. The map unit delineations are represented as transition zones in a GIS through a concept of multiple grid layers, a MultiGrid. Each layer corresponds to a soil type, and the values in a layer represent the percentage of that soil type in each cell. As a test, the subsoil texture was mapped at the Vindum field in Denmark using both the classical mapping method with crisp representation of the boundaries and the new map with MultiGrid and continuous boundaries. These maps were then compared to an independent reference map of subsoil texture. The improvement in the prediction of subsoil texture from using continuous boundaries instead of crisp ones was, in the case of the Vindum field, 15%.
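
    The boundary-width measure described above, the spatial rate of change of the interpolated conductivity map across a mapped boundary, can be sketched as a gradient-magnitude calculation. The synthetic map below stands in for an interpolated EM38 survey; the cell size and conductivity values are assumptions.

```python
# Sketch of the boundary-width idea: compute the spatial rate of change
# (gradient magnitude) of an interpolated conductivity map; a wide, gentle
# gradient across a mapped boundary indicates a broad transition zone.
import numpy as np

cell = 5.0                                        # assumed grid resolution, m
x = np.arange(0, 200, cell)
y = np.arange(0, 150, cell)
X, _ = np.meshgrid(x, y)
sec = 20 + 15 / (1 + np.exp(-(X - 100) / 15.0))   # smooth transition around x = 100 m

gy, gx = np.gradient(sec, cell)                   # mS/m per metre along each axis
grad_mag = np.hypot(gx, gy)
print("max rate of change:", round(float(grad_mag.max()), 3), "mS/m per m")
```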

  6. Concept of Smart Cyberspace for Smart Grid Implementation

    NASA Astrophysics Data System (ADS)

    Zhukovskiy, Y.; Malov, D.

    2018-05-01

    The concept of Smart Cyberspace for Smart Grid (SG) implementation is presented in this paper. Three classifications are proposed: a classification of electromechanical units based on the amount of data to be analysed, a classification of electromechanical units based on data processing speed, and a classification of computational network organization based on the required resources. The combination of these classifications is formalized and can be further used in the organization and planning of SG.

  7. Dynamic of small photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Mehrmann, A.; Kleinkauf, W.; Pigorsch, W.; Steeb, H.

    The results of 1.5 yr of field-testing of two photovoltaic (PV) power plants, one equipped with an electrolyzer and H2 storage, are reported. Both systems were interconnected with the grid and featured the PV module, a power conditioning unit, ac and dc load connections, and control units. The rated power of both units was 100 Wp. The system with electrolysis was governed by control laws which maximized the electrolyzer current. The tests underscored the preference for a power conditioning unit, rather than direct output to load connections. A 1 kWp system was developed in a follow-up program and will be tested in concert with electrolysis and interconnection with several grid customers. The program is geared to eventual development of larger units for utility-size applications.

  8. Quantifying Power Grid Risk from Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Homeier, N.; Wei, L. H.; Gannon, J. L.

    2012-12-01

    We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. This electric field data can be combined with a power grid model to determine GICs per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive relative risk maps by location by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.

  9. Discrepancies and Uncertainties in Bottom-up Gridded Inventories of Livestock Methane Emissions for the Contiguous United States

    NASA Astrophysics Data System (ADS)

    Randles, C. A.; Hristov, A. N.; Harper, M.; Meinen, R.; Day, R.; Lopes, J.; Ott, T.; Venkatesh, A.

    2017-12-01

    In this analysis we used a spatially explicit, bottom-up approach, based on animal inventories, feed intake, and feed intake-based emission factors, to estimate county-level enteric (cattle) and manure (cattle, swine, and poultry) livestock methane emissions for the contiguous United States. Combined enteric and manure emissions were highest for counties in California's Central Valley. Overall, this analysis yielded total livestock methane emissions (8,916 Gg/yr; lower and upper bounds of 6,423 and 11,840 Gg/yr, respectively) for 2012 that are comparable to the current USEPA estimates for 2012 (9,295 Gg/yr) and to estimates from the global gridded Emission Database for Global Atmospheric Research (EDGAR) inventory (8,728 Gg/yr), used previously in a number of top-down studies. However, the spatial distribution of emissions developed in this analysis differed significantly from that of EDGAR. As an example, methane emissions from livestock in Texas and California (highest contributors to the national total) in this study were 36% lower and 100% higher, respectively, than estimates by EDGAR. The spatial distribution of emissions in gridded inventories (e.g., EDGAR) likely strongly impacts the conclusions of top-down approaches that use them, especially in the source attribution of resulting (posterior) emissions, and hence conclusions from such studies should be interpreted with caution.
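
    The bottom-up structure described above reduces, at its simplest, to summing head counts times emission factors for each county. The sketch below uses placeholder emission factors rather than the feed-intake-based factors developed in the study.

```python
# Bottom-up sketch: county emissions as the sum over animal categories of
# head count x emission factor. The factors below are placeholders.
EF_KG_PER_HEAD_YR = {"dairy_cattle": 120.0, "beef_cattle": 60.0, "swine": 1.5}  # assumed

def county_methane_gg(inventory):
    """inventory: dict of animal category -> head count; returns Gg CH4/yr."""
    kg = sum(EF_KG_PER_HEAD_YR[a] * n for a, n in inventory.items())
    return kg / 1e6                       # kg -> Gg

counties = {
    "county_A": {"dairy_cattle": 80_000, "beef_cattle": 20_000, "swine": 5_000},
    "county_B": {"dairy_cattle": 5_000, "beef_cattle": 150_000, "swine": 300_000},
}
for name, inv in counties.items():
    print(name, round(county_methane_gg(inv), 4), "Gg CH4/yr")
```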

  10. Algebraic grid generation using tensor product B-splines. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Saunders, B. V.

    1985-01-01

    Finite difference methods are more successful if the accompanying grid has lines which are smooth and nearly orthogonal. This work describes the development of an algorithm which produces such a grid when given the boundary description. Topological considerations in structuring the grid generation mapping are discussed. The concept of the degree of a mapping and how it can be used to determine what requirements are necessary if a mapping is to produce a suitable grid is examined. The grid generation algorithm uses a mapping composed of bicubic B-splines. Boundary coefficients are chosen so that the splines produce Schoenberg's variation diminishing spline approximation to the boundary. Interior coefficients are initially chosen to give a variation diminishing approximation to the transfinite bilinear interpolant of the function mapping the boundary of the unit square onto the boundary grid. The practicality of optimizing the grid by minimizing a functional involving the Jacobian of the grid generation mapping at each interior grid point and the dot product of vectors tangent to the grid lines is investigated. Grids generated by using the algorithm are presented.
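
    The transfinite bilinear interpolant mentioned above blends the four boundary curves of the unit square (minus a bilinear corner correction) into an interior grid. The sketch below implements that classical formula for an annular-sector example; the thesis then replaces it with a variation-diminishing B-spline approximation.

```python
# Classical transfinite (Coons) bilinear interpolation of four boundary curves
# into an interior grid; a sketch of the interpolant referenced above.
import numpy as np

def transfinite_grid(bottom, top, left, right, nu, nv):
    """Each boundary is a function of a parameter in [0, 1] returning (x, y)."""
    u = np.linspace(0.0, 1.0, nu)
    v = np.linspace(0.0, 1.0, nv)
    grid = np.zeros((nv, nu, 2))
    c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
    c01, c11 = np.array(top(0.0)), np.array(top(1.0))
    for j, vv in enumerate(v):
        for i, uu in enumerate(u):
            edge = (1 - vv) * np.array(bottom(uu)) + vv * np.array(top(uu)) \
                 + (1 - uu) * np.array(left(vv)) + uu * np.array(right(vv))
            corner = (1 - uu) * (1 - vv) * c00 + uu * (1 - vv) * c10 \
                   + (1 - uu) * vv * c01 + uu * vv * c11
            grid[j, i] = edge - corner
    return grid

# Example: map the unit square onto an annular sector of 1 radian
bottom = lambda u: (1.0 + u, 0.0)                                   # radial edge at angle 0
top = lambda u: ((1.0 + u) * np.cos(1.0), (1.0 + u) * np.sin(1.0))  # radial edge at angle 1 rad
left = lambda v: (np.cos(v), np.sin(v))                             # inner arc
right = lambda v: (2 * np.cos(v), 2 * np.sin(v))                    # outer arc
print(transfinite_grid(bottom, top, left, right, 5, 5).shape)
```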

  11. Modeling and Economic Analysis of Power Grid Operations in a Water Constrained System

    NASA Astrophysics Data System (ADS)

    Zhou, Z.; Xia, Y.; Veselka, T.; Yan, E.; Betrie, G.; Qiu, F.

    2016-12-01

    The power sector is the largest water user in the United States. Depending on the cooling technology employed at a facility, steam-electric power stations withdraw and consume large amounts of water for each megawatt hour of electricity generated. The amounts depend on many factors, including ambient air and water temperatures, cooling technology, etc. Water demands from most economic sectors are typically highest during summertime. For most systems, this coincides with peak electricity demand and consequently a high demand for thermal power plant cooling water. Supplies, however, are sometimes limited due to seasonal precipitation fluctuations, including sporadic droughts that lead to water scarcity. When this occurs, there is an impact on both unit commitments and the real-time dispatch. In this work, we model the cooling efficiency of several different types of thermal power generation technologies as a function of power output level and daily temperature profiles. Unit-specific relationships are then integrated into a power grid operational model that minimizes total grid production cost while reliably meeting hourly loads. Grid operation is subject to power plant physical constraints, transmission limitations, water availability, and environmental constraints such as power plant water exit temperature limits. The model is applied to a standard IEEE-118 bus system under various water availability scenarios. Results show that water availability has a significant impact on power grid economics.
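
    A toy version of the dispatch problem described above can be posed as a linear program: minimize generation cost subject to the power balance, unit limits, and a cap on total cooling-water withdrawal. The costs, water-use rates, and limits below are invented for illustration; the study's operational model is far more detailed.

```python
# Toy economic dispatch with a water-availability constraint. All numbers are
# invented; the point is only the structure of the optimization.
from scipy.optimize import linprog

cost = [20.0, 35.0, 50.0]            # $/MWh for three thermal units
water = [2.0, 1.2, 0.4]              # m^3 of cooling water per MWh
p_max = [400.0, 300.0, 300.0]        # MW
load = 650.0                         # MW to be served
water_cap = 900.0                    # m^3/h available this hour

res = linprog(
    c=cost,
    A_ub=[water],            b_ub=[water_cap],       # water withdrawal limit
    A_eq=[[1.0, 1.0, 1.0]],  b_eq=[load],            # power balance
    bounds=[(0.0, pm) for pm in p_max],
)
print(res.x, round(res.fun, 1))      # dispatch (MW) and total cost ($/h)
```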

  12. Regional and seasonal estimates of fractional storm coverage based on station precipitation observations

    NASA Technical Reports Server (NTRS)

    Gong, Gavin; Entekhabi, Dara; Salvucci, Guido D.

    1994-01-01

    Simulated climates using numerical atmospheric general circulation models (GCMs) have been shown to be highly sensitive to the fraction of GCM grid area assumed to be wetted during rain events. The model hydrologic cycle and land-surface water and energy balance are influenced by the parameter bar-kappa, which is the dimensionless fractional wetted area for GCM grids. Hourly precipitation records for over 1700 precipitation stations within the contiguous United States are used to obtain observation-based estimates of fractional wetting that exhibit regional and seasonal variations. The spatial parameter bar-kappa is estimated from the temporal raingauge data using conditional probability relations. Monthly bar-kappa values are estimated for rectangular grid areas over the contiguous United States as defined by the Goddard Institute for Space Studies 4 deg x 5 deg GCM. A bias in the estimates is evident due to the unavoidably sparse raingauge network density, which causes some storms to go undetected by the network. This bias is corrected by deriving the probability of a storm escaping detection by the network. A Monte Carlo simulation study is also conducted that consists of synthetically generated storm arrivals over an artificial grid area. It is used to confirm the bar-kappa estimation procedure and to test the nature of the bias and its correction. These monthly fractional wetting estimates, based on the analysis of station precipitation data, provide an observational basis for assigning the influential parameter bar-kappa in GCM land-surface hydrology parameterizations.
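
    The conditional-probability estimate of fractional wetting described above can be sketched as: over all hours in which at least one gauge in the grid box reports rain, compute the mean fraction of gauges reporting rain. The boolean record below is synthetic; the storm frequency and wetted fraction are assumptions.

```python
# Synthetic illustration of the fractional-wetting estimate: mean wetted
# fraction of stations, conditioned on the network detecting rain.
import numpy as np

rng = np.random.default_rng(11)
n_hours, n_stations = 5000, 25
storm_hours = rng.random(n_hours) < 0.10                 # hours with a storm over the box
local_hit = rng.random((n_hours, n_stations)) < 0.35     # which stations the storm wets
wet = storm_hours[:, None] & local_hit                   # station-hour wet/dry record

any_wet = wet.any(axis=1)                                # hours when the network detects rain
kappa_bar = wet[any_wet].mean()                          # mean wetted fraction on those hours
print(round(float(kappa_bar), 3))                        # close to the 0.35 used to simulate
```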

  13. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troia, Matthew J.; McManamay, Ryan A.

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percentage of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.

  14. Filling in the GAPS: evaluating completeness and coverage of open-access biodiversity databases in the United States

    DOE PAGES

    Troia, Matthew J.; McManamay, Ryan A.

    2016-06-12

    Primary biodiversity data constitute observations of particular species at given points in time and space. Open-access electronic databases provide unprecedented access to these data, but their usefulness in characterizing species distributions and patterns in biodiversity depends on how complete species inventories are at a given survey location and how uniformly distributed survey locations are along dimensions of time, space, and environment. Our aim was to compare completeness and coverage among three open-access databases representing ten taxonomic groups (amphibians, birds, freshwater bivalves, crayfish, freshwater fish, fungi, insects, mammals, plants, and reptiles) in the contiguous United States. We compiled occurrence records from the Global Biodiversity Information Facility (GBIF), the North American Breeding Bird Survey (BBS), and federally administered fish surveys (FFS). In this study, we aggregated occurrence records by 0.1° × 0.1° grid cells and computed three completeness metrics to classify each grid cell as well-surveyed or not. Next, we compared frequency distributions of surveyed grid cells to background environmental conditions in a GIS and performed Kolmogorov–Smirnov tests to quantify coverage through time, along two spatial gradients, and along eight environmental gradients. The three databases contributed >13.6 million reliable occurrence records distributed among >190,000 grid cells. The percentage of well-surveyed grid cells was substantially lower for GBIF (5.2%) than for systematic surveys (BBS and FFS; 82.5%). Still, the large number of GBIF occurrence records produced at least 250 well-surveyed grid cells for six of nine taxonomic groups. Coverages of systematic surveys were less biased across spatial and environmental dimensions but were more biased in temporal coverage compared to GBIF data. GBIF coverages also varied among taxonomic groups, consistent with commonly recognized geographic, environmental, and institutional sampling biases. Lastly, this comprehensive assessment of biodiversity data across the contiguous United States provides a prioritization scheme to fill in the gaps by contributing existing occurrence records to the public domain and planning future surveys.
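
    The grid-cell aggregation step described above can be sketched as binning occurrence records into 0.1-degree cells and applying a completeness test per cell. The sketch below uses a simple proxy (record and species counts with assumed thresholds) rather than the three formal completeness metrics used in the study.

```python
# Sketch of the aggregation step: bin occurrence records into 0.1-degree cells
# and flag cells as "well-surveyed" with a simple proxy. Thresholds are assumed.
from collections import defaultdict

def grid_cell(lon, lat, size=0.1):
    return (round(lon // size * size, 1), round(lat // size * size, 1))

def summarize(records, min_records=50, min_species=10):
    """records: iterable of (species, lon, lat); returns {cell: is_well_surveyed}."""
    cells = defaultdict(lambda: {"n": 0, "species": set()})
    for species, lon, lat in records:
        c = cells[grid_cell(lon, lat)]
        c["n"] += 1
        c["species"].add(species)
    return {cell: (v["n"] >= min_records and len(v["species"]) >= min_species)
            for cell, v in cells.items()}

records = [("Micropterus salmoides", -84.31, 35.96), ("Lepomis macrochirus", -84.35, 35.92)]
print(summarize(records))
```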

  15. Life cycle assessment study on polishing units for use of treated wastewater in agricultural reuse.

    PubMed

    Büyükkamacı, Nurdan; Karaca, Gökçe

    2017-12-01

    A life cycle assessment (LCA) approach was used to assess the environmental impacts of several polishing units for reuse of wastewater treatment plant effluents in agricultural irrigation. The following alternative polishing units were assessed: (1) microfiltration and ultraviolet (UV) disinfection, (2) cartridge filter and ultrafiltration (UF), and (3) UV disinfection alone. Two different energy sources, electric grid mix and natural gas, were considered in assessing their environmental impacts. Afterwards, the effluent of each case was evaluated against the criteria required for irrigation of sensitive crops specified in Turkish regulations. Evaluation of environmental impacts was carried out with GaBi 6.1 LCA software. The overall conclusion of this study is that higher electricity consumption causes higher environmental effects. The results of the study revealed that the cartridge filter and UF combination powered by the electric grid mix has the largest impact on the environment for almost all impact categories. In general, the most environmentally friendly solution is UV disinfection. The study revealed the environmental impacts of the three alternatives, drawing attention to the importance of choosing the most appropriate polishing processes and energy sources for reuse applications.

  16. An ergonomic study on the navigation structure and information units of websites with multimedia content. A case study of the Xbox 360 promotional website.

    PubMed

    Ariel, Eduardo; de Moraes, Anamaria

    2012-01-01

    This paper presents an ergonomic study on the navigation structures and information units of entertainment sites with multimedia content. This research is a case study on the XBOX 360 promotional website. It analyzes the presentation of the content on a grid that simulates the spatial displacement of the screen's elements and evaluates the interaction that the page allows for, from the users' point of view.

  17. Consolidating Data of Global Urban Populations: a Comparative Approach

    NASA Astrophysics Data System (ADS)

    Blankespoor, B.; Khan, A.; Selod, H.

    2017-12-01

    Global data on city populations are essential for the study of urbanization, city growth and the spatial distribution of human settlements. Such data are either gathered by combining official estimates of urban populations from across countries or extracted from gridded population models that combine these estimates with geospatial data. These data sources provide varying estimates of urban populations and each approach has its advantages and limitations. In particular, official figures suffer from a lack of consistency in defining urban units (across both space and time) and often provide data for jurisdictions rather than the functionally meaningful urban area. On the other hand, gridded population models require a user-imposed definition to identify urban areas and are constrained by the modelling techniques and input data employed. To address these drawbacks, we combine these approaches by consolidating information from three established sources: (i) the Citypopulation.de (Brinkhoff, 2016); (ii) the World Urban Prospects data (United Nations, 2014); and (iii) the Global Human Settlements population grid (GHS-POP) (EC - JRC, 2015). We create urban footprints with GHS-POP and spatially merge georeferenced city points from both UN WUP and Citypopulation.de with these urban footprints to identify city points that belong to a single agglomeration. We create a consolidated dataset by combining population data from the UN WUP and Citypopulation.de. The flexible framework outlined can incorporate information from alternative inputs to identify urban clusters, e.g. by using night-time lights, built-up area or alternative gridded population models (e.g., WorldPop or Landscan), and the parameters employed (e.g. density thresholds for urban footprints) may also be adjusted, e.g., as a function of city-specific characteristics. Our consolidated dataset provides a wider and more accurate coverage of city populations to support studies of urbanization. We apply the data to re-examine Zipf's Law. References: Brinkhoff, Thomas. 2016. City Population. EC - JRC; Columbia University, CIESIN. 2015. GHS population grid, derived from GPW4, multi-temporal (1975, 1990, 2000, 2015). United Nations, Department of Economic and Social Affairs, Population Division. 2014. World Urbanization Prospects: 2014 Revision.

  18. A Probabilistic Risk Mitigation Model for Cyber-Attacks to PMU Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousavian, Seyedamirabbas; Valenzuela, Jorge; Wang, Jianhui

    The power grid is becoming more dependent on information and communication technologies. Complex networks of advanced sensors such as phasor measurement units (PMUs) are used to collect real time data to improve the observability of the power system. Recent studies have shown that the power grid has significant cyber vulnerabilities which could increase when PMUs are used extensively. Therefore, recognizing and responding to vulnerabilities are critical to the security of the power grid. This paper proposes a risk mitigation model for optimal response to cyber-attacks to PMU networks. We model the optimal response action as a mixed integer linear programming (MILP) problem to prevent propagation of the cyber-attacks and maintain the observability of the power system.

  19. Analysis of the World Experience of Smart Grid Deployment: Economic Effectiveness Issues

    NASA Astrophysics Data System (ADS)

    Ratner, S. V.; Nizhegorodtsev, R. M.

    2018-06-01

    Despite the positive dynamics in the growth of RES-based power production in the electric power systems of many countries, the further development of commercially mature wind and solar generation technologies is often constrained by the existing grid infrastructure and conventional energy supply practices. The integration of large wind and solar power plants into a single power grid and the development of microgeneration require the widespread introduction of a new smart grid technology cluster (smart power grids), whose technical advantages over conventional grids have been fairly well studied, while issues of their economic effectiveness remain open. Estimating and forecasting the potential economic effects of introducing innovative technologies in the power sector, at the stage preceding commercial deployment, is a methodologically difficult task that requires knowledge from different sciences. This paper analyzes smart grid projects implemented in Europe and the United States. Interval estimates are obtained for their basic economic parameters. It was revealed that the majority of implemented smart grid projects are not yet commercially effective, since their positive externalities are usually not captured on the revenue side due to the lack of universal methods for monetizing public benefits. The results of the research can be used in modernization and development planning for the existing grid infrastructure both at the federal level and at the level of certain regions and territories.

  20. Non-isolated high gain DC-DC converter for smart grid- A review

    NASA Astrophysics Data System (ADS)

    Divya Navamani, J.; Vijayakumar, K.; Lavanya, A.; Mano Raj, A. Jason

    2018-04-01

    Smart grids are emerging as a promising alternative to the conventional electric grid. Power conditioning units and control over the distribution of power are essential features of a smart grid system. In this paper, we review several non-isolated high-gain topologies derived from the boost converter for supplying the required voltage to the grid-tie inverter from renewable energy sources. A steady-state analysis of all the topologies is carried out to compare their performance. Simulations are performed in the nL5 simulator, and the results are compared and validated against the theoretical results. This paper is intended as a guide for researchers choosing the best topology for smart grid applications.

  1. SU-E-T-374: Evaluation and Verification of Dose Calculation Accuracy with Different Dose Grid Sizes for Intracranial Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C; Schultheiss, T

    Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm{sup 3}. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB algorithm) were used to calculate volume dose distribution with dose grid size ranging from 1 mm to 3 mm with 0.5 mm step size. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution with different dose grid sizes. Results: The dose to the PTV, in terms of the mean dose, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to under-estimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.

  2. Nonlinear adaptive control of grid-connected three-phase inverters for renewable energy applications

    NASA Astrophysics Data System (ADS)

    Mahdian-Dehkordi, N.; Namvar, M.; Karimi, H.; Piya, P.; Karimi-Ghartemani, M.

    2017-01-01

    Distributed generation (DG) units are often interfaced to the main grid using power electronic converters, including voltage-source converters (VSCs). A VSC offers dc/ac power conversion, high controllability, and fast dynamic response. Because of the nonlinearities, uncertainties, and parameter variations inherent in a grid-connected renewable DG system, conventional linear control methods cannot completely and efficiently address all control objectives. In this paper, a nonlinear adaptive control scheme based on an adaptive backstepping strategy is presented to control the operation of a grid-connected renewable DG unit. Compared to the popular vector control technique, the proposed controller offers smoother transient responses and a lower level of current distortion. The Lyapunov approach is used to establish global asymptotic stability of the proposed control system. A linearisation technique is employed to develop guidelines for tuning the controller parameters. Extensive time-domain digital simulations are performed and presented to verify the performance of the proposed controller when employed in a VSC to control the operation of a two-stage DG unit and also that of a single-stage solar photovoltaic system. Desirable and superior performance of the proposed controller is observed.

  3. Adaptive Energy Forecasting and Information Diffusion for Smart Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Agarwal, Vaibhav; Aman, Saim

    2012-05-16

    Smart Power Grids exemplify an emerging class of Cyber Physical Applications that exhibit dynamic, distributed and data intensive (D3) characteristics along with an always-on paradigm to support operational needs. Smart Grids are an outcome of instrumentation, such as Phasor Measurement Units and Smart Power Meters, that is being deployed across the transmission and distribution network of electric grids. These sensors provide utilities with improved situation awareness on near-realtime electricity usage by individual consumers, and the power quality and stability of the transmission network.

  4. Design and Parameter Study of Integrated Microfluidic Platform for CTC Isolation and Enquiry; A Numerical Approach.

    PubMed

    Shamloo, Amir; Ahmad, Saba; Momeni, Maede

    2018-06-18

    Because cancer is the second leading cause of mortality across the globe, there is a persistent effort to establish new cancer medications and therapies. Any accomplishment in treating cancers requires accurate identification systems that enable early diagnosis. Recent studies indicate the potential of circulating tumor cells (CTCs) in cancer prognosis as well as therapy monitoring. The chief shortcoming of CTCs is that they are exceedingly rare at clinically relevant concentrations. Here, we simulated a microfluidic construct devised for immunomagnetic separation of the particles of interest from the background cells. This separation unit is integrated with a mixer subunit intended to mix the CTC-enriched stream with lysis buffer to extract the biological material of the cells. Modifications to the mixing geometry were proposed to improve the efficacy of this functional unit. The forces involved were evaluated, and some were neglected on order-of-magnitude grounds. The position of the magnet was also optimized through a parametric study. For the mixer unit, the effect of applied voltage and frequency on the mixing index was studied to find the values that provide better mixing. These studies were performed on the isolated units, without considering the effect of each functional unit on the other. As a final step, an integrated microfluidic platform composed of both functional subunits was simulated. Grid studies were also performed to ensure that the results are independent of the grid. The studies carried out on the construct reveal its potential for diagnostic applications.

  5. Evolving Distributed Generation Support Mechanisms: Case Studies from United States, Germany, United Kingdom, and Australia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lowder, Travis; Zhou, Ella; Tian, Tian

    This report expands on a previous National Renewable Energy Laboratory (NREL) technical report (Lowder et al. 2015) that focused on the United States' unique approach to distributed generation photovoltaics (DGPV) support policies and business models. While the focus of that report was largely historical (i.e., detailing the policies and market developments that led to the growth of DGPV in the United States), this report looks forward, narrating recent changes to laws and regulations as well as the ongoing dialogues over how to incorporate distributed generation (DG) resources onto the electric grid. This report also broadens the scope of Lowder et al. (2015) to include additional countries and technologies. DGPV and storage are the principal technologies under consideration (owing to market readiness and deployment volumes), but the report also contemplates any generation resource that is (1) on the customer side of the meter, (2) used to, at least partly, offset a host's energy consumption, and/or (3) potentially available to provide grid support (e.g., through peak shaving and load shifting, ancillary services, and other means).

  6. An optimized top contact design for solar cell concentrators

    NASA Technical Reports Server (NTRS)

    Desalvo, Gregory C.; Barnett, Allen M.

    1985-01-01

    A new grid optimization scheme is developed for point focus solar cell concentrators which employs a separated grid and busbar concept. Ideally, grid lines act as the primary current collectors and receive all of the current from the semiconductor region. Busbars are the secondary collectors which pick up current from the grids and carry it out of the active region of the solar cell. This separation of functions leads to a multithickness metallization design, where the busbars are made larger in cross section than the grids. This enables the busbars to carry more current per unit area of shading, which is advantageous under high solar concentration where large current densities are generated. Optimized grid patterns using this multilayer concept can provide a 1.6 to 20 percent increase in output power efficiency over optimized single thickness grids.

  7. Low-cost wireless voltage & current grid monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, Jacqueline

    This report describes the development and demonstration of a novel low-cost wireless power distribution line monitoring system. This system measures voltage, current, and relative phase on power lines of up to 35 kV-class. The line units operate without any batteries, and without harvesting energy from the power line. Thus, data on grid condition is provided even in outage conditions, when line current is zero. This enhances worker safety by detecting the presence of voltage and current that may appear from stray sources on nominally isolated lines. Availability of low-cost power line monitoring systems will enable widespread monitoring of the distribution grid. Real-time data on local grid operating conditions will enable grid operators to optimize grid operation, implement grid automation, and understand the impact of solar and other distributed sources on grid stability. The latter will enable utilities to implement energy storage and control systems to enable greater penetration of solar into the grid.

  8. Smart Grid Adoption Likeliness Framework: Comparing Idaho and National Residential Consumers' Perceptions

    NASA Astrophysics Data System (ADS)

    Baiya, Evanson G.

    New energy technologies that provide real-time visibility of the electricity grid's performance, along with the ability to address unusual events in the grid and allow consumers to manage their energy use, are being developed in the United States. Primary drivers for the new technologies include the growing energy demand, tightening environmental regulations, aging electricity infrastructure, and rising consumer demand to become more involved in managing individual energy usage. In the literature and in practice, it is unclear if, and to what extent, residential consumers will adopt smart grid technologies. The purpose of this quantitative study was to examine the relationships between demographic characteristics, perceptions, and the likelihood of adopting smart grid technologies among residential energy consumers. The results of a 31-item survey were analyzed for differences within the Idaho consumers and compared against national consumers. Analysis of variance was used to examine possible differences between the dependent variable of likeliness to adopt smart grid technologies and the independent variables of age, gender, residential ownership, and residential location. No differences were found among Idaho consumers in their likeliness to adopt smart grid technologies. An independent sample t-test was used to examine possible differences between the two groups of Idaho consumers and national consumers in their level of interest in receiving detailed feedback information on energy usage, the added convenience of the smart grid, renewable energy, the willingness to pay for infrastructure costs, and the likeliness to adopt smart grid technologies. The level of interest in receiving detailed feedback information on energy usage was significantly different between the two groups (t = 3.11, p = .0023), while the other variables were similar. The study contributes to technology adoption research regarding specific consumer perceptions and provides a framework that estimates the likeliness of adopting smart grid technologies by residential consumers. The study findings could assist public utility managers and technology adoption researchers as they develop strategies to enable wide-scale adoption of smart grid technologies as a solution to the energy problem. Future research should be conducted among commercial and industrial energy consumers to further validate the findings and conclusions of this research.
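    The reported Idaho-versus-national comparison is a standard independent-sample t-test; a minimal sketch with placeholder survey scores (not the study's data) is shown below.

```python
# Minimal sketch of an independent-sample t-test of the kind reported above,
# comparing two groups' interest scores. The arrays are placeholders only.
import numpy as np
from scipy import stats

idaho = np.array([4, 5, 3, 4, 5, 4, 3, 5, 4, 4])
national = np.array([3, 3, 4, 2, 3, 4, 3, 2, 3, 3])

t_stat, p_value = stats.ttest_ind(idaho, national, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```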

  9. A Future-Based Risk Assessment for the Survivability of Long Range Strike Systems

    DTIC Science & Technology

    2007-03-01

    The National Aeronautics and Space Administration (NASA) investigated alternative futures to help generate a viable science strategy for future aerospace needs. [Extraction fragment: a scenario table pairing world views with power-grid attributes, including futures labeled "Digital Cacophony" and "Star Trek"; a note that the United States has become the "United Kingdom of the Twenty-first Century"; and a reference to the 1997 NASA/National Research Council study.]

  10. Digital Systems Validation Handbook. Volume 2

    DTIC Science & Technology

    1989-02-01

    [Extraction fragment: grounding and return-path options from the handbook (structure for power, fault, and "discrete" circuits; a grid of wires, solid sheet, or foil; a wire from circuit to grounding block, case, or structure; shields), followed by part of the acronym list (TV, TWTD Thin Wire Time Domain, TX Transmit, U.K., U.S., UART Universal Asynchronous...).]

  11. Emissions & Generation Resource Integrated Database (eGRID), eGRID2012

    EPA Pesticide Factsheets

    The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions for nitrogen oxides, sulfur dioxide, carbon dioxide, methane, and nitrous oxide; emissions rates; net generation; resource mix; and many other attributes. eGRID2012 Version 1.0 is the eighth edition of eGRID, which contains the complete release of year 2009 data, as well as year 2007, 2005, and 2004 data. For year 2009 data, all the data are contained in a single Microsoft Excel workbook, which contains boiler, generator, plant, state, power control area, eGRID subregion, NERC region, U.S. total and grid gross loss factor tabs. Full documentation, summary data, eGRID subregion and NERC region representational maps, and GHG emission factors are also released in this edition. The fourth edition of eGRID, eGRID2002 Version 2.01, containing year 1996 through 2000 data is located on the eGRID Archive page (http://www.epa.gov/cleanenergy/energy-resources/egrid/archive.html). The current edition of eGRID and the archived edition of eGRID contain the following years of data: 1996 - 2000, 2004, 2005, and 2007. eGRID has no other years of data.

  12. Greening the Grid: Solar and Wind Grid Integration Study for the Luzon-Visayas System of the Philippines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrows, Clayton P.; Katz, Jessica R.; Cochran, Jaquelin M.

    The Republic of the Philippines is home to abundant solar, wind, and other renewable energy (RE) resources that contribute to the national government's vision to ensure sustainable, secure, sufficient, accessible, and affordable energy. Because solar and wind resources are variable and uncertain, significant generation from these resources necessitates an evolution in power system planning and operation. To support Philippine power sector planners in evaluating the impacts and opportunities associated with achieving high levels of variable RE penetration, the Department of Energy of the Philippines (DOE) and the United States Agency for International Development (USAID) have spearheaded this study along with a group of modeling representatives from across the Philippine electricity industry, which seeks to characterize the operational impacts of reaching high solar and wind targets in the Philippine power system, with a specific focus on the integrated Luzon-Visayas grids.

  13. C library for topological study of the electronic charge density.

    PubMed

    Vega, David; Aray, Yosslen; Rodríguez, Jesús

    2012-12-05

    The topological study of the electronic charge density is useful to obtain information about the kinds of bonds (ionic or covalent) and the atom charges in a molecule or crystal. For this study, it is necessary to calculate, at every space point, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, the gradient vector, the Hessian matrix and the Laplacian formulas were obtained for every space point. More complex functions such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths) were also programmed. Because, in some crystals, the unit cell has angles different from 90°, the library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle grid-containing files (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each one of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) on a three-dimensional (3D) grid. The library can be adapted to perform the topological study on any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
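    As a rough Python analogue of the grid-based derivative machinery described above (the actual library uses Lagrange interpolation in C), finite differences on a regular grid can produce the gradient, Hessian, and Laplacian at every point:

```python
# Illustrative sketch only: np.gradient uses finite differences rather than the
# library's Lagrange interpolation, and the density below is synthetic.
import numpy as np

h = 0.1                                         # grid spacing
x = y = z = np.arange(40) * h
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
rho = np.exp(-((X - 2) ** 2 + (Y - 2) ** 2 + (Z - 2) ** 2))   # synthetic "density"

# Gradient components over the whole grid.
gx, gy, gz = np.gradient(rho, h, h, h)

# Hessian: second derivatives of each gradient component.
hessian = np.empty((3, 3) + rho.shape)
for i, g in enumerate((gx, gy, gz)):
    hessian[i] = np.gradient(g, h, h, h)

laplacian = hessian[0, 0] + hessian[1, 1] + hessian[2, 2]
print("Laplacian at grid centre:", laplacian[20, 20, 20])
```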

  14. A simplified analysis of the multigrid V-cycle as a fast elliptic solver

    NASA Technical Reports Server (NTRS)

    Decker, Naomi H.; Taasan, Shlomo

    1988-01-01

    For special model problems, Fourier analysis gives exact convergence rates for the two-grid multigrid cycle and, for more general problems, provides estimates of the two-grid convergence rates via local mode analysis. A method is presented for obtaining multigrid convergence rate estimates for cycles involving more than two grids (using essentially the same analysis as for the two-grid cycle). For the simple case of the V-cycle used as a fast Laplace solver on the unit square, the k-grid convergence rate bounds obtained by this method are sharper than the bounds predicted by the variational theory. Both theoretical justification and experimental evidence are presented.
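    For readers unfamiliar with the cycle being analyzed, a generic two-grid cycle for the 1D Poisson problem on the unit interval is sketched below; it illustrates the smoothing, restriction, coarse-grid correction, and prolongation steps but is not the paper's Fourier analysis.

```python
# Two-grid cycle for -u'' = f on [0, 1] with zero boundary values,
# using weighted Jacobi smoothing. Purely illustrative parameters.
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1])
        u = u_new
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                      # pre-smoothing
    r = residual(u, f, h)
    r_c = r[::2].copy()                      # restriction by injection
    e_c = jacobi(np.zeros_like(r_c), r_c, 2 * h, sweeps=50)  # approximate coarse solve
    e = np.zeros_like(u)
    e[::2] = e_c                             # prolongation: copy coarse points...
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])     # ...and interpolate in between
    return jacobi(u + e, f, h)               # post-smoothing

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)             # exact solution is sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```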

  15. Grid-based precision aim system and method for disrupting suspect objects

    DOEpatents

    Gladwell, Thomas Scott; Garretson, Justin; Hobart, Clinton G.; Monda, Mark J.

    2014-06-10

    A system and method for disrupting at least one component of a suspect object is provided. The system has a source for passing radiation through the suspect object, a grid board positionable adjacent the suspect object (the grid board having a plurality of grid areas, the radiation from the source passing through the grid board), a screen for receiving the radiation passing through the suspect object and generating at least one image, a weapon for deploying a discharge, and a targeting unit for displaying the image of the suspect object and aiming the weapon according to a disruption point on the displayed image and deploying the discharge into the suspect object to disable the suspect object.

  16. Global Population Distribution (1990),Terrestrial Area and Country Name Information on a One by One Degree Grid Cell Basis

    DOE Data Explorer

    Li, Yi-Fan [Canadian Global Emissions Inventory Centre, Downsview, Ontario (Canada); Brenkert, A. L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    1996-01-01

    This data base contains gridded (one degree by one degree) information on the world-wide distribution of the population for 1990 and country-specific information on the percentage of the country's population present in each grid cell (Li, 1996a). Secondly, the data base contains the percentage of a country's total area in a grid cell and the country's percentage of the grid cell that is terrestrial (Li, 1996b). Li (1996b) also developed an indicator signifying how many countries are represented in a grid cell and if a grid cell is part of the sea; this indicator is only relevant for the land, countries, and sea-partitioning information of the grid cell. Thirdly, the data base includes the latitude and longitude coordinates of each grid cell; a grid code number, which is a translation of the latitude/longitude value and is used in the Global Emission Inventory Activity (GEIA) data bases; the country or region's name; and the United Nations three-digit country code that represents that name.
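    A hypothetical illustration of how a latitude/longitude pair maps to a one-degree grid cell and a single integer code; the actual GEIA grid-code convention may differ, so the encoding below is an assumption.

```python
# Hypothetical 1-degree by 1-degree global grid indexing (not the GEIA convention).
def grid_index(lat, lon):
    """Return (row, col) of the 1x1 degree cell containing (lat, lon)."""
    row = int(lat + 90.0)           # 0..179, south to north
    col = int(lon + 180.0)          # 0..359, west to east
    return min(row, 179), min(col, 359)

def grid_code(lat, lon):
    """Single integer code: row * 1000 + col (illustrative only)."""
    row, col = grid_index(lat, lon)
    return row * 1000 + col

print(grid_code(43.7, -79.4))       # a cell covering Toronto
```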

  17. Feasibility analysis of a smart grid photovoltaics system for the subarctic rural region in Alaska

    NASA Astrophysics Data System (ADS)

    Yao, Lei

    A smart grid photovoltaics system was developed to demonstrate that such a system is feasible for a similar off-grid rural community in the subarctic region of Alaska. A system generation algorithm and a system business model were developed to determine feasibility. Based on forecasts by the PV F-Chart software, a 70° tilt angle in winter and a 34° tilt angle in summer were determined to be the best angles for electrical output. The proposed system's electricity unit cost was calculated at 32.3 cents/kWh, which is cheaper than the current unsubsidized electricity price (46.8 cents/kWh) in off-grid rural communities. Given 46.8 cents/kWh as the electricity unit price, the system provider can break even when 17.3 percent of the total electricity revenue from power generated by the proposed system is charged. Given these results, the system can be economically feasible over the life-cycle period. With further incentives, the system may have a competitive advantage.
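    A back-of-the-envelope sketch of how an electricity unit cost of this kind is typically computed (discounted life-cycle cost divided by discounted life-cycle output); the capital, O&M, and output figures below are placeholders, not the study's inputs.

```python
# Levelized unit-cost sketch with made-up inputs; compares against the quoted
# 46.8 cents/kWh unsubsidized grid price. Not the study's business model.
def levelized_cost_cents_per_kwh(capital_usd, annual_om_usd, annual_kwh,
                                 lifetime_years, discount_rate):
    """Discounted life-cycle cost divided by discounted life-cycle output."""
    cost = capital_usd
    energy = 0.0
    for year in range(1, lifetime_years + 1):
        d = (1 + discount_rate) ** year
        cost += annual_om_usd / d
        energy += annual_kwh / d
    return 100.0 * cost / energy

unit_cost = levelized_cost_cents_per_kwh(
    capital_usd=250_000, annual_om_usd=4_000, annual_kwh=60_000,
    lifetime_years=25, discount_rate=0.05)
print(f"unit cost: {unit_cost:.1f} cents/kWh vs. 46.8 cents/kWh grid price")
```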

  18. Status and understanding of groundwater quality in the northern San Joaquin Basin, 2005

    USGS Publications Warehouse

    Bennett, George L.; Fram, Miranda S.; Belitz, Kenneth; Jurgens, Bryant C.

    2010-01-01

    Groundwater quality in the 2,079 square mile Northern San Joaquin Basin (Northern San Joaquin) study unit was investigated from December 2004 through February 2005 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001 that was passed by the State of California and is being conducted by the California State Water Resources Control Board in collaboration with the U.S. Geological Survey and the Lawrence Livermore National Laboratory. The Northern San Joaquin study unit was the third study unit to be designed and sampled as part of the Priority Basin Project. Results of the study provide a spatially unbiased assessment of the quality of raw (untreated) groundwater, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 61 wells in parts of Alameda, Amador, Calaveras, Contra Costa, San Joaquin, and Stanislaus Counties; 51 of the wells were selected using a spatially distributed, randomized grid-based approach to provide statistical representation of the study area (grid wells), and 10 of the wells were sampled to increase spatial density and provide additional information for the evaluation of water chemistry in the study unit (understanding/flowpath wells). The primary aquifer systems (hereinafter, primary aquifers) assessed in this study are defined by the depth intervals of the wells in the California Department of Public Health database for each study unit. The quality of groundwater in shallow or deep water-bearing zones may differ from quality of groundwater in the primary aquifers; shallow groundwater may be more vulnerable to contamination from the surface. Two types of assessments were made: (1) status, assessment of the current quality of the groundwater resource; and (2) understanding, identification of the natural and human factors affecting groundwater quality. Relative-concentrations (sample concentrations divided by benchmark concentrations) were used for evaluating groundwater quality for those constituents that have Federal or California regulatory or non-regulatory benchmarks for drinking-water quality. Benchmarks used in this study were either health-based (regulatory and non-regulatory) or aesthetic-based (non-regulatory). For inorganic constituents, relative-concentrations were classified as high (equal to or greater than 1.0), indicating relative-concentrations greater than benchmarks; moderate (equal to or greater than 0.5, and less than 1.0); or, low (less than 0.5). For organic and special-interest constituents [1,2,3-trichloropropane (1,2,3-TCP), N-nitrosodimethylamine (NDMA), and perchlorate], relative-concentrations were classified as high (equal to or greater than 1.0); moderate (equal to or greater than 0.1 and less than 1.0); or, low (less than 0.1). Aquifer-scale proportion was used as the primary metric in the status assessment for groundwater quality. High aquifer-scale proportion is defined as the percentage of the primary aquifer with relative-concentrations greater than 1.0; moderate and low aquifer-scale proportions are defined as the percentage of the primary aquifer with moderate and low relative-concentrations, respectively. The methods used to calculate aquifer-scale proportions are based on an equal-area grid; thus, the proportions are areal rather than volumetric.
Two statistical approaches - grid-based, which used one value per grid cell, and spatially weighted, which used the full dataset - were used to calculate aquifer-scale proportions for individual constituents and classes of constituents. The spatially weighted estimates of high aquifer-scale proportions were within the 90-percent confidence intervals of the grid-based estimates in all cases. The understanding assessment used statistical correlations between constituent relative-concentrations and
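    The relative-concentration classification described above can be expressed as a small function; the thresholds follow the text, while the constituent values are placeholders.

```python
# Relative-concentration classification sketch; example values are placeholders.
def classify(concentration, benchmark, organic_or_special=False):
    rc = concentration / benchmark              # relative-concentration
    if rc >= 1.0:
        return "high"
    moderate_floor = 0.1 if organic_or_special else 0.5
    return "moderate" if rc >= moderate_floor else "low"

print(classify(8.0, 10.0))                              # inorganic: rc = 0.8 -> moderate
print(classify(0.002, 0.005, organic_or_special=True))  # organic: rc = 0.4 -> moderate
```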

  19. Nonhydrostatic nested climate modeling: A case study of the 2010 summer season over the western United States

    NASA Astrophysics Data System (ADS)

    Lebassi-Habtezion, Bereket; Diffenbaugh, Noah S.

    2013-10-01

    The potential importance of local-scale climate phenomena motivates development of approaches to enable computationally feasible nonhydrostatic climate simulations. To that end, we evaluate the potential viability of nested nonhydrostatic model approaches, using the summer climate of the western United States (WUSA) as a case study. We use the Weather Research and Forecast (WRF) model to carry out five simulations of summer 2010. This suite allows us to test differences between nonhydrostatic and hydrostatic resolutions, single and multiple nesting approaches, and high- and low-resolution reanalysis boundary conditions. WRF simulations were evaluated against station observations, gridded observations, and reanalysis data over domains that cover the 11 WUSA states at nonhydrostatic grid spacing of 4 km and hydrostatic grid spacing of 25 km and 50 km. Results show that the nonhydrostatic simulations more accurately resolve the heterogeneity of surface temperature, precipitation, and wind speed features associated with the topography and orography of the WUSA region. In addition, we find that the simulation in which the nonhydrostatic grid is nested directly within the regional reanalysis exhibits the greatest overall agreement with observational data. Results therefore indicate that further development of nonhydrostatic nesting approaches is likely to yield important insights into the response of local-scale climate phenomena to increases in global greenhouse gas concentrations. However, the biases in regional precipitation, atmospheric circulation, and moisture flux identified in a subset of the nonhydrostatic simulations suggest that alternative nonhydrostatic modeling approaches such as superparameterization and variable-resolution global nonhydrostatic modeling will provide important complements to the nested approaches tested here.

  20. [Analysis on difference of richness of traditional Chinese medicine resources in Chongqing based on grid technology].

    PubMed

    Zhang, Xiao-Bo; Qu, Xian-You; Li, Meng; Wang, Hui; Jing, Zhi-Xian; Liu, Xiang; Zhang, Zhi-Wei; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    Now that the national and local census of traditional Chinese medicine resources has been completed, a large amount of data on Chinese medicine resources and their distribution will be summarized. Species richness between regions is a valid indicator for objectively reflecting inter-regional traditional Chinese medicine resources. Because county areas differ greatly in size, assessing the richness of traditional Chinese medicine resources with the county as the statistical unit leads to biased regional richness statistics. Statistical methods based on a regular grid can reduce the differences in richness estimates that are caused by the varying size of the statistical unit. Taking Chongqing as an example and based on the existing survey data, the differences in the richness of traditional Chinese medicine resources at different grid scales were compared and analyzed. The results showed that a 30 km grid can be selected, and at this scale the richness of Chinese medicine resources in Chongqing better reflects the objective situation of regional resource richness in traditional Chinese medicine. Copyright© by the Chinese Pharmaceutical Association.
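    An illustrative way to compute richness per grid cell at a chosen grid size (e.g., 30 km), assuming a table of occurrence records with projected x/y coordinates in metres; the column names and records are assumptions, not the census data.

```python
# Species richness per grid cell; occurrence records below are placeholders.
import pandas as pd

def richness_by_grid(occurrences: pd.DataFrame, cell_size_m: float) -> pd.Series:
    cells = (occurrences["x"] // cell_size_m).astype(int).astype(str) + "_" + \
            (occurrences["y"] // cell_size_m).astype(int).astype(str)
    return occurrences.groupby(cells)["species"].nunique()

records = pd.DataFrame({
    "species": ["Coptis chinensis", "Ligusticum chuanxiong", "Coptis chinensis"],
    "x": [105_000.0, 118_000.0, 136_000.0],
    "y": [3_250_000.0, 3_251_000.0, 3_282_000.0],
})
print(richness_by_grid(records, 30_000.0))   # distinct species per 30-km cell
```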

  1. Power system observability and dynamic state estimation for stability monitoring using synchrophasor measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Kai; Qi, Junjian; Kang, Wei

    2016-08-01

    Growing penetration of intermittent resources such as renewable generations increases the risk of instability in a power grid. This paper introduces the concept of observability and its computational algorithms for a power grid monitored by the wide-area measurement system (WAMS) based on synchrophasors, e.g. phasor measurement units (PMUs). The goal is to estimate real-time states of generators, especially for potentially unstable trajectories, the information that is critical for the detection of rotor angle instability of the grid. The paper studies the number and siting of synchrophasors in a power grid so that the state of the system can be accurately estimated in the presence of instability. An unscented Kalman filter (UKF) is adopted as a tool to estimate the dynamic states that are not directly measured by synchrophasors. The theory and its computational algorithms are illustrated in detail by using a 9-bus 3-generator power system model and then tested on a 140-bus 48-generator Northeast Power Coordinating Council power grid model. Case studies on those two systems demonstrate the performance of the proposed approach using a limited number of synchrophasors for dynamic state estimation for stability assessment and its robustness against moderate inaccuracies in model parameters.
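    A hedged sketch of UKF-based dynamic state estimation in the spirit described above, using a toy one-machine swing model (rotor angle and speed deviation) and a synthetic PMU measurement; the model, the parameters, and the use of the filterpy package are illustrative assumptions, not the paper's implementation.

```python
# Toy UKF example: estimate rotor angle and speed from a noisy angle measurement.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt, M, D, Pm, Pmax = 0.02, 10.0, 1.0, 0.9, 1.8   # made-up swing-equation parameters

def fx(x, dt):
    delta, omega = x
    domega = (Pm - Pmax * np.sin(delta) - D * omega) / M
    return np.array([delta + dt * omega, omega + dt * domega])

def hx(x):
    return np.array([x[0]])                      # PMU-derived rotor angle only

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([0.5, 0.0])
ukf.P *= 0.1
ukf.R = np.array([[0.01]])
ukf.Q = np.eye(2) * 1e-4

rng = np.random.default_rng(0)
truth = np.array([0.6, 0.0])
for _ in range(200):
    truth = fx(truth, dt)
    z = hx(truth) + rng.normal(0.0, 0.1, size=1)  # noisy synchrophasor sample
    ukf.predict()
    ukf.update(z)
print("estimated state:", ukf.x, "true state:", truth)
```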

  2. Station blackout transient at the Browns Ferry Unit 1 Plant: a severe accident sequence analysis (SASA) program study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz, R.R.

    1982-01-01

    Operating plant transients are of great interest for many reasons, not the least of which is the potential for a mild transient to degenerate into a severe transient yielding core damage. Using the Browns Ferry (BF) Unit-1 plant as a basis of study, the station blackout sequence was investigated by the Severe Accident Sequence Analysis (SASA) Program in support of the Nuclear Regulatory Commission's Unresolved Safety Issue A-44: Station Blackout. A station blackout transient occurs when the plant's AC power from a commercial power grid is lost and cannot be restored by the diesel generators. Under normal operating conditions, if a loss of offsite power (LOSP) occurs (i.e., a complete severance of the BF plants from the Tennessee Valley Authority (TVA) power grid), the eight diesel generators at the three BF units would quickly start and power the emergency AC buses. Of the eight diesel generators, only six are needed to safely shut down all three units. Examination of BF-specific data shows that LOSP frequency is low at Unit 1. The station blackout frequency is even lower (5.7 x 10/sup -4/ events per year) and hinges on whether the diesel generators start. The frequency of diesel generator failure is dictated in large measure by the emergency equipment cooling water (EECW) system that cools the diesel generators.

  3. Mapping Atmospheric Moisture Climatologies across the Conterminous United States

    PubMed Central

    Daly, Christopher; Smith, Joseph I.; Olson, Keith V.

    2015-01-01

    Spatial climate datasets of 1981–2010 long-term mean monthly average dew point and minimum and maximum vapor pressure deficit were developed for the conterminous United States at 30-arcsec (~800m) resolution. Interpolation of long-term averages (twelve monthly values per variable) was performed using PRISM (Parameter-elevation Relationships on Independent Slopes Model). Surface stations available for analysis numbered only 4,000 for dew point and 3,500 for vapor pressure deficit, compared to 16,000 for previously-developed grids of 1981–2010 long-term mean monthly minimum and maximum temperature. Therefore, a form of Climatologically-Aided Interpolation (CAI) was used, in which the 1981–2010 temperature grids were used as predictor grids. For each grid cell, PRISM calculated a local regression function between the interpolated climate variable and the predictor grid. Nearby stations entering the regression were assigned weights based on the physiographic similarity of the station to the grid cell that included the effects of distance, elevation, coastal proximity, vertical atmospheric layer, and topographic position. Interpolation uncertainties were estimated using cross-validation exercises. Given that CAI interpolation was used, a new method was developed to allow uncertainties in predictor grids to be accounted for in estimating the total interpolation error. Local land use/land cover properties had noticeable effects on the spatial patterns of atmospheric moisture content and deficit. An example of this was relatively high dew points and low vapor pressure deficits at stations located in or near irrigated fields. The new grids, in combination with existing temperature grids, enable the user to derive a full suite of atmospheric moisture variables, such as minimum and maximum relative humidity, vapor pressure, and dew point depression, with accompanying assumptions. All of these grids are available online at http://prism.oregonstate.edu, and include 800-m and 4-km resolution data, images, metadata, pedigree information, and station inventory files. PMID:26485026

  4. An interactive grid generation procedure for axial and radial flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Beach, Timothy A.

    1989-01-01

    A combination algebraic/elliptic technique is presented for the generation of three dimensional grids about turbo-machinery blade rows for both axial and radial flow machinery. The technique is built around use of an advanced engineering workstation to construct several two dimensional grids interactively on predetermined blade-to-blade surfaces. A three dimensional grid is generated by interpolating these surface grids onto an axisymmetric grid. On each blade-to-blade surface, a grid is created using algebraic techniques near the blade to control orthogonality within the boundary layer region and elliptic techniques in the mid-passage to achieve smoothness. The interactive definition of bezier curves as internal boundaries is the key to simple construction. This procedure lends itself well to zonal grid construction, an important example being the tip clearance region. Calculations done to date include a space shuttle main engine turbopump blade, a radial inflow turbine blade, and the first stator of the United Technologies Research Center large scale rotating rig. A finite Navier-Stokes solver was used in each case.

  5. Solar Power Data for Integration Studies | Grid Modernization | NREL

    Science.gov Websites

    NREL's Solar Power Data for Integration Studies are synthetic solar photovoltaic (PV) power plant data points for the United States representing the year 2006. The data are intended for use by energy professionals - such as

  6. Performance Trials of an Integrated Loran/GPS/IMU Navigation System, Part 1

    DTIC Science & Technology

    2005-01-27

    [Extraction fragments: differences are used to correct the grid values in the absence of a local ASF monitor station; performance of the receiver is assessed using different ASF grids and interpolation techniques; the United States is served by the North American Loran-C system, made up of 29 stations organized into 10 chains (see Figure 1).]

  7. The development of an hourly gridded rainfall product for hydrological applications in England and Wales

    NASA Astrophysics Data System (ADS)

    Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross

    2014-05-01

    This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows a quantification of rainfall errors from the grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one-kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much smaller than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product. Specifically, we explore how these uncertainties vary spatially, temporally and with climatic regimes. Preliminary results indicate scope for improvement of hydrological model performance through the use of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References: [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environment Agency.
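    A simplified sketch of the disaggregation idea: split a daily gridded total into hourly values in proportion to the hourly pattern seen by radar (or nearby gauges) for the same cell. The arrays below are illustrative assumptions, not the product's actual processing chain.

```python
# Disaggregate a daily rainfall total using an hourly reference pattern.
import numpy as np

def disaggregate_daily(daily_total_mm: float, hourly_reference_mm: np.ndarray) -> np.ndarray:
    """Scale 24 reference hourly values so they sum to the daily gridded total."""
    ref_sum = hourly_reference_mm.sum()
    if ref_sum <= 0.0:
        return np.full(24, daily_total_mm / 24.0)   # fall back to a uniform split
    return daily_total_mm * hourly_reference_mm / ref_sum

radar_hourly = np.array([0.0] * 6 + [0.2, 1.5, 3.0, 2.0, 0.5, 0.1] + [0.0] * 12)
print(disaggregate_daily(12.4, radar_hourly))        # 24 hourly values summing to 12.4 mm
```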

  8. Evaluating gridded crop model simulations of evapotranspiration and irrigation using survey and remotely sensed data

    NASA Astrophysics Data System (ADS)

    Lopez Bobeda, J. R.

    2017-12-01

    The increasing use of groundwater for irrigation of crops has exacerbated groundwater sustainability issues faced by water-limited regions. Gridded, process-based crop models have the potential to help farmers and policymakers assess the effects of water shortages on yield and devise new strategies for sustainable water use. Gridded crop models are typically calibrated and evaluated using county-level survey data of yield, planting dates, and maturity dates. However, little is known about the ability of these models to reproduce observed crop evapotranspiration and water use at regional scales. The aim of this work is to evaluate a gridded version of the Decision Support System for Agrotechnology Transfer (DSSAT) crop model over the continental United States. We evaluated crop seasonal evapotranspiration over 5 arc-minute grids, and irrigation water use at the county level. Evapotranspiration was assessed only for rainfed agriculture to test the model evapotranspiration equations separately from the irrigation algorithm. Model evapotranspiration was evaluated against the Atmospheric Land Exchange Inverse (ALEXI) modeling product. Using a combination of the USDA crop land data layer (CDL) and the USGS Moderate Resolution Imaging Spectroradiometer Irrigated Agriculture Dataset for the United States (MIrAD-US), we selected only grids with more than 60% of their area planted with the simulated crops (corn, cotton, and soybean), and less than 20% of their area irrigated. Irrigation water use was compared against the USGS county-level irrigated agriculture water use survey data. Simulated gridded data were aggregated to the county level using USDA CDL and USGS MIrAD-US. Only counties where 70% or more of the irrigated land was corn, cotton, or soybean were selected for the evaluation. Our results suggest that gridded crop models can reasonably reproduce crop evapotranspiration at the country scale (RRMSE = 10%).
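    The relative root-mean-square error (RRMSE) quoted above can be computed as follows; the simulated and observed values are placeholders, not the study's data.

```python
# RRMSE sketch: RMSE normalized by the observed mean, in percent.
import numpy as np

def rrmse(simulated: np.ndarray, observed: np.ndarray) -> float:
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

sim = np.array([410.0, 395.0, 430.0, 512.0])   # placeholder seasonal ET, mm
obs = np.array([400.0, 420.0, 415.0, 500.0])
print(f"RRMSE = {rrmse(sim, obs):.1f}%")
```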

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    This report captures the discussions and takeaways from the U.K.-U.S. Grid Modernization Workshop on February 28-March 2, 2017 at the National Renewable Energy Laboratory. Speakers from across the United States and Europe convened to discuss the challenges associated with grid modernization for the 21st century, while identifying transatlantic solutions and opportunities for collaboration.

  10. Climate Change Impacts on Freshwater Recreational Fishing in the United States

    EPA Science Inventory

    Using a geographic information system, a spatially explicit modeling framework was developed consisting of grid cells organized into 2,099 eight-digit hydrologic unit code (HUC-8) polygons for the coterminous United States. Projected temperature and precipitation changes associated...

  11. Power grid operation risk management: V2G deployment for sustainable development

    NASA Astrophysics Data System (ADS)

    Haddadian, Ghazale J.

    The production, transmission, and delivery of cost-efficient energy to supply ever-increasing peak loads, along with the quest for a low-carbon economy, require significant evolutions in power grid operations. Lower prices of vast natural gas resources in the United States, the Fukushima nuclear disaster, higher and more intense energy consumption in China and India, issues related to energy security, and recent Middle East conflicts have urged decision makers throughout the world to look into other means of generating electricity locally. As the world looks to combat climate change, a shift from carbon-based fuels to non-carbon-based fuels is inevitable. However, the variability of distributed generation assets in the electricity grid has introduced major reliability challenges for power grid operators. To help secure sustainable and reliable power grid operations, this dissertation develops a multi-stakeholder approach to power grid operation design, aiming to address the economic, security, and environmental challenges of constrained electricity generation. It investigates the role of Electric Vehicle (EV) fleet integration, as distributed and mobile storage assets supporting high penetrations of renewable energy sources, in the power grid. The vehicle-to-grid (V2G) concept is considered to demonstrate the bidirectional role of EV fleets, both as providers and consumers of energy, in securing sustainable power grid operation. The proposed optimization modeling applies Mixed-Integer Linear Programming (MILP) to large-scale systems to solve the hourly security-constrained unit commitment (SCUC), an optimal scheduling concept in the economic operation of electric power systems. A Monte Carlo scenario-based approach is utilized to evaluate different scenarios concerning the uncertainties in the operation of the power grid system. Further, in order to expedite the real-time solution of the proposed approach for large-scale power systems, a two-stage model using Benders Decomposition (BD) is considered. The numerical simulations demonstrate that the utilization of smart EV fleets in power grid systems would ensure sustainable grid operation with lower carbon footprints, smoother integration of renewable sources, higher security, and lower power grid operation costs. The results additionally illustrate the effectiveness of the proposed MILP approach and its potential as an optimization tool for the sustainable operation of large-scale electric power systems.
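    To make the SCUC idea concrete, a deliberately tiny unit-commitment-style MILP with an EV fleet treated as a dispatchable storage resource is sketched below in PuLP; the two-generator, three-hour system and all parameters are toy assumptions, not the dissertation's formulation.

```python
# Toy unit-commitment MILP with a V2G-style EV fleet (illustrative only).
import pulp

hours = range(3)
demand = [80.0, 120.0, 100.0]                        # MW per hour
gens = {"coal": {"cap": 100.0, "cost": 30.0},
        "gas": {"cap": 60.0, "cost": 50.0}}          # capacities and $/MWh costs
ev_cap, ev_energy, ev_cost = 20.0, 30.0, 5.0         # MW, MWh budget, $/MWh wear

prob = pulp.LpProblem("toy_scuc", pulp.LpMinimize)
u = pulp.LpVariable.dicts("on", (gens, hours), cat="Binary")         # commitment
p = pulp.LpVariable.dicts("p", (gens, hours), lowBound=0)            # dispatch, MW
ev = pulp.LpVariable.dicts("ev", hours, lowBound=0, upBound=ev_cap)  # V2G discharge, MW

# Minimize generation cost plus a small battery-wear cost for V2G discharge.
prob += pulp.lpSum(gens[g]["cost"] * p[g][t] for g in gens for t in hours) \
        + pulp.lpSum(ev_cost * ev[t] for t in hours)

for t in hours:
    prob += pulp.lpSum(p[g][t] for g in gens) + ev[t] == demand[t]   # power balance
    for g in gens:
        prob += p[g][t] <= gens[g]["cap"] * u[g][t]                  # capacity if committed
prob += pulp.lpSum(ev[t] for t in hours) <= ev_energy                # EV energy budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in hours:
    print(t, {g: p[g][t].value() for g in gens}, "EV:", ev[t].value())
```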

  12. Evolving Distributed Generation Support Mechanisms: Case Studies from United States, Germany, United Kingdom, and Australia (Chinese translation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shengru; Lowder, Travis R; Tian, Tian

    This is the Chinese translation of NREL/TP-6A20-67613. This report expands on a previous National Renewable Energy Laboratory (NREL) technical report (Lowder et al. 2015) that focused on the United States' unique approach to distributed generation photovoltaics (DGPV) support policies and business models. While the focus of that report was largely historical (i.e., detailing the policies and market developments that led to the growth of DGPV in the United States), this report looks forward, narrating recent changes to laws and regulations as well as the ongoing dialogues over how to incorporate distributed generation (DG) resources onto the electric grid. This report also broadens the scope of Lowder et al. (2015) to include additional countries and technologies. DGPV and storage are the principal technologies under consideration (owing to market readiness and deployment volumes), but the report also contemplates any generation resource that is (1) on the customer side of the meter, (2) used to, at least partly, offset a host's energy consumption, and/or (3) potentially available to provide grid support (e.g., through peak shaving and load shifting, ancillary services, and other means).

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Timothy M.; Kadavil, Rahul; Palmintier, Bryan

    The 21st century electric power grid is transforming with an unprecedented increase in demand and increase in new technologies. In the United States Energy Independence and Security Act of 2007, Title XIII sets the tenets for modernizing the electricity grid through what is known as the 'Smart Grid Initiative.' This initiative calls for increased design, deployment, and integration of distributed energy resources, smart technologies and appliances, and advanced storage devices. The deployment of these new technologies requires rethinking and re-engineering the traditional boundaries between different electric power system domains.

  14. Cyber-Physical System Security of Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dagle, Jeffery E.

    2012-01-31

    This panel presentation will provide perspectives on cyber-physical system security of smart grids. As smart grid technologies are deployed, the interconnected nature of these systems is becoming more prevalent and more complex, and the cyber component of this cyber-physical system is increasing in importance. Studying system behavior in the face of failures (e.g., cyber attacks) allows a characterization of the systems' response to failure scenarios, loss of communications, and other changes in system environment (such as the need for emergent updates and rapid reconfiguration). The impact of such failures on the availability of the system can be assessed and mitigation strategies considered. Scenarios associated with confidentiality, integrity, and availability are considered. The cyber security implications associated with the American Recovery and Reinvestment Act of 2009 in the United States are discussed.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Happenny, Sean F.

    The United States' power infrastructure is aging, underfunded, and vulnerable to cyber attack. Emerging smart grid technologies may take some of the burden off of existing systems and make the grid as a whole more efficient, reliable, and secure. The Pacific Northwest National Laboratory (PNNL) is funding research into several aspects of smart grid technology and grid security, creating a software simulation tool that will allow researchers to test power distribution networks utilizing different smart grid technologies to determine how the grid and these technologies react under different circumstances. Demonstrating security in embedded systems is another research area PNNL is tackling. Many of the systems controlling the U.S. critical infrastructure, such as the power grid, lack integrated security, and the networks protecting them are becoming easier to breach. Providing a virtual power substation network to each student team at the National Collegiate Cyber Defense Competition, thereby supporting the education of future cyber security professionals, is another way PNNL is helping to strengthen the security of the nation's power infrastructure.

  16. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    EPA Science Inventory

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  17. Updated population metadata for United States historical climatology network stations

    USGS Publications Warehouse

    Owen, T.W.; Gallo, K.P.

    2000-01-01

    The United States Historical Climatology Network (HCN) serial temperature dataset is comprised of 1221 high-quality, long-term climate observing stations. The HCN dataset is available in several versions, one of which includes population-based temperature modifications to adjust urban temperatures for the "heat-island" effect. Unfortunately, the decennial population metadata file is not complete, as missing values are present for 17.6% of the 12,210 population values associated with the 1221 individual stations during the 1900-90 interval. Retrospective grid-based populations, within a fixed distance of an HCN station, were estimated through the use of a gridded population density dataset and historically available U.S. Census county data. The grid-based populations for the HCN stations provide values derived from a consistent methodology, in contrast to the current HCN populations, which can vary as definitions of the area associated with a city change over time. The use of grid-based populations may minimally be appropriate to augment populations for HCN climate stations that lack any population data, and is recommended when consistent and complete population data are required. The recommended urban temperature adjustments based on the HCN and grid-based methods of estimating station population can be significantly different for individual stations within the HCN dataset.

  18. The smart meter and a smarter consumer: quantifying the benefits of smart meter implementation in the United States.

    PubMed

    Cook, Brendan; Gazzano, Jerrome; Gunay, Zeynep; Hiller, Lucas; Mahajan, Sakshi; Taskan, Aynur; Vilogorac, Samra

    2012-04-23

    The electric grid in the United States has been suffering from underinvestment for years, and now faces pressing challenges from rising demand and deteriorating infrastructure. High congestion levels in transmission lines are greatly reducing the efficiency of electricity generation and distribution. In this paper, we assess the faults of the current electric grid and quantify the costs of maintaining the current system into the future. While the proposed "smart grid" contains many proposals to upgrade the ailing infrastructure of the electric grid, we argue that smart meter installation in each U.S. household will offer a significant reduction in peak demand on the current system. A smart meter is a device which monitors a household's electricity consumption in real-time, and has the ability to display real-time pricing in each household. We conclude that these devices will provide short-term and long-term benefits to utilities and consumers. The smart meter will enable utilities to closely monitor electricity consumption in real-time, while also allowing households to adjust electricity consumption in response to real-time price adjustments.

  19. Polar2Grid 2.0: Reprojecting Satellite Data Made Easy

    NASA Astrophysics Data System (ADS)

    Hoese, D.; Strabala, K.

    2015-12-01

    Polar-orbiting multi-band meteorological sensors such as those on the Suomi National Polar-orbiting Partnership (SNPP) satellite pose substantial challenges for taking imagery the last mile to forecast offices, scientific analysis environments, and the general public. To do this quickly and easily, the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin has created an open-source, modular application system, Polar2Grid. This bundled solution automates tools for converting various satellite products like those from VIIRS and MODIS into a variety of output formats, including GeoTIFFs, AWIPS compatible NetCDF files, and NinJo forecasting workstation compatible TIFF images. In addition to traditional visible and infrared imagery, Polar2Grid includes three perceptual enhancements for the VIIRS Day-Night Band (DNB), as well as providing the capability to create sharpened true color, sharpened false color, and user-defined RGB images. Polar2Grid performs conversions and projections in seconds on large swaths of data. Polar2Grid is currently providing VIIRS imagery over the Continental United States, as well as Alaska and Hawaii, from various Direct-Broadcast antennas to operational forecasters at the NOAA National Weather Service (NWS) offices in their AWIPS terminals, within minutes of an overpass of the Suomi NPP satellite. Three years after Polar2Grid development started, the Polar2Grid team is now releasing version 2.0 of the software; supporting more sensors, generating more products, and providing all of its features in an easy to use command line interface.

  20. Evaluation of the Monotonic Lagrangian Grid and Lat-Long Grid for Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Kaplan, Carolyn; Dahm, Johann; Oran, Elaine; Alexandrov, Natalia; Boris, Jay

    2011-01-01

    The Air Traffic Monotonic Lagrangian Grid (ATMLG) is used to simulate a 24 hour period of air traffic flow in the National Airspace System (NAS). During this time period, there are 41,594 flights over the United States, and the flight plan information (departure and arrival airports and times, and waypoints along the way) is obtained from a Federal Aviation Administration (FAA) Enhanced Traffic Management System (ETMS) dataset. Two simulation procedures are tested and compared: one based on the Monotonic Lagrangian Grid (MLG), and the other based on the stationary Latitude-Longitude (Lat-Long) grid. Simulating one full day of air traffic over the United States required the following amounts of CPU time on a single processor of an SGI Altix: 88 s for the MLG method, and 163 s for the Lat-Long grid method. We present a discussion of the amount of CPU time required for each of the simulation processes (updating aircraft trajectories, sorting, conflict detection and resolution, etc.), and show that the main advantage of the MLG method is that it is a general sorting algorithm that can sort on multiple properties. We discuss how many MLG neighbors must be considered in the separation assurance procedure in order to ensure a five-mile separation buffer between aircraft, and we investigate the effect of removing waypoints from aircraft trajectories. When aircraft choose their own trajectory, there are more flights with shorter duration times and fewer CD&R maneuvers, resulting in significant fuel savings.
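
    The MLG idea of sorting on multiple properties can be illustrated with a minimal sketch: positions are placed in a small 2-D grid whose row index increases with latitude and whose column index increases with longitude within each row, so separation checks only need to look at nearby grid cells. This is an illustrative construction, not the ATMLG code.

    ```python
    import numpy as np

    def monotonic_lagrangian_grid(lon, lat, n_rows):
        """Sort points into an n_rows x n_cols grid that is monotonic in
        latitude across rows and in longitude within each row (illustrative
        sketch of the MLG sorting idea; not the ATMLG implementation)."""
        n = len(lon)
        n_cols = int(np.ceil(n / n_rows))
        order = np.argsort(lat)                  # split into latitude bands
        grid = np.full((n_rows, n_cols), -1, dtype=int)
        for r in range(n_rows):
            band = order[r * n_cols:(r + 1) * n_cols]
            band = band[np.argsort(lon[band])]   # sort each band by longitude
            grid[r, :len(band)] = band
        return grid

    # Toy usage: 12 aircraft positions sorted into a 3 x 4 monotonic grid.
    rng = np.random.default_rng(0)
    lon, lat = rng.uniform(-125, -70, 12), rng.uniform(25, 50, 12)
    print(monotonic_lagrangian_grid(lon, lat, n_rows=3))
    ```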

  1. National Offshore Wind Energy Grid Interconnection Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, John P.; Liu, Shu; Ibanez, Eduardo

    2014-07-30

    The National Offshore Wind Energy Grid Interconnection Study (NOWEGIS) considers the availability and potential impacts of interconnecting large amounts of offshore wind energy into the transmission system of the lower 48 contiguous United States. A total of 54 GW of offshore wind was assumed to be the target for the analyses conducted. A variety of issues are considered including: the anticipated staging of offshore wind; the offshore wind resource availability; offshore wind energy power production profiles; offshore wind variability; present and potential technologies for collection and delivery of offshore wind energy to the onshore grid; potential impacts to existing utility systems most likely to receive large amounts of offshore wind; and regulatory influences on offshore wind development. The technology review considered the reliability of various high-voltage ac (HVAC) and high-voltage dc (HVDC) options and configurations. The utility system impacts of GW-scale integration of offshore wind are considered from an operational steady-state perspective and from a regional and national production cost perspective.

  2. MIGS-GPU: Microarray Image Gridding and Segmentation on the GPU.

    PubMed

    Katsigiannis, Stamos; Zacharia, Eleni; Maroulis, Dimitris

    2017-05-01

    Complementary DNA (cDNA) microarray is a powerful tool for simultaneously studying the expression level of thousands of genes. Nevertheless, the analysis of microarray images remains an arduous and challenging task due to the poor quality of the images that often suffer from noise, artifacts, and uneven background. In this study, the MIGS-GPU [Microarray Image Gridding and Segmentation on Graphics Processing Unit (GPU)] software for gridding and segmenting microarray images is presented. MIGS-GPU's computations are performed on the GPU by means of the compute unified device architecture (CUDA) in order to achieve fast performance and increase the utilization of available system resources. Evaluation on both real and synthetic cDNA microarray images showed that MIGS-GPU provides better performance than state-of-the-art alternatives, while the proposed GPU implementation achieves significantly lower computational times compared to the respective CPU approaches. Consequently, MIGS-GPU can be an advantageous and useful tool for biomedical laboratories, offering a user-friendly interface that requires minimum input in order to run.

  3. DOCUMENTATION FOR THE GRIDDED HOURLY ATRAZINE EMISSIONS DATA SET FOR THE LAKE MICHIGAN MASS BALANCE STUDY: A FINAL CONTRACT REPORT

    EPA Science Inventory

    In order to develop effective strategies for toxics management, the Great Lakes National Program Office (GLNPO) of the United States Environmental Protection Agency (U.S. EPA), in 1994, launched an ambitious five year program to conduct a mass balance study of selected toxics p...

  4. High Quality Data for Grid Integration Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clifton, Andrew; Draxl, Caroline; Sengupta, Manajit

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. The existing electric grid infrastructure in the US in particular poses significant limitations on wind power expansion. In this presentation we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets are presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution are now publicly available for more than 126,000 land-based and offshore wind power production sites. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. The need for high-resolution weather data pushes modeling towards finer scales and closer synchronization. We also present how we anticipate such datasets developing in the future, their benefits, and the challenges with using and disseminating such large amounts of data.

  5. Network integration of distributed power generation

    NASA Astrophysics Data System (ADS)

    Dondi, Peter; Bayoumi, Deia; Haederli, Christoph; Julian, Danny; Suter, Marco

    The world-wide move to deregulation of the electricity and other energy markets, concerns about the environment, and advances in renewable and high-efficiency technologies have led to major emphasis being placed on the use of small power generation units in a variety of forms. The paper reviews the position of distributed generation (DG, as these small units are called in comparison with central power plants) with respect to the installation and interconnection of such units with the classical grid infrastructure. In particular, the status of technical standards both in Europe and the USA, possible ways to improve the interconnection situation, and also the need for decisions that provide a satisfactory position for the network operator (who remains responsible for the grid, its operation, maintenance and investment plans) are addressed.

  6. 77 FR 64935 - Reliability Standards for Geomagnetic Disturbances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-24

    ... Ridge Study'') on the effects of electromagnetic pulses on the Bulk-Power System. Available at http... . \\6\\ Oak Ridge National Laboratory, Electromagnetic Pulse: Effects on the U.S. Power Grid: Meta-R-319... issued reports assessing the threat to the United States from Electromagnetic Pulse (EMP) attack in 2004...

  7. Privacy protection in HealthGrid: distributing encryption management over the VO.

    PubMed

    Torres, Erik; de Alfonso, Carlos; Blanquer, Ignacio; Hernández, Vicente

    2006-01-01

    Grid technologies have proven to be very successful in tackling challenging problems in which data access and processing is a bottleneck. Notwithstanding the benefits that Grid technologies could have in Health applications, privacy leakages in current DataGrid technologies, due to the sharing of data in VOs and the use of remote resources, compromise their widespread adoption. Privacy control for Grid technology has become a key requirement for the adoption of Grids in the Healthcare sector. Encrypted storage of confidential data effectively reduces the risk of disclosure. A self-enforcing scheme for encrypted data storage can be achieved by combining Grid security systems with distributed key management and classical cryptography techniques. Virtual Organizations, as the main unit of user management in Grid, can provide a way to organize key sharing, access control lists and secure encryption management. This paper provides programming models and discusses the value, costs and behavior of such a system implemented on top of one of the latest Grid middlewares. This work is partially funded by the Spanish Ministry of Science and Technology in the frame of the project Investigación y Desarrollo de Servicios GRID: Aplicación a Modelos Cliente-Servidor, Colaborativos y de Alta Productividad, with reference TIC2003-01318.
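
    A minimal sketch of the distributed key-management idea, assuming a symmetric data-encryption key split into XOR shares held by different VO key servers so that no single party can decrypt alone; a production HealthGrid deployment would use threshold schemes (e.g., Shamir secret sharing) and proper key wrapping.

    ```python
    import os
    from functools import reduce

    def split_key(key: bytes, n_shares: int) -> list:
        """Split a symmetric key into n XOR shares so all shares are needed
        to rebuild it (minimal stand-in for distributed key management)."""
        shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
        last = bytes(b ^ reduce(lambda x, y: x ^ y, others)
                     for b, *others in zip(key, *shares))
        shares.append(last)
        return shares

    def combine_key(shares: list) -> bytes:
        """XOR the shares back together to recover the original key."""
        return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*shares))

    key = os.urandom(32)                   # e.g. an AES-256 data-encryption key
    shares = split_key(key, n_shares=3)    # one share per VO key server (assumed)
    assert combine_key(shares) == key
    ```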

  8. Hydrogeologic unit flow characterization using transition probability geostatistics.

    PubMed

    Jones, Norman L; Walker, Justin R; Carle, Steven F

    2005-01-01

    This paper describes a technique for applying the transition probability geostatistics method for stochastic simulation to a MODFLOW model. Transition probability geostatistics has some advantages over traditional indicator kriging methods including a simpler and more intuitive framework for interpreting geologic relationships and the ability to simulate juxtapositional tendencies such as fining upward sequences. The indicator arrays generated by the transition probability simulation are converted to layer elevation and thickness arrays for use with the new Hydrogeologic Unit Flow package in MODFLOW 2000. This makes it possible to preserve complex heterogeneity while using reasonably sized grids and/or grids with nonuniform cell thicknesses.
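
    The conversion step can be sketched as follows: a 3-D indicator array of unit codes is collapsed into per-unit thickness arrays, and top elevations follow from cumulative thickness under the simplifying assumption that units stack in index order. This is an illustrative sketch, not the authors' workflow, and the function names are made up.

    ```python
    import numpy as np

    def units_to_thickness(indicators, top, dz):
        """Convert a (nlay, nrow, ncol) integer indicator array into per-unit
        thickness and top-elevation arrays (illustrative only). `top` is the
        land-surface elevation and `dz` the uniform indicator-cell thickness."""
        nlay, nrow, ncol = indicators.shape
        units = np.unique(indicators)
        thickness = {u: np.zeros((nrow, ncol)) for u in units}
        for k in range(nlay):
            for u in units:
                thickness[u] += dz * (indicators[k] == u)
        # Simplification for this sketch: assume units stack in index order,
        # so each unit's top follows from the cumulative thickness above it.
        tops, elev = {}, np.array(top, dtype=float)
        for u in units:
            tops[u] = elev.copy()
            elev = elev - thickness[u]
        return tops, thickness

    # Toy example: 4 layers, 2 x 3 grid, two units (0 = sand, 1 = clay).
    ind = np.array([[[0, 0, 1], [0, 1, 1]]] * 2 + [[[1, 1, 1], [0, 0, 1]]] * 2)
    tops, thk = units_to_thickness(ind, top=100.0, dz=5.0)
    print(thk[0]); print(tops[1])
    ```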

  9. Active power control of solar PV generation for large interconnection frequency regulation and oscillation damping

    DOE PAGES

    Liu, Yong; Zhu, Lin; Zhan, Lingwei; ...

    2015-06-23

    Because of zero greenhouse gas emissions and decreasing manufacturing costs, solar photovoltaic (PV) generation is expected to account for a significant portion of the future power grid generation portfolio. Because it is indirectly connected to the power grid via power electronic devices, a solar PV generation system is fully decoupled from the power grid, which will influence the interconnected power grid's dynamic characteristics as a result. In this study, the impact of solar PV penetration on large interconnected power system frequency response and inter-area oscillation is evaluated, taking the United States Eastern Interconnection (EI) as an example. Furthermore, based on the constructed solar PV electrical control model with additional active power control loops, the potential contributions of solar PV generation to power system frequency regulation and oscillation damping are examined. The advantages of solar PV frequency support over that of wind generators are also discussed. Finally, simulation results demonstrate that solar PV generation can effectively work as an 'actuator' in alleviating the negative impacts it brings about.
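
    A generic illustration of the kind of supplementary active power control loop discussed is a frequency-droop adjustment applied to a curtailed PV plant; the droop, deadband, and reserve values below are assumptions, not the controller developed in the study.

    ```python
    def pv_active_power_command(p_available, p_reserve, freq, f_nom=60.0,
                                droop=0.05, deadband=0.036):
        """Frequency-droop adjustment of a curtailed PV plant's active-power
        set point (generic sketch; all parameter values are assumptions).
        The plant normally holds `p_reserve` of headroom below `p_available`."""
        p_set = p_available - p_reserve            # curtailed operating point
        df = freq - f_nom
        if abs(df) <= deadband:                    # ignore normal fluctuations
            return p_set
        # Droop: shift output by (df / f_nom) / droop, in per unit of p_available.
        p_cmd = p_set - (df / f_nom) / droop * p_available
        return min(max(p_cmd, 0.0), p_available)   # respect physical limits

    # Under-frequency event: the plant releases part of its reserve.
    print(pv_active_power_command(p_available=100.0, p_reserve=10.0, freq=59.90))
    ```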

  10. Uncertainty in benefit cost analysis of smart grid demonstration-projects in the U.S., China, and Italy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karali, Nihan; Flego, Gianluca; Yu, Jiancheng

    Given the substantial investments required, there has been keen interest in conducting benefits analysis, i.e., quantifying, and often monetizing, the performance of smart grid technologies. In this study, we compare two different approaches: (1) the Electric Power Research Institute (EPRI) benefits analysis method and its adaptation to the European context by the European Commission's Joint Research Centre (JRC), and (2) the Analytic Hierarchy Process (AHP) and fuzzy logic decision-making method. These are applied to three case demonstration projects executed in three different countries (the U.S., China, and Italy), considering uncertainty in each case. This work is conducted under the U.S.-China Climate Change Working Group on smart grid, with an additional major contribution by the European Commission. The following is a brief description of the three demonstration projects.
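
    One ingredient of the AHP-based approach can be sketched by deriving criterion weights from a pairwise comparison matrix via its principal eigenvector and checking consistency; the comparison judgments below are made up for illustration.

    ```python
    import numpy as np

    # Illustrative pairwise comparison of three smart grid benefit criteria
    # (reliability, economics, environment); the judgments are made up.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, principal].real)
    weights /= weights.sum()

    # Consistency index CI = (lambda_max - n) / (n - 1); RI = 0.58 for n = 3.
    n = A.shape[0]
    ci = (eigvals.real[principal] - n) / (n - 1)
    print("weights:", np.round(weights, 3), "CR:", round(ci / 0.58, 3))
    ```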

  11. Unit Planning Grids for Visual Arts--Grade 9-12 Advanced.

    ERIC Educational Resources Information Center

    Delaware State Dept. of Education, Dover.

    This planning grid for teaching visual arts (advanced) in grades 9-12 in Delaware outlines the following six standards for students to complete: (1) students will select and use form, media, techniques, and processes to create works of art and communicate meaning; (2) students will create ways to use visual, spatial, and temporal concepts in…

  12. High-quality weather data for grid integration studies

    NASA Astrophysics Data System (ADS)

    Draxl, C.

    2016-12-01

    As variable renewable power penetration levels increase in power systems worldwide, renewable integration studies are crucial to ensure continued economic and reliable operation of the power grid. In this talk we will shed light on requirements for grid integration studies as far as wind and solar energy are concerned. Because wind and solar plants are strongly impacted by weather, high-resolution and high-quality weather data are required to drive power system simulations. Future data sets will have to push the limits of numerical weather prediction to yield these high-resolution data sets, and wind data will have to be time-synchronized with solar data. Current wind and solar integration data sets will be presented. The Wind Integration National Dataset (WIND) Toolkit is the largest and most complete grid integration data set publicly available to date. A meteorological data set, wind power production time series, and simulated forecasts created using the Weather Research and Forecasting Model run on a 2-km grid over the continental United States at a 5-min resolution are now publicly available for more than 126,000 land-based and offshore wind power production sites. The Solar Integration National Dataset (SIND) is available time-synchronized with the WIND Toolkit and will allow for combined wind-solar grid integration studies. The National Solar Radiation Database (NSRDB) is a similar high temporal- and spatial-resolution database of 18 years of solar resource data for North America and India. Grid integration studies are also carried out in various other countries that aim to increase their wind and solar penetration through combined wind and solar integration data sets. We will present a multi-year effort to directly support India's 24x7 energy access goal through a suite of activities aimed at enabling large-scale deployment of clean energy and energy efficiency. Another current effort is the North American Renewable Integration Study, which aims to provide a seamless data set across borders for a whole continent, to simulate and analyze the impacts of potential future large wind and solar power penetrations on bulk power system operations.

  13. Continuous Attractor Network Model for Conjunctive Position-by-Velocity Tuning of Grid Cells

    PubMed Central

    Si, Bailu; Romani, Sandro; Tsodyks, Misha

    2014-01-01

    The spatial responses of many of the cells recorded in layer II of rodent medial entorhinal cortex (MEC) show a triangular grid pattern, which appears to provide an accurate population code for animal spatial position. In layer III, V and VI of the rat MEC, grid cells are also selective to head-direction and are modulated by the speed of the animal. Several putative mechanisms of grid-like maps were proposed, including attractor network dynamics, interactions with theta oscillations or single-unit mechanisms such as firing rate adaptation. In this paper, we present a new attractor network model that accounts for the conjunctive position-by-velocity selectivity of grid cells. Our network model is able to perform robust path integration even when the recurrent connections are subject to random perturbations. PMID:24743341

  14. Efficient Double Auction Mechanisms in the Energy Grid with Connected and Islanded Microgrids

    NASA Astrophysics Data System (ADS)

    Faqiry, Mohammad Nazif

    The future energy grid is expected to operate in a decentralized fashion as a network of autonomous microgrids that are coordinated by a Distribution System Operator (DSO), which should allocate energy to them in an efficient manner. Each microgrid operating in either islanded or grid-connected mode may be considered to manage its own resources. This can take place through auctions with individual units of the microgrid as the agents. This research proposes efficient auction mechanisms for the energy grid, with islanded and connected microgrids. The microgrid level auction is carried out by means of an intermediate agent called an aggregator. The individual consumer and producer units are modeled as selfish agents. With the microgrid in islanded mode, two aggregator-level auction classes are analyzed: (i) price-heterogeneous, and (ii) price homogeneous. Under the price heterogeneity paradigm, this research extends earlier work on the well-known, single-sided Kelly mechanism to double auctions. As in Kelly auctions, the proposed algorithm implements the bidding without using any agent level private information (i.e. generation capacity and utility functions). The proposed auction is shown to be an efficient mechanism that maximizes the social welfare, i.e. the sum of the utilities of all the agents. Furthermore, the research considers the situation where a subset of agents act as a coalition to redistribute the allocated energy and price using any other specific fairness criterion. The price homogeneous double auction algorithm proposed in this research addresses the problem of price-anticipation, where each agent tries to influence the equilibrium price of energy by placing strategic bids. As a result of this behavior, the auction's efficiency is lowered. This research proposes a novel approach that is implemented by the aggregator, called virtual bidding, where the efficiency can be asymptotically maximized, even in the presence of price anticipatory bidders. Next, an auction mechanism for the energy grid, with multiple connected microgrids is considered. A globally efficient bi-level auction algorithm is proposed. At the upper level, the algorithm takes into account physical grid constraints in allocating energy to the microgrids. It is implemented by the DSO as a linear objective quadratic constraint problem that allows price heterogeneity across the aggregators. In parallel, each aggregator implements its own lower-level price homogeneous auction with virtual bidding. The research concludes with a preliminary study on extending the DSO level auction to multi-period day-ahead scheduling. It takes into account storage units and conventional generators that are present in the grid by formulating the auction as a mixed integer linear programming problem.
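
    The aggregator-level market can be illustrated with a textbook uniform-price double auction clearing; this simplified sketch is not the proportional-bid (Kelly-type) mechanism developed in the dissertation.

    ```python
    def clear_double_auction(bids, asks):
        """Uniform-price double auction clearing (textbook illustration of the
        kind of aggregator-level market described). `bids` and `asks` are
        lists of (agent_id, price, quantity)."""
        buy = sorted(bids, key=lambda b: -b[1])    # highest willingness to pay first
        sell = sorted(asks, key=lambda a: a[1])    # cheapest generation first
        trades, i, j = [], 0, 0
        last_bid = last_ask = None
        while i < len(buy) and j < len(sell) and buy[i][1] >= sell[j][1]:
            q = min(buy[i][2], sell[j][2])
            trades.append((buy[i][0], sell[j][0], q))
            last_bid, last_ask = buy[i][1], sell[j][1]
            buy[i] = (buy[i][0], buy[i][1], buy[i][2] - q)
            sell[j] = (sell[j][0], sell[j][1], sell[j][2] - q)
            if buy[i][2] == 0: i += 1
            if sell[j][2] == 0: j += 1
        price = None if last_bid is None else 0.5 * (last_bid + last_ask)
        return trades, price

    # Toy microgrid: two loads bidding against a PV unit and a battery.
    trades, price = clear_double_auction(
        bids=[("load1", 0.30, 5), ("load2", 0.22, 3)],
        asks=[("pv1", 0.10, 4), ("battery1", 0.25, 4)])
    print(trades, price)
    ```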

  15. Integration of HTS Cables in the Future Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future transmission grid will obtain electrical power generated by decentralized renewable sources, together with large-scale generation units located at the coastal region. In this way electrical power has to be distributed and transmitted over longer distances from generation to end user. Potential grid issues, such as the amount of distributed power, grid stability, and electrical loss dissipation, merit particular attention. High temperature superconductors (HTS) can play an important role in solving these grid problems. Advantages of integrating HTS components at transmission voltages are numerous: more transmittable power together with lower emissions, intrinsic fault current limiting capability, lower ac loss, better control of power flow, reduced footprint, less magnetic field emissions, etc. The main obstacle at present is the relatively high price of HTS conductor. However, as the price goes down, initial market penetration of several HTS components (e.g., cables, fault current limiters) is expected by the year 2015. In the full paper we present selected ways to integrate EHV AC HTS cables depending on a particular future grid scenario in the Netherlands.

  16. AC HTS Transmission Cable for Integration into the Future EHV Grid of the Netherlands

    NASA Astrophysics Data System (ADS)

    Zuijderduin, R.; Chevtchenko, O.; Smit, J. J.; Aanhaanen, G.; Melnik, I.; Geschiere, A.

    Due to increasing power demand, the electricity grid of the Netherlands is changing. The future grid must be capable of transmitting all the connected power. Power generation will be more decentralized, with, for instance, wind parks connected to the grid. Furthermore, future large-scale production units are expected to be installed near coastal regions. This creates some potential grid issues, such as the large amounts of power to be transmitted to consumers from west to east, and grid stability. High temperature superconductors (HTS) can help solve these grid problems. Advantages of integrating HTS components at Extra High Voltage (EHV) and High Voltage (HV) levels are numerous: more power with lower losses and fewer emissions, intrinsic fault current limiting capability, better control of power flow, reduced footprint, etc. Today's main obstacle is the relatively high price of HTS. Nevertheless, as the price goes down, initial market penetration for several HTS components (e.g., cables, fault current limiters) is expected by the year 2015. In this paper we present a design of an intrinsically compensated EHV HTS cable for future grid integration. Discussed are the parameters of such a cable that provide optimal power transmission in the future network.

  17. Eastern and Western Data Sets | Grid Modernization | NREL

    Science.gov Websites

    The Eastern Wind Integration Data Set and Western Wind Integration Data Set were designed to perform wind integration studies and estimate power production from hypothetical wind power plants in the United States. These data sets can help energy

  18. California | Midmarket Solar Policies in the United States | Solar Research

    Science.gov Websites

    Interconnection provisions include an interconnection fee ($75-$150) and payment of all "non-bypassable" charges for electricity consumed from the distribution grid, and cover non-export facilities connecting to an IOU's transmission grid and all net-metered systems. Interconnection: all non-exporting systems or net-metering facilities. Fast track: exporting facility ≤3 MW on a 12 kV

  19. Solution for Data Security Challenges Faced by Smart Grid Evolution - Video

    Science.gov Websites

    Video discussion of data security challenges arising from smart grid evolution, including data shared across a utility's different business units (generation, transmission, and smart grid), consumers' new access to detailed usage information, and the variety of data elements that must be sensed and managed.

  20. Forest resources of southeast Alaska, 2000: results of a single-phase systematic sample.

    Treesearch

    Willem W.S. van Hees

    2003-01-01

    A baseline assessment of forest resources in southeast Alaska was made by using a single-phase, unstratified, systematic-grid sample, with ground plots established at each grid intersection. Ratio-of-means estimators were used to develop population estimates. Forests cover an estimated 48 percent of the 22.9-million-acre southeast Alaska inventory unit. Dominant forest...

  1. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    NASA Astrophysics Data System (ADS)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

    Many aspects need to be taken into consideration when making scheduling plans for a regional grid. In this paper, a systematic multi-time scale solution for regional power grid operation considering large scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. In the time-scale aspect, we discuss the problem across month, week, day-ahead, within-day, and day-behind horizons, and the system also contains multiple generator types, including thermal units, hydro plants, wind turbines, and pumped storage stations. The nine subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results further verified the effectiveness of the system.

  2. Petroleum system modeling of the western Canada sedimentary basin - isopach grid files

    USGS Publications Warehouse

    Higley, Debra K.; Henry, Mitchell E.; Roberts, Laura N.R.

    2005-01-01

    This publication contains zmap-format grid files of isopach intervals that represent strata associated with Devonian to Holocene petroleum systems of the Western Canada Sedimentary Basin (WCSB) of Alberta, British Columbia, and Saskatchewan, Canada. Also included is one grid file that represents elevations relative to sea level of the top of the Lower Cretaceous Mannville Group. Vertical and lateral scales are in meters. The age range represented by the stratigraphic intervals comprising the grid files is 373 million years ago (Ma) to present day. File names, age ranges, formation intervals, and primary petroleum system elements are listed in table 1. Metadata associated with this publication includes information on the study area and the zmap-format files. The digital files listed in table 1 were compiled as part of the Petroleum Processes Research Project being conducted by the Central Energy Resources Team of the U.S. Geological Survey, which focuses on modeling petroleum generation, migration, and accumulation through time for petroleum systems of the WCSB. Primary purposes of the WCSB study are to (1) construct the 1-D/2-D/3-D petroleum system models of the WCSB (actual boundaries of the study area are documented within the metadata; northern Alberta and eastern Saskatchewan are excluded, but fringing areas of the United States are included); (2) publish results of the research and the grid files generated for use in the 3-D model of the WCSB; and (3) evaluate the use of petroleum system modeling in assessing undiscovered oil and gas resources for geologic provinces across the world.

  3. Towards Effective Clustering Techniques for the Analysis of Electric Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Emilie A.; Cotilla Sanchez, Jose E.; Halappanavar, Mahantesh

    2013-11-30

    Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids we show that the solutions are related and therefore one could leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques we make a case for exploiting structure inherent in the data with implications for several domains including power systems.
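
    A minimal sketch of the spectral idea: build a weighted graph Laplacian from line susceptances and split the buses by the sign of the Fiedler vector. This is a generic spectral bisection for illustration, not the electrically informed methods compared in the report.

    ```python
    import numpy as np

    def spectral_bisection(n_bus, lines):
        """Split a small power network into two clusters using the sign of the
        Fiedler vector of a weighted graph Laplacian (illustrative sketch).
        `lines` is a list of (from_bus, to_bus, susceptance_magnitude)."""
        W = np.zeros((n_bus, n_bus))
        for f, t, b in lines:
            W[f, t] = W[t, f] = b          # weight edges by |susceptance|
        L = np.diag(W.sum(axis=1)) - W     # graph Laplacian
        eigvals, eigvecs = np.linalg.eigh(L)
        fiedler = eigvecs[:, 1]            # eigenvector of 2nd-smallest eigenvalue
        return (fiedler >= 0).astype(int)

    # Two tightly coupled 3-bus areas joined by one weak tie line.
    lines = [(0, 1, 10), (1, 2, 10), (0, 2, 10),
             (3, 4, 10), (4, 5, 10), (3, 5, 10),
             (2, 3, 1)]
    print(spectral_bisection(6, lines))    # e.g. [0 0 0 1 1 1]
    ```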

  4. Improvement of Storm Forecasts Using Gridded Bayesian Linear Regression for Northeast United States

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Schwartz, C. S.

    2017-12-01

    Bayesian linear regression (BLR) is a post-processing technique in which regression coefficients are derived and used to correct raw forecasts based on pairs of observation-model values. This study presents the development and application of a gridded Bayesian linear regression (GBLR) as a new post-processing technique to improve numerical weather prediction (NWP) of rain and wind storm forecasts over the northeast United States. Ten controlled variables produced from ten ensemble members of the National Center for Atmospheric Research (NCAR) real-time prediction system are used for a GBLR model. In the GBLR framework, leave-one-storm-out cross-validation is utilized to study the performance of the post-processing technique in a database composed of 92 storms. To estimate the regression coefficients of the GBLR, optimization procedures that minimize the systematic and random error of predicted atmospheric variables (wind speed, precipitation, etc.) are implemented for the modeled-observed pairs of training storms. The regression coefficients calculated for meteorological stations of the National Weather Service are interpolated back to the model domain. An analysis of forecast improvements based on error reductions during the storms will demonstrate the value of the GBLR approach. This presentation will also illustrate how the variances are optimized for the training partition in GBLR and discuss the verification strategy for grid points where no observations are available. The new post-processing technique is successful in improving wind speed and precipitation storm forecasts using past event-based data and has the potential to be implemented in real-time.
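
    The workflow can be sketched in two steps: fit per-station linear correction coefficients from modeled-observed pairs (ordinary least squares stands in for the Bayesian estimation here) and interpolate those coefficients back to grid points (inverse-distance weighting is assumed for illustration).

    ```python
    import numpy as np

    def fit_station_coefficients(model, obs):
        """Least-squares stand-in for the per-station regression step:
        obs ≈ a + b * model. `model` and `obs` are (n_storms,) arrays."""
        X = np.column_stack([np.ones_like(model), model])
        coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
        return coef  # (a, b)

    def interpolate_coefficients(station_xy, station_coef, grid_xy, power=2.0):
        """Inverse-distance interpolation of (a, b) from stations to grid
        points (the interpolation scheme is an assumption for illustration)."""
        d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-6) ** power
        w /= w.sum(axis=1, keepdims=True)
        return w @ station_coef

    # Toy example: two stations, correction applied at one grid point.
    coefs = np.array([fit_station_coefficients(np.array([10., 12., 8.]),
                                               np.array([9., 11.5, 7.])),
                      fit_station_coefficients(np.array([20., 25., 22.]),
                                               np.array([18., 24., 21.]))])
    grid_coef = interpolate_coefficients(np.array([[0., 0.], [1., 1.]]),
                                         coefs, np.array([[0.5, 0.5]]))
    a, b = grid_coef[0]
    print("corrected:", a + b * 15.0)      # correct a raw 15 m/s forecast
    ```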

  5. Ground-Water Quality Data in the Owens and Indian Wells Valleys Study Unit, 2006: Results from the California GAMA Program

    USGS Publications Warehouse

    Densmore, Jill N.; Fram, Miranda S.; Belitz, Kenneth

    2009-01-01

    Ground-water quality in the approximately 1,630 square-mile Owens and Indian Wells Valleys study unit (OWENS) was investigated in September-December 2006 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in collaboration with the California State Water Resources Control Board (SWRCB). The Owens and Indian Wells Valleys study was designed to provide a spatially unbiased assessment of raw ground-water quality within the OWENS study unit, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 74 wells in Inyo, Kern, Mono, and San Bernardino Counties. Fifty-three of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 21 wells were selected to evaluate changes in water chemistry in areas of interest (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents [volatile organic compounds (VOCs), pesticides and pesticide degradates, pharmaceutical compounds, and potential wastewater-indicator compounds], constituents of special interest [perchlorate, N-nitrosodimethylamine (NDMA), and 1,2,3-trichloropropane (1,2,3-TCP)], naturally occurring inorganic constituents [nutrients, major and minor ions, and trace elements], radioactive constituents, and microbial indicators. Naturally occurring isotopes [tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water] and dissolved noble gases also were measured to help identify the source and age of the sampled ground water. This study evaluated the quality of raw ground water in the aquifer in the OWENS study unit and did not attempt to evaluate the quality of treated water delivered to consumers. Water supplied to consumers typically is treated after withdrawal from the ground, disinfected, and blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and non-regulatory thresholds established for aesthetic concerns (secondary maximum contamination levels, SMCL-CA) by CDPH. VOCs and pesticides were detected in samples from less than one-third of the grid wells; all detections were below health-based thresholds, and most were less than one-hundredth of threshold values. All detections of perchlorate and nutrients in samples from OWENS were below health-based thresholds. Most detections of trace elements in ground-water samples from OWENS wells were below health-based thresholds. In samples from the 53 grid wells, three constituents were detected at concentrations above USEPA maximum contaminant levels: arsenic in 5 samples, uranium in 4 samples, and fluoride in 1 sample.
Two constituents were detected at concentrations above CDPH notification levels (boron in 9 samples and vanadium in 1 sample), and two were above USEPA lifetime health advisory levels (molybdenum in 3 samples and strontium in 1 sample). Most of the samples from OWENS wells had concentrations of major elements, TDS, and trace elements below the non-enforceable standards set for aesthetic concerns. Samples from nine grid wells had concentrations of manganese, iron, or TDS above the SMCL-CAs.

  6. Precipitation characteristics of CAM5 physics at mesoscale resolution during MC3E and the impact of convective timescale choice

    DOE PAGES

    Gustafson, William I.; Ma, Po-Lun; Singh, Balwinder

    2014-12-17

    The physics suite of the Community Atmosphere Model version 5 (CAM5) has recently been implemented in the Weather Research and Forecasting (WRF) model to explore the behavior of the parameterization suite at high resolution and in the more controlled setting of a limited area model. The initial paper documenting this capability characterized the behavior for northern high latitude conditions. The present paper characterizes the precipitation characteristics for continental, mid-latitude, springtime conditions during the Midlatitude Continental Convective Clouds Experiment (MC3E) over the central United States. This period exhibited a range of convective conditions from those driven strongly by large-scale synoptic regimes to more locally driven convection. The study focuses on the precipitation behavior at 32 km grid spacing to better anticipate how the physics will behave in the global model when used at similar grid spacing in the coming years. Importantly, one change to the Zhang-McFarlane deep convective parameterization when implemented in WRF was to make the convective timescale parameter an explicit function of grid spacing. This study examines the sensitivity of the precipitation to the convective timescale, comparing the WRF default of 600 seconds for 32 km grid spacing with the value of 3600 seconds used for 2 degree grid spacing in CAM5. For comparison, an infinite convective timescale is also used. The results show that the 600 second timescale gives the most accurate precipitation over the central United States in terms of rain amount. However, this setting has the worst precipitation diurnal cycle, with the convection too tightly linked to the daytime surface heating. Longer timescales greatly improve the diurnal cycle but result in less precipitation and produce a low bias. An analysis of rain rates shows the accurate precipitation amount with the shorter timescale is assembled from an overabundance of drizzle combined with too few heavy rain events. With longer timescales one can improve the distribution, particularly for the extreme rain rates. Ultimately, without changing other aspects of the physics, one must choose between accurate diurnal timing and rain amount when choosing an appropriate convective timescale.

  8. Vertical garden for treating greywater

    NASA Astrophysics Data System (ADS)

    McDonald, Arthur Phaoenchoke; Montoya, Alejandro; Alonso-Marroquin, Fernando

    2017-06-01

    Recent increasing concerns over the effects of climate change have prompted much debate on the issue of long-term sustainability. An investigation was conducted into the feasibility of an off-grid housing unit, particularly in an Australian context. A pilot-scale 3 m × 2 m off-grid housing unit was constructed. Forecasts of water requirements were made, and rainwater harvesting and greywater recycling were investigated. A multi-container plant and sand biological filter was constructed and its filtration abilities investigated. The system met NSW government water reuse standards in terms of suspended solids and pH, achieving a total suspended solids removal efficiency of up to 99%.

  9. Scaling effects on spring phenology detections from MODIS data at multiple spatial resolutions over the contiguous United States

    NASA Astrophysics Data System (ADS)

    Peng, Dailiang; Zhang, Xiaoyang; Zhang, Bing; Liu, Liangyun; Liu, Xinjie; Huete, Alfredo R.; Huang, Wenjiang; Wang, Siyuan; Luo, Shezhou; Zhang, Xiao; Zhang, Helin

    2017-10-01

    Land surface phenology (LSP) has been widely retrieved from satellite data at multiple spatial resolutions, but the spatial scaling effects on LSP detection are poorly understood. In this study, we collected enhanced vegetation index (EVI, 250 m) data from the collection 6 MOD13Q1 product over the contiguous United States (CONUS) in 2007 and 2008, and generated a set of multiple spatial resolution EVI data by resampling 250 m to 2 × 250 m, 3 × 250 m, 4 × 250 m, …, 35 × 250 m. These EVI time series were then used to detect the start of spring season (SOS) at various spatial resolutions. Further, the SOS variation across scales was examined at each coarse resolution grid (35 × 250 m ≈ 8 km, referred to as the reference grid) and ecoregion. Finally, the SOS scaling effects were associated with landscape fragmentation, proportion of primary land cover type, and spatial variability of seasonal greenness variation within each reference grid. The results revealed the influences of satellite spatial resolutions on SOS retrievals and the related impact factors. Specifically, SOS significantly varied linearly or logarithmically across scales, although the relationship could be either positive or negative. The overall SOS values averaged from spatial resolutions between 250 m and 35 × 250 m at large ecosystem regions were generally similar, with a difference of less than 5 days, while the SOS values within the reference grid could differ greatly in some local areas. Moreover, the standard deviation of SOS across scales in the reference grid was less than 5 days in more than 70% of the area over the CONUS, and was smaller in northeastern than in southern and western regions. The SOS scaling effect was significantly associated with heterogeneity of vegetation properties characterized using landscape fragmentation, proportion of primary land cover type, and spatial variability of seasonal greenness variation, but the latter was the most important impact factor.
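
    The resampling step can be sketched by block-averaging n × n groups of 250 m pixels; mean aggregation is assumed here for illustration.

    ```python
    import numpy as np

    def block_average(evi, n):
        """Aggregate a 2-D EVI array from 250 m pixels to (n*250) m pixels by
        averaging n x n blocks (mean aggregation assumed for illustration).
        Edge rows/columns that do not fill a complete block are dropped."""
        rows, cols = (evi.shape[0] // n) * n, (evi.shape[1] // n) * n
        trimmed = evi[:rows, :cols]
        return trimmed.reshape(rows // n, n, cols // n, n).mean(axis=(1, 3))

    evi_250m = np.random.default_rng(1).uniform(0.1, 0.6, size=(70, 70))
    evi_1km = block_average(evi_250m, 4)      # 4 x 250 m = 1 km
    evi_8km = block_average(evi_250m, 35)     # 35 x 250 m ≈ the reference grid
    print(evi_250m.shape, evi_1km.shape, evi_8km.shape)
    ```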

  10. Solving Upwind-Biased Discretizations. 2; Multigrid Solver Using Semicoarsening

    NASA Technical Reports Server (NTRS)

    Diskin, Boris

    1999-01-01

    This paper studies a novel multigrid approach to the solution of a second-order upwind-biased discretization of the convection equation in two dimensions. This approach is based on semi-coarsening and well-balanced explicit correction terms added to coarse-grid operators to maintain on coarse grids the same cross-characteristic interaction as on the target (fine) grid. Colored relaxation schemes are used on all the levels, allowing a very efficient parallel implementation. The results of the numerical tests can be summarized as follows: 1) The residual asymptotic convergence rate of the proposed V(0, 2) multigrid cycle is about 3 per cycle. This convergence rate far surpasses the theoretical limit (4/3) predicted for standard multigrid algorithms using full coarsening. The reported efficiency does not deteriorate with increasing the cycle depth (number of levels) and/or refining the target-grid mesh spacing. 2) The full multigrid algorithm (FMG) with two V(0, 2) cycles on the target grid and just one V(0, 2) cycle on all the coarse grids always provides an approximate solution with the algebraic error less than the discretization error. Estimates of the total work in the FMG algorithm range between 18 and 30 minimal work units (depending on the target discretization). Thus, the overall efficiency of the FMG solver closely approaches (if it does not achieve) the goal of textbook multigrid efficiency. 3) A novel approach to deriving a discrete solution approximating the true continuous solution with a relative accuracy given in advance is developed. An adaptive multigrid algorithm (AMA) using comparison of the solutions on two successive target grids to estimate the accuracy of the current target-grid solution is defined. A desired relative accuracy is accepted as an input parameter. The final target grid on which this accuracy can be achieved is chosen automatically in the solution process. The actual relative accuracy of the discrete solution approximation obtained by AMA is always better than the required accuracy; the computational complexity of the AMA algorithm is (nearly) optimal (comparable with the complexity of the FMG algorithm applied to solve the problem on the optimally spaced target grid).
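
    The basic cycle structure behind such solvers (smooth, restrict the residual, solve the coarse problem, prolong and correct, smooth again) can be shown with a textbook two-grid correction scheme for a 1-D Poisson model problem; this is a generic illustration, not the paper's semicoarsening algorithm for upwind-biased convection.

    ```python
    import numpy as np

    def jacobi(u, f, h, sweeps, omega=2/3):
        """Weighted Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
        for _ in range(sweeps):
            u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (
                u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def two_grid(u, f, h, nu=2):
        """One two-grid correction cycle (textbook illustration only)."""
        u = jacobi(u, f, h, nu)                      # pre-smooth
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / h**2   # residual
        rc = r[::2].copy()                           # restrict (full weighting)
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        nc = rc.size
        # Solve the coarse residual equation directly (spacing is 2h).
        A = (np.diag(2 * np.ones(nc - 2)) - np.diag(np.ones(nc - 3), 1)
             - np.diag(np.ones(nc - 3), -1)) / (2 * h) ** 2
        ec = np.zeros(nc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])
        e = np.interp(np.arange(u.size), np.arange(0, u.size, 2), ec)  # prolong
        return jacobi(u + e, f, h, nu)               # post-smooth

    n = 65
    x = np.linspace(0, 1, n); h = x[1] - x[0]
    f = np.pi**2 * np.sin(np.pi * x)                 # exact solution sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = two_grid(u, f, h)
    print("max error:", np.abs(u - np.sin(np.pi * x)).max())
    ```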

  11. High-Throughput Characterization of Porous Materials Using Graphics Processing Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Jihan; Martin, Richard L.; Rübel, Oliver

    We have developed a high-throughput graphics processing units (GPU) code that can characterize a large database of crystalline porous materials. In our algorithm, the GPU is utilized to accelerate energy grid calculations where the grid values represent interactions (i.e., Lennard-Jones + Coulomb potentials) between gas molecules (i.e., CH4 and CO2) and the material's framework atoms. Using a parallel flood fill CPU algorithm, inaccessible regions inside the framework structures are identified and blocked based on their energy profiles. Finally, we compute the Henry coefficients and heats of adsorption through statistical Widom insertion Monte Carlo moves in the domain restricted to the accessible space. The code offers significant speedup over a single core CPU code and allows us to characterize a set of porous materials at least an order of magnitude larger than ones considered in earlier studies. For structures selected from such a prescreening algorithm, full adsorption isotherms can be calculated by conducting multiple grand canonical Monte Carlo simulations concurrently within the GPU.
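
    The energy-grid step can be sketched on the CPU as a Lennard-Jones energy evaluated for a point probe at every grid point (the GPU code maps grid points to threads and also includes Coulomb terms and periodic images); the framework coordinates and LJ parameters below are made up.

    ```python
    import numpy as np

    def lj_energy_grid(frame_xyz, epsilon, sigma, box, n=32):
        """Lennard-Jones energy of a point probe on an n^3 grid spanning `box`
        (CPU sketch of the screening step; periodic images and Coulomb terms
        are omitted for brevity)."""
        axes = [np.linspace(0, b, n, endpoint=False) for b in box]
        gx, gy, gz = np.meshgrid(*axes, indexing="ij")
        grid_pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)
        d = np.linalg.norm(grid_pts[:, None, :] - frame_xyz[None, :, :], axis=2)
        d = np.maximum(d, 1e-3)                    # avoid the r -> 0 singularity
        u = 4 * epsilon * ((sigma / d) ** 12 - (sigma / d) ** 6)
        return u.sum(axis=1).reshape(n, n, n)

    # Made-up toy framework: 8 atoms in a 10 Å cubic cell, generic parameters.
    rng = np.random.default_rng(2)
    energies = lj_energy_grid(rng.uniform(0, 10, (8, 3)),
                              epsilon=0.05, sigma=3.4, box=(10, 10, 10), n=16)
    print("low-energy fraction:", (energies < 15.0).mean())   # crude threshold
    ```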

  12. Estimates of Monthly Ground-Water Recharge to the Yakima River Basin Aquifer System, Washington, 1960-2001, for Current Land-Use and Land-Cover Conditions

    USGS Publications Warehouse

    Vaccaro, J.J.; Olsen, T.D.

    2007-01-01

    This data set is a unique ID grid with a unique value per Hydrologic Response Unit (HRU) per basin, referenced to the estimated ground-water recharge for current conditions in the Yakima Basin Aquifer System (USGS report SIR 2007-5007). There are 78,144 unique values in total. This grid made it easy to provide estimates of monthly ground-water recharge for water years 1960-2001 in an electronic format for water managers, planners, and hydrologists that could be related back to a spatially referenced grid by the unique ID.
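
    The unique-ID mechanism can be sketched as a lookup that paints tabular values back onto the ID raster; field names and values are illustrative.

    ```python
    import numpy as np

    def map_table_to_grid(id_grid, table_ids, table_values, nodata=np.nan):
        """Paint tabular values back onto a raster of unique HRU/basin IDs
        (illustrative sketch; field names in the released data set may differ)."""
        lookup = dict(zip(table_ids, table_values))
        out = np.full(id_grid.shape, nodata, dtype=float)
        for uid, val in lookup.items():
            out[id_grid == uid] = val
        return out

    # Toy grid with three unique IDs and one month of recharge (mm) per ID.
    id_grid = np.array([[101, 101, 102],
                        [101, 103, 102]])
    recharge = map_table_to_grid(id_grid, [101, 102, 103], [12.5, 4.0, 30.2])
    print(recharge)
    ```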

  13. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and less computation time than required by the tridiagonal method. The performance of the adaptive grid method tends to degrade as the solution process proceeds in time, but still remains faster than the tridiagonal scheme.
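
    The Picard treatment of the nonlinearity can be sketched with one implicit-Euler step of viscous Burgers' equation on a uniform grid, lagging the advection coefficient between iterations; this is an illustrative sketch, not the adaptive-grid code described.

    ```python
    import numpy as np

    def burgers_picard_step(u, dt, dx, nu, tol=1e-8, max_iter=50):
        """One implicit-Euler step of u_t + u u_x = nu u_xx with Picard
        (lagged-coefficient) iteration on a uniform grid; Dirichlet boundary
        values are held fixed. Illustrative sketch, not the paper's solver."""
        n = u.size
        u_new = u.copy()
        for _ in range(max_iter):
            a = u_new.copy()                   # lagged advection coefficient
            A = np.zeros((n, n)); b = u.copy()
            A[0, 0] = A[-1, -1] = 1.0; b[0], b[-1] = u[0], u[-1]
            for i in range(1, n - 1):
                A[i, i - 1] = -dt * (a[i] / (2 * dx) + nu / dx**2)
                A[i, i]     = 1 + 2 * dt * nu / dx**2
                A[i, i + 1] = dt * (a[i] / (2 * dx) - nu / dx**2)
            u_next = np.linalg.solve(A, b)
            if np.abs(u_next - u_new).max() < tol:
                return u_next
            u_new = u_next
        return u_new

    x = np.linspace(0, 1, 101); u0 = np.sin(np.pi * x)
    u1 = burgers_picard_step(u0, dt=0.01, dx=x[1] - x[0], nu=0.05)
    print("max |u| after one step:", np.abs(u1).max())
    ```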

  14. Modeling and control of fuel cell based distributed generation systems

    NASA Astrophysics Data System (ADS)

    Jung, Jin Woo

    This dissertation presents circuit models and control algorithms of fuel cell based distributed generation systems (DGS) for two DGS topologies. In the first topology, each DGS unit utilizes a battery in parallel to the fuel cell in a standalone AC power plant and a grid-interconnection. In the second topology, a Z-source converter, which employs both the L and C passive components and shoot-through zero vectors instead of the conventional DC/DC boost power converter in order to step up the DC-link voltage, is adopted for a standalone AC power supply. In Topology 1, two applications are studied: a standalone power generation (Single DGS Unit and Two DGS Units) and a grid-interconnection. First, dynamic model of the fuel cell is given based on electrochemical process. Second, two full-bridge DC to DC converters are adopted and their controllers are designed: an unidirectional full-bridge DC to DC boost converter for the fuel cell and a bidirectional full-bridge DC to DC buck/boost converter for the battery. Third, for a three-phase DC to AC inverter without or with a Delta/Y transformer, a discrete-time state space circuit model is given and two discrete-time feedback controllers are designed: voltage controller in the outer loop and current controller in the inner loop. And last, for load sharing of two DGS units and power flow control of two DGS units or the DGS connected to the grid, real and reactive power controllers are proposed. Particularly, for the grid-connected DGS application, a synchronization issue between an islanding mode and a paralleling mode to the grid is investigated, and two case studies are performed. To demonstrate the proposed circuit models and control strategies, simulation test-beds using Matlab/Simulink are constructed for each configuration of the fuel cell based DGS with a three-phase AC 120 V (L-N)/60 Hz/50 kVA and various simulation results are presented. In Topology 2, this dissertation presents system modeling, modified space vector PWM implementation (MSVPWM) and design of a closed-loop controller of the Z-source converter which utilizes L and C components and shoot-through zero vectors for the standalone AC power generation. The fuel cell system is modeled by an electrical R-C circuit in order to include slow dynamics of the fuel cells and a voltage-current characteristic of a cell is also considered. A discrete-time state space model is derived to implement digital control and a space vector pulse-width modulation (SVPWM) technique is modified to realize the shoot-through zero vectors that boost the DC-link voltage. Also, three discrete-time feedback controllers are designed: a discrete-time optimal voltage controller, a discrete-time sliding mode current controller, and a discrete-time PI DC-link voltage controller. Furthermore, an asymptotic observer is used to reduce the number of sensors and enhance the reliability of the system. To demonstrate the analyzed circuit model and proposed control strategy, various simulation results using Matlab/Simulink are presented under both light/heavy loads and linear/nonlinear loads for a three-phase AC 208 V (L-L)/60 Hz/10 kVA.

  15. Abstract Machines for Polymorphous Computing

    DTIC Science & Technology

    2007-12-01

    ...models and LLCs have been developed for Raw, MONARCH [18][19], TRIPS [20][21], and Smart Memories [22][23]. These research projects were conducted...used here. In our approach on Raw, two key concepts are used to fully leverage the Raw architecture [34]. First, the tile grid is viewed as a

  16. Status and understanding of groundwater quality in the Bear Valley and Lake Arrowhead Watershed Study Unit, 2010: California GAMA Priority Basin Project

    USGS Publications Warehouse

    Mathany, Timothy; Burton, Carmen

    2017-06-20

    Groundwater quality in the 112-square-mile Bear Valley and Lake Arrowhead Watershed (BEAR) study unit was investigated as part of the Priority Basin Project (PBP) of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The study unit comprises two study areas (Bear Valley and Lake Arrowhead Watershed) in southern California in San Bernardino County. The GAMA-PBP is conducted by the California State Water Resources Control Board (SWRCB) in cooperation with the U.S. Geological Survey (USGS) and the Lawrence Livermore National Laboratory.The GAMA BEAR study was designed to provide a spatially balanced, robust assessment of the quality of untreated (raw) groundwater from the primary aquifer systems in the two study areas of the BEAR study unit. The assessment is based on water-quality collected by the USGS from 38 sites (27 grid and 11 understanding) during 2010 and on water-quality data from the SWRCB-Division of Drinking Water (DDW) database. The primary aquifer system is defined by springs and the perforation intervals of wells listed in the SWRCB-DDW water-quality database for the BEAR study unit.This study included two types of assessments: (1) a status assessment, which characterized the status of the quality of the groundwater resource as of 2010 by using data from samples analyzed for volatile organic compounds, pesticides, and naturally present inorganic constituents, such as major ions and trace elements, and (2) an understanding assessment, which evaluated the natural and human factors potentially affecting the groundwater quality. The assessments were intended to characterize the quality of groundwater resources in the primary aquifer system of the BEAR study unit, not the treated drinking water delivered to consumers. Bear Valley study area and the Lake Arrowhead Watershed study area were also compared statistically on the basis of water-quality results and factors potentially affecting the groundwater quality.Relative concentrations (RCs), which are sample concentration of a particular constituent divided by its associated health- or aesthetic-based benchmark concentrations, were used for evaluating the groundwater quality for those constituents that have Federal or California regulatory or non-regulatory benchmarks for drinking-water quality. An RC greater than 1.0 indicates a concentration greater than a benchmark. Organic (volatile organic compounds and pesticides) and special-interest (perchlorate) constituent RCs were classified as “high” (RC greater than 1.0), “moderate” (RC less than or equal to 1.0 and greater than 0.1), or “low” (RC less than or equal to 0.1). For inorganic (radioactive, trace element, major ion, and nutrient) constituents, the boundary between low and moderate RCs was set at 0.5.Aquifer-scale proportion was used as the primary metric in the status assessment for evaluating groundwater quality at the study-unit scale or for its component areas. High aquifer-scale proportion was defined as the percentage of the area of the primary aquifer system with a RC greater than 1.0 for a particular constituent or class of constituents; the percentage is based on area rather than volume. Moderate and low aquifer-scale proportions were defined as the percentage of the primary aquifer system with moderate and low RCs, respectively. 
A spatially weighted statistical approach was used to evaluate aquifer-scale proportions for individual constituents and classes of constituents. The status assessment for the Bear Valley study area found that inorganic constituents with health-based benchmarks were detected at high RCs in 9.0 percent of the primary aquifer system and at moderate RCs in 13 percent. The high RCs of inorganic constituents primarily reflected high aquifer-scale proportions of fluoride (in 5.4 percent of the primary aquifer system) and arsenic (3.6 percent). The RCs of organic constituents with health-based benchmarks were high in 1.0 percent of the primary aquifer system, moderate in 8.1 percent, and low in 70 percent. Organic constituents were detected in 79 percent of the primary aquifer system. Two groups of organic constituents and two individual organic constituents were detected at frequencies greater than 10 percent of samples from the USGS grid sites: trihalomethanes (THMs), solvents, methyl tert-butyl ether (MTBE), and simazine. The special-interest constituent perchlorate was detected in 93 percent of the primary aquifer system; it was detected at moderate RCs in 7.1 percent and at low RCs in 86 percent. The status assessment in the Lake Arrowhead Watershed study area showed that inorganic constituents with human-health benchmarks were detected at high RCs in 25 percent of the primary aquifer system and at moderate RCs in 41 percent. The high aquifer-scale proportion of inorganic constituents primarily reflected high aquifer-scale proportions of radon-222 (in 62 percent of the primary aquifer system) and uranium (26 percent). RCs of organic constituents with health-based benchmarks were moderate in 7.7 percent of the primary aquifer system and low in 46 percent. Organic constituents were detected in 54 percent of the primary aquifer system. The only organic constituents that were detected at frequencies greater than 10 percent of samples from the USGS grid sites were THMs. Perchlorate was detected in 62 percent of the primary aquifer system at uniformly low RCs. The second component of this study, the understanding assessment, identified the natural and human factors that could have affected the groundwater quality in the BEAR study unit by evaluating statistical correlations between water-quality constituents and potential explanatory factors. The potential explanatory factors evaluated were land use (including density of septic tanks and leaking or formerly leaking underground fuel tanks), site type, aquifer lithology, well construction (well depth and depth to the top-of-perforated interval), elevation, aridity index, groundwater-age distribution, and oxidation-reduction condition (including pH and dissolved oxygen concentration). Results of the statistical evaluations were used to explain the distribution of constituents in groundwater of the BEAR study unit. In the Bear Valley study area, high and moderate RCs of fluoride were found in sites known to be influenced by hydrothermal conditions or that had historically high concentrations of fluoride. The high RC of arsenic can likely be attributed to desorption of arsenic from aquifer sediments saturated in old groundwater with high pH under reducing conditions. The THMs were detected more frequently at USGS grid sites that were wells, part of a large urban water system, and surrounded by urban land use.
Solvents, MTBE, and simazine were all detected more frequently at USGS grid sites that were wells with a greater urban percentage of surrounding land use and that accessed older groundwater than other USGS grid sites. Comparison between the observed and predicted detection frequencies of perchlorate at USGS grid sites indicated that anthropogenic sources could have contributed to low levels of perchlorate in the groundwater of the Bear Valley study area. In the Lake Arrowhead Watershed study area, high and moderate RCs of radon-222 and uranium can be attributed to older groundwater from the granitic fractured-rock primary aquifer system. Low RCs of THMs were detected at USGS grid sites that were wells and part of small water systems. The similarities between the observed and predicted detection frequencies of perchlorate in samples from USGS grid sites indicated that the source and distribution of perchlorate were most likely attributable to precipitation (rain and snow), with minimal, if any, contribution from anthropogenic sources.
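
    The RC classification rules described above (organic and special-interest constituents: high above 1.0, moderate between 0.1 and 1.0, low at or below 0.1; inorganic constituents: low/moderate boundary at 0.5) can be expressed compactly. A minimal sketch, with the constituent values and benchmarks being hypothetical illustrations rather than BEAR study data:

```python
def relative_concentration(sample_conc, benchmark_conc):
    """RC = sample concentration divided by its health- or aesthetic-based benchmark."""
    return sample_conc / benchmark_conc

def classify_rc(rc, constituent_class):
    """Classify an RC as 'high', 'moderate', or 'low'.

    Organic and special-interest constituents use a low/moderate boundary of 0.1;
    inorganic constituents use 0.5. The high boundary is 1.0 for both.
    """
    low_moderate_boundary = 0.5 if constituent_class == "inorganic" else 0.1
    if rc > 1.0:
        return "high"
    if rc > low_moderate_boundary:
        return "moderate"
    return "low"

# Illustrative values only (not data from the BEAR study unit).
print(classify_rc(relative_concentration(12.0, 10.0), "inorganic"))  # 'high'
print(classify_rc(relative_concentration(3.0, 10.0), "organic"))     # 'moderate'
```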

  17. Emissions & Generation Resource Integrated Database (eGRID), eGRID2010

    EPA Pesticide Factsheets

The Emissions & Generation Resource Integrated Database (eGRID) is a comprehensive source of data on the environmental characteristics of almost all electric power generated in the United States. These environmental characteristics include air emissions for nitrogen oxides, sulfur dioxide, carbon dioxide, methane, and nitrous oxide; emissions rates; net generation; resource mix; and many other attributes. eGRID2010 contains the complete release of year 2007 data, as well as years 2005 and 2004 data. Excel spreadsheets, full documentation, summary data, eGRID subregion and NERC region representational maps, and GHG emission factors are included in this data set. The archived data in eGRID2002 contain years 1996 through 2000 data. For year 2007 data, the first Microsoft Excel workbook, Plant, contains boiler, generator, and plant spreadsheets. The second Microsoft Excel workbook, Aggregation, contains aggregated data by state, electric generating company, parent company, power control area, eGRID subregion, NERC region, and U.S. total levels. The third Microsoft Excel workbook, ImportExport, contains state import-export data, as well as U.S. generation and consumption data for years 2007, 2005, and 2004. For eGRID data for years 2005 and 2004, a user-friendly web application, eGRIDweb, is available to select, view, print, and export specified data.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seal, Brian; Huque, Aminul; Rogers, Lindsey

In 2011, EPRI began a four-year effort under the Department of Energy (DOE) SunShot Initiative Solar Energy Grid Integration Systems - Advanced Concepts (SEGIS-AC) to demonstrate smart grid ready inverters with utility communication. The objective of the project was to successfully implement and demonstrate effective utilization of inverters with grid support functionality to capture the full value of distributed photovoltaic (PV). The project leveraged ongoing investments and expanded PV inverter capabilities, to enable grid operators to better utilize these grid assets. Developing and implementing key elements of PV inverter grid support capabilities will increase the distribution system’s capacity for higher penetration levels of PV, while reducing the cost. The project team included EPRI, Yaskawa-Solectria Solar, Spirae, BPL Global, DTE Energy, National Grid, Pepco, EDD, NPPT and NREL. The project was divided into three phases: development, deployment, and demonstration. Within each phase, the key areas included: head-end communications for Distributed Energy Resources (DER) at the utility operations center; methods for coordinating DER with existing distribution equipment; back-end PV plant master controller; and inverters with smart-grid functionality. Four demonstration sites were chosen in three regions of the United States with different types of utility operating systems and implementations of utility-scale PV inverters. This report summarizes the project and findings from field demonstration at three utility sites.

  19. Connection technology of HPTO type WECs and DC nano grid in island

    NASA Astrophysics Data System (ADS)

    Wang, Kun-lin; Tian, Lian-fang; You, Ya-ge; Wang, Xiao-hong; Sheng, Song-wei; Zhang, Ya-qun; Ye, Yin

    2016-07-01

Because wave energy fluctuates greatly, it endangers the security of the power grid, especially a micro grid on an island. A DC nano grid supported by batteries is proposed to smooth the output power of wave energy converters (WECs). Connecting renewable energy converters to a DC grid is thus a new subject. The characteristics of WECs are very important to the connection technology between HPTO-type WECs and a DC nano grid. The hydraulic power take-off (HPTO) system is the core unit of the largest category of WECs; it supplies suitable damping for a WEC to absorb wave energy and converts the captured wave energy to electricity. The HPTO is divided into a hydraulic energy storage system (HESS) and a hydraulic power generation system (HPGS). A primary numerical model for the HPGS is established in this paper. Three important basic characteristics of the HPGS are deduced, which reveal how the generator load determines the HPGS rotation rate. Therefore, the connector between an HPTO-type WEC and a DC nano grid could be an uncontrollable rectifier with high reliability or a controllable power converter with high efficiency, such as an interleaved boost converter (IBC). The research shows that WECs can be connected to a DC nano grid very flexibly, but bypass resistance loads are indispensable for the security of the WECs.

  20. An updated global grid point surface air temperature anomaly data set: 1851--1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepanski, R.J.; Boden, T.A.; Daniels, R.C.

    1991-10-01

This document presents land-based monthly surface air temperature anomalies (departures from a 1951--1970 reference period mean) on a 5{degree} latitude by 10{degree} longitude global grid. Monthly surface air temperature anomalies (departures from a 1957--1975 reference period mean) for the Antarctic (grid points from 65{degree}S to 85{degree}S) are presented in a similar way as a separate data set. The data were derived primarily from the World Weather Records and the archives of the United Kingdom Meteorological Office. This long-term record of temperature anomalies may be used in studies addressing possible greenhouse-gas-induced climate changes. To date, the data have been employed in generating regional, hemispheric, and global time series for determining whether recent (i.e., post-1900) warming trends have taken place. This document also presents the monthly mean temperature records for the individual stations that were used to generate the set of gridded anomalies. The periods of record vary by station. Northern Hemisphere station data have been corrected for inhomogeneities, while Southern Hemisphere data are presented in uncorrected form. 14 refs., 11 figs., 10 tabs.

  1. Electric motorcycle charging station powered by solar energy

    NASA Astrophysics Data System (ADS)

    Siriwattanapong, Akarawat; Chantharasenawong, Chawin

    2018-01-01

This research proposes a design and verification of an off-grid photovoltaic system (PVS) for an electric motorcycle charging station to be located at King Mongkut’s University of Technology Thonburi, Bangkok, Thailand. The system is designed to work independently (off-grid) and it must be able to fully charge the batteries of a typical passenger electric motorcycle every evening. A 1,000W Toyotron electric motorcycle is chosen for this study. It carries five units of 12.8V 20Ah batteries in series; hence its maximum energy requirement per day is 1,200Wh. An assessment of solar irradiation data and the Generation Factor in Bangkok, Thailand suggests that the charging system consists of one 500W PV panel, an MPPT charge controller, a 48V 150Ah battery, a 1,000W DC to AC inverter and other safety devices such as fuses and breakers. An experiment is conducted to verify the viability of the off-grid PVS charging station by collecting the total daily energy generation data in the rainy season and winter. The data suggest that the designed off-grid solar power charging station for electric motorcycles is able to supply sufficient energy for daily charging requirements.
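
    The sizing logic sketched in the abstract (a daily energy budget derived from the battery pack, then a panel rating from local solar availability) can be illustrated with a short calculation. The generation factor and system-loss values below are assumptions for illustration only, not figures taken from the paper:

```python
# Battery pack of the electric motorcycle: five 12.8 V, 20 Ah cells in series.
pack_voltage_v = 5 * 12.8            # 64 V nominal
pack_capacity_ah = 20.0              # series connection keeps Ah constant
daily_energy_wh = pack_voltage_v * pack_capacity_ah  # = 1280 Wh; the paper budgets ~1200 Wh/day

# Assumed for illustration: a generation factor (kWh produced per kWp per day in Bangkok)
# and an overall loss factor covering the charge controller, battery round trip, and inverter.
generation_factor = 3.5
system_efficiency = 0.7

required_pv_wp = daily_energy_wh / (generation_factor * system_efficiency)
print(f"Required PV capacity: {required_pv_wp:.0f} Wp")
# With these assumed values the result is on the order of 500 Wp, consistent with the
# single 500 W panel selected in the study.
```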

  2. An approach to the parametric design of ion thrusters

    NASA Technical Reports Server (NTRS)

    Wilbur, Paul J.; Beattie, John R.; Hyman, Jay, Jr.

    1988-01-01

    A methodology that can be used to determine which of several physical constraints can limit ion thruster power and thrust, under various design and operating conditions, is presented. The methodology is exercised to demonstrate typical limitations imposed by grid system span-to-gap ratio, intragrid electric field, discharge chamber power per unit beam area, screen grid lifetime, and accelerator grid lifetime constraints. Limitations on power and thrust for a thruster defined by typical discharge chamber and grid system parameters when it is operated at maximum thrust-to-power are discussed. It is pointed out that other operational objectives such as optimization of payload fraction or mission duration can be substituted for the thrust-to-power objective and that the methodology can be used as a tool for mission analysis.

  3. Refining area of occupancy to address the modifiable areal unit problem in ecology and conservation.

    PubMed

    Moat, Justin; Bachman, Steven P; Field, Richard; Boyd, Doreen S

    2018-05-23

    The 'modifiable areal unit problem' is prevalent across many aspects of spatial analysis within ecology and conservation. The problem is particularly manifest when calculating metrics for extinction risk estimation, for example, area of occupancy (AOO). Although embedded into the International Union for the Conservation of Nature (IUCN) Red List criteria, AOO is often not used or is poorly applied. Here we evaluate new and existing methods for calculating AOO from occurrence records and present a method for determining the minimum AOO using a uniform grid. We evaluate the grid cell shape, grid origin and grid rotation with both real-world and simulated data, reviewing the effects on AOO values, and possible impacts for species already assessed on the IUCN Red List. We show that AOO can vary by up to 80% and a ratio of cells to points of 1:1.21 gives the maximum variation in the number of occupied cells. These findings potentially impact 3% of existing species on the IUCN Red List, as well as species not yet assessed. We show that a new method that combines both grid rotation and moving grid origin gives fast, robust and reproducible results and, in the majority of cases, achieves the minimum AOO. As well as reporting minimum AOO, we outline a confidence interval which should be incorporated into existing tools that support species risk assessment. We also make further recommendations for reporting AOO and other areal measurements within ecology, leading to more robust methods for future species risk assessment. This article is protected by copyright. All rights reserved. © 2018 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
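
    A minimal sketch of the grid-based AOO calculation that the authors refine: count occupied grid cells while searching over grid origins and rotations to approach the minimum. The 2 km cell size follows common IUCN guidance, and the search resolution and example coordinates are arbitrary choices for illustration, not the authors' implementation:

```python
import numpy as np

def occupied_cells(points, cell_size, origin=(0.0, 0.0), angle_deg=0.0):
    """Count grid cells occupied by projected occurrence points (x, y in metres)."""
    theta = np.radians(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    xy = (points @ rot.T) - np.asarray(origin)   # rotating the points is equivalent to rotating the grid
    cells = set(map(tuple, np.floor(xy / cell_size).astype(int)))
    return len(cells)

def minimum_aoo(points, cell_size=2000.0, n_origins=10, n_angles=18):
    """Approximate the minimum AOO (km^2) over grid origins and rotations."""
    best = np.inf
    for angle in np.linspace(0.0, 90.0, n_angles, endpoint=False):  # square grid: 0-90 degrees suffices
        for dx in np.linspace(0.0, cell_size, n_origins, endpoint=False):
            for dy in np.linspace(0.0, cell_size, n_origins, endpoint=False):
                best = min(best, occupied_cells(points, cell_size, (dx, dy), angle))
    return best * (cell_size / 1000.0) ** 2

# Illustrative occurrence records (projected coordinates in metres).
pts = np.array([[0, 0], [1500, 300], [5200, 4100], [5300, 3900]], dtype=float)
print(minimum_aoo(pts), "km^2")
```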

  4. Regional models of the gravity field from terrestrial gravity data of heterogeneous quality and density

    NASA Astrophysics Data System (ADS)

    Talvik, Silja; Oja, Tõnis; Ellmann, Artu; Jürgenson, Harli

    2014-05-01

Gravity field models at a regional scale are needed for a number of applications, for example national geoid computation, processing of precise levelling data and geological modelling. Thus the methods applied for modelling the gravity field from surveyed gravimetric information need to be considered carefully. The influence of using different gridding methods, the inclusion of unit or realistic weights and indirect gridding of free air anomalies (FAA) are investigated in the study. Known gridding methods such as kriging (KRIG), least squares collocation (LSCO), continuous curvature (CCUR) and optimal Delaunay triangulation (ODET) are used for production of gridded gravity field surfaces. As the quality of data collected varies considerably depending on the methods and instruments available or used in surveying, it is important to somehow weigh the input data. This puts additional demands on data maintenance, as accuracy information needs to be available for each data point participating in the modelling, which is complicated by older gravity datasets where the uncertainties of not only gravity values but also supplementary information such as survey point position are not always known very accurately. A number of gravity field applications (e.g. geoid computation) demand for an FAA model, the acquisition of which is also investigated. Instead of direct gridding it could be more appropriate to proceed with indirect FAA modelling using a Bouguer anomaly grid to reduce the effect of topography on the resulting FAA model (e.g. near terraced landforms). The inclusion of different gridding methods, weights and indirect FAA modelling helps to improve gravity field modelling methods. It becomes possible to estimate the impact of varying methodological approaches on the gravity field modelling as statistical output is compared. Such knowledge helps assess the accuracy of gravity field models and their effect on the aforementioned applications.
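
    The indirect FAA gridding mentioned above can be sketched as follows: convert point FAA values to simple Bouguer anomalies (which vary more smoothly with topography), grid those, and restore the Bouguer plate term on the grid. The plate-correction constant and density are standard textbook values, scipy's griddata stands in for the kriging/collocation methods compared in the study, and the input file name is hypothetical:

```python
import numpy as np
from scipy.interpolate import griddata

# Point data: position (x, y in metres), free-air anomaly (mGal), station height (m).
x, y, faa, h = np.loadtxt("gravity_points.txt", unpack=True)  # hypothetical input file

# Simple Bouguer plate correction: 2*pi*G*rho*h ~= 0.0419 * rho[g/cm^3] * h[m] mGal.
rho = 2.67
plate = 0.0419 * rho * h
ba = faa - plate                      # simple Bouguer anomaly at the points

# Grid the smoother Bouguer anomaly, then restore the plate term using a height grid.
xi, yi = np.meshgrid(np.linspace(x.min(), x.max(), 200),
                     np.linspace(y.min(), y.max(), 200))
ba_grid = griddata((x, y), ba, (xi, yi), method="cubic")
h_grid = griddata((x, y), h, (xi, yi), method="cubic")   # in practice a DEM would be used here
faa_grid = ba_grid + 0.0419 * rho * h_grid
```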

  5. Online Optimization Method for Operation of Generators in a Micro Grid

    NASA Astrophysics Data System (ADS)

    Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi

Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems, and fuel cells have been performed against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads, and a storage battery, is expected to be one of the new operation systems for distributed generators. However, since precipitous load fluctuations occur in a micro grid because of its smaller capacity compared with the conventional power system, high-accuracy load forecasting and a control scheme to balance supply and demand are needed. Namely, it is necessary to improve the precision of operation in a micro grid by observing load fluctuations and correcting the start-stop schedule and output of generators online. However, it is not easy to determine the operation schedule of each generator in a short time, because the problem of determining the start-up, shut-down, and output of each generator in a micro grid is a mixed-integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after all unit commitment patterns of each generator that satisfy the minimum up time and minimum down time constraints are enumerated, the optimal schedule and output of the generators are determined under the other operational constraints by using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
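
    A minimal sketch of the enumeration step described above: all on/off schedules for a small set of generators are enumerated, schedules violating minimum up/down times are discarded, and a simple merit-order dispatch stands in for the PSO output optimization used in the paper (over-generation at minimum output is not penalized here). The generator data and load profile are invented for illustration:

```python
from itertools import product

T = 6                                   # scheduling periods
gens = [                                # hypothetical units: limits (kW), cost, min up/down times
    {"pmin": 10, "pmax": 50, "cost": 0.10, "up": 2, "down": 2},
    {"pmin": 5,  "pmax": 30, "cost": 0.15, "up": 1, "down": 1},
]
load = [20, 35, 60, 70, 55, 30]         # net load after PV output, per period

def run_lengths_ok(pattern, min_up, min_down):
    """Check that every contiguous on/off run respects the minimum up/down times."""
    runs, prev, length = [], pattern[0], 1
    for s in pattern[1:]:
        if s == prev:
            length += 1
        else:
            runs.append((prev, length)); prev, length = s, 1
    runs.append((prev, length))
    return all(l >= (min_up if s == 1 else min_down) for s, l in runs)

best_cost, best_schedule = float("inf"), None
for patterns in product(product([0, 1], repeat=T), repeat=len(gens)):
    if not all(run_lengths_ok(p, g["up"], g["down"]) for p, g in zip(patterns, gens)):
        continue
    cost, feasible = 0.0, True
    for t in range(T):
        remaining = load[t]
        # Merit-order dispatch of committed units (the paper refines outputs with PSO).
        for g, on in sorted(zip(gens, [pt[t] for pt in patterns]), key=lambda gp: gp[0]["cost"]):
            if not on:
                continue
            out = min(max(remaining, g["pmin"]), g["pmax"])
            remaining -= out
            cost += out * g["cost"]
        if remaining > 1e-9:            # committed units cannot cover the load
            feasible = False
            break
    if feasible and cost < best_cost:
        best_cost, best_schedule = cost, patterns

print(best_schedule, best_cost)
```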

  6. IEC Thrusters for Space Probe Applications and Propulsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miley, George H.; Momota, Hiromu; Wu Linchun

Earlier conceptual design studies (Bussard, 1990; Miley et al., 1998; Burton et al., 2003) have described Inertial Electrostatic Confinement (IEC) fusion propulsion to provide a high-power density fusion propulsion system capable of aggressive deep space missions. However, this requires large multi-GW thrusters and a long-term development program. As a first step towards this goal, a progression of near-term IEC thrusters, starting with a 1-10 kWe electrically-driven IEC jet thruster for satellites, is considered here. The initial electrically-powered unit uses a novel multi-jet plasma thruster based on spherical IEC technology with electrical input power from a solar panel. In this spherical configuration, Xe ions are generated and accelerated towards the center of double concentric spherical grids. An electrostatic potential well structure is created in the central region, providing ion trapping. Several enlarged grid openings extract intense quasi-neutral plasma jets. A variable specific impulse in the range of 1000-4000 seconds is achieved by adjusting the grid potential. This design provides high maneuverability for satellite and small space probe operations. The multiple jets, combined with gimbaled auxiliary equipment, provide precision changes in thrust direction. The IEC electrical efficiency can match or exceed efficiencies of conventional Hall Current Thrusters (HCTs) while offering advantages such as reduced grid erosion (long life time), reduced propellant leakage losses (reduced fuel storage), and a very high power-to-weight ratio. The unit is ideally suited for probing missions. The primary propulsive jet enables delicate maneuvering close to an object. Then simply opening a second jet offset 180 degrees from the propulsive one provides a 'plasma analytic probe' for interrogation of the object.
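
    The quoted specific-impulse range follows, to first order, from the ideal exhaust velocity of singly charged xenon ions accelerated through the grid potential. A short, idealized calculation (no divergence or propellant-utilization losses, so only a rough illustration of the scaling):

```python
import math

Q_E = 1.602176634e-19              # elementary charge, C
M_XE = 131.293 * 1.66053907e-27    # xenon ion mass, kg
G0 = 9.80665                       # standard gravity, m/s^2

def ideal_isp(grid_potential_v):
    """Ideal specific impulse (s) for singly charged Xe ions through a net accelerating potential."""
    v_exhaust = math.sqrt(2.0 * Q_E * grid_potential_v / M_XE)
    return v_exhaust / G0

for volts in (70, 250, 600, 1100):
    print(f"{volts:5d} V  ->  Isp ~ {ideal_isp(volts):4.0f} s")
# Roughly 70 V to 1.1 kV spans the 1000-4000 s range quoted for the IEC jet thruster,
# before accounting for real-thruster losses.
```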

  7. Development of an Asset Value Map for Disaster Risk Assessment in China by Spatial Disaggregation Using Ancillary Remote Sensing Data.

    PubMed

    Wu, Jidong; Li, Ying; Li, Ning; Shi, Peijun

    2018-01-01

The extent of economic losses due to a natural hazard and disaster depends largely on the spatial distribution of asset values in relation to the hazard intensity distribution within the affected area. Given that statistical data on asset value are collected by administrative units in China, generating spatially explicit asset exposure maps remains a key challenge for rapid postdisaster economic loss assessment. The goal of this study is to introduce a top-down (or downscaling) approach to disaggregate administrative-unit level asset value to grid-cell level. To do so, finding the highly correlated "surrogate" indicators is the key. A combination of three data sets (nighttime light grid, LandScan population grid, and road density grid) is used as ancillary asset density distribution information for spatializing the asset value. As a result, a high spatial resolution asset value map of China for 2015 is generated. The spatial data set contains aggregated economic value at risk at 30 arc-second spatial resolution. Accuracy of the spatial disaggregation reflects redistribution errors introduced by the disaggregation process as well as errors from the original ancillary data sets. The overall accuracy of the results proves to be promising. The example of using the developed disaggregated asset value map in exposure assessment of watersheds demonstrates that the data set offers immense analytical flexibility for overlay analysis according to the hazard extent. This product will help current efforts to analyze spatial characteristics of exposure and to uncover the contributions of both physical and social drivers of natural hazard and disaster across space and time. © 2017 Society for Risk Analysis.
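
    The top-down disaggregation amounts to a proportional allocation: within each administrative unit, the unit's total asset value is distributed to its grid cells in proportion to a composite weight built from the ancillary layers. The weighting scheme below (a simple sum of normalized layers) is an illustration, not the exact combination used in the paper:

```python
import numpy as np

def disaggregate(admin_ids, admin_totals, nightlight, population, road_density):
    """Allocate administrative-unit asset values to grid cells.

    admin_ids: 2-D array of administrative-unit codes per grid cell
    admin_totals: dict mapping unit code -> total asset value
    nightlight, population, road_density: 2-D ancillary grids of the same shape
    """
    def norm(a):
        return a / (a.max() + 1e-12)
    # Composite surrogate weight; normalization and combination are illustrative choices.
    weight = norm(nightlight) + norm(population) + norm(road_density)

    asset = np.zeros_like(weight, dtype=float)
    for code, total in admin_totals.items():
        mask = admin_ids == code
        w = weight[mask]
        if w.sum() > 0:
            asset[mask] = total * w / w.sum()
        else:                              # no ancillary signal: spread uniformly
            asset[mask] = total / mask.sum()
    return asset

# Tiny illustrative example: two administrative units on a 2x3 grid.
ids = np.array([[1, 1, 2], [1, 2, 2]])
totals = {1: 300.0, 2: 100.0}
nl = np.array([[3., 1., 0.], [0., 2., 2.]])
pop = np.array([[10., 5., 1.], [2., 8., 4.]])
rd = np.array([[1., 1., 0.], [0., 1., 1.]])
print(disaggregate(ids, totals, nl, pop, rd))
```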

  8. The Efficiency and the Scalability of an Explicit Operator on an IBM POWER4 System

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

We present an evaluation of the efficiency and the scalability of an explicit CFD operator on an IBM POWER4 system. The POWER4 architecture exhibits a common trend in HPC architectures: boosting CPU processing power by increasing the number of functional units, while hiding the latency of memory access by increasing the depth of the memory hierarchy. The overall machine performance depends on the ability of the caches-buses-fabric-memory to feed the functional units with the data to be processed. In this study we evaluate the efficiency and scalability of one explicit CFD operator on an IBM POWER4. This operator performs computations at the points of a Cartesian grid and involves a few dozen floating point numbers and on the order of 100 floating point operations per grid point. The computations in all grid points are independent. Specifically, we estimate the efficiency of the RHS operator (SP of NPB) on a single processor as the observed/peak performance ratio. Then we estimate the scalability of the operator on a single chip (2 CPUs), a single MCM (8 CPUs), 16 CPUs, and the whole machine (32 CPUs). Then we perform the same measurements for a cache-optimized version of the RHS operator. For our measurements we use the HPM (Hardware Performance Monitor) counters available on the POWER4. These counters allow us to analyze the obtained performance results.

  9. A new ghost-node method for linking different models and initial investigations of heterogeneity and nonmatching grids

    USGS Publications Warehouse

    Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.

    2007-01-01

A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.

  10. Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramer, L. M.; Rounds, J.; Burleyson, C. D.

Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and the relationship with weather conditions is examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and datasets were examined. A penalized logistic regression model fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at different time scales was examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. The methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.
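
    A minimal sketch of the kind of model described: a penalized logistic regression predicting a binary grid-stress-day indicator from daily weather features, with the penalty strength chosen by cross-validation. The feature names and input file are hypothetical, and the L1 penalty is an illustrative choice rather than the paper's exact specification:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical daily data for one operation zone: weather features plus a 0/1 stress-day label.
df = pd.read_csv("zone_weather_stress.csv")
features = ["tmax", "tmin", "precip", "humidity", "wind_speed"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["stress_day"], test_size=0.25, stratify=df["stress_day"], random_state=0)

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=10, penalty="l1", solver="liblinear", cv=5, scoring="roc_auc"))
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print("held-out AUC:", roc_auc_score(y_test, proba))
coef = model.named_steps["logisticregressioncv"].coef_.ravel()
print(dict(zip(features, coef)))   # L1-zeroed coefficients drop unimportant weather variables
```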

  11. Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramer, Lisa M.; Rounds, J.; Burleyson, C. D.

Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and the relationship with weather conditions were examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and combinations of predictive variables were examined. A penalized logistic regression model which was fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at various time scales was examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. In conclusion, the methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.

  12. Evaluating penalized logistic regression models to predict Heat-Related Electric grid stress days

    DOE PAGES

    Bramer, Lisa M.; Rounds, J.; Burleyson, C. D.; ...

    2017-09-22

Understanding the conditions associated with stress on the electricity grid is important in the development of contingency plans for maintaining reliability during periods when the grid is stressed. In this paper, heat-related grid stress and the relationship with weather conditions were examined using data from the eastern United States. Penalized logistic regression models were developed and applied to predict stress on the electric grid using weather data. The inclusion of other weather variables, such as precipitation, in addition to temperature improved model performance. Several candidate models and combinations of predictive variables were examined. A penalized logistic regression model which was fit at the operation-zone level was found to provide predictive value and interpretability. Additionally, the importance of different weather variables observed at various time scales was examined. Maximum temperature and precipitation were identified as important across all zones while the importance of other weather variables was zone specific. In conclusion, the methods presented in this work are extensible to other regions and can be used to aid in planning and development of the electrical grid.

  13. Seismic imaging for an ocean drilling site survey and its verification in the Izu rear arc

    NASA Astrophysics Data System (ADS)

    Yamashita, Mikiya; Takahashi, Narumi; Tamura, Yoshihiko; Miura, Seiichi; Kodaira, Shuichi

    2018-01-01

To evaluate the crustal structure of a site proposed for International Ocean Discovery Program drilling, the Japan Agency for Marine-Earth Science and Technology carried out seismic surveys in the Izu rear arc between 2006 and 2008, using research vessels Kaiyo and Kairei. High-resolution dense grid surveys, consisting of three kinds of reflection surveys, generated clear seismic profiles, together with a seismic velocity image obtained from a seismic refraction survey. In this paper, we compare the seismic profiles with the geological column obtained from the drilling. Five volcaniclastic sedimentary units were identified in seismic reflection profiles above the 5 km/s and 6 km/s contours of P-wave velocity obtained from the velocity image from the seismic refraction survey. However, some of the unit boundaries interpreted from the seismic images were not recognised in the drilling core, highlighting the difficulties of geological target identification in volcanic regions from seismic images alone. The geological core derived from drilling consisted of seven lithological units (labelled I to VII). Units I to V were aged 0-9 Ma, and units VI and VII, from 1320 to 1806.5 m below seafloor (mbsf), had ages from 9 to ~15 Ma. The strong heterogeneity of volcanic sediments beneath the drilling site U1437 was also identified from coherence, calculated using cross-spectral analysis between grid survey lines. Our results suggest that use of a dense grid configuration is important in site surveys for ocean drilling in volcanic rear-arc situations, in order to recognise heterogeneous crustal structure, such as sediments from different origins.
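
    The coherence analysis mentioned at the end of the abstract can be sketched with scipy: magnitude-squared coherence between two reflection traces from crossing grid survey lines, as a function of frequency. The trace data, sampling rate, and frequency band below are placeholders, not values from the survey:

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical: two seismic reflection traces from crossing grid lines, sampled at 500 Hz.
fs = 500.0
rng = np.random.default_rng(0)
common = rng.standard_normal(4096)                  # shared subsurface response
trace_a = common + 0.5 * rng.standard_normal(4096)  # line-specific noise
trace_b = common + 0.5 * rng.standard_normal(4096)

f, cxy = coherence(trace_a, trace_b, fs=fs, nperseg=512)
print("mean coherence 5-50 Hz:", cxy[(f >= 5) & (f <= 50)].mean())
# High coherence between lines suggests laterally continuous units; low coherence suggests
# strong heterogeneity, as inferred beneath Site U1437.
```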

  14. On non-parametric maximum likelihood estimation of the bivariate survivor function.

    PubMed

    Prentice, R L

The likelihood function for the bivariate survivor function F, under independent censorship, is maximized to obtain a non-parametric maximum likelihood estimator F̂. F̂ may or may not be unique depending on the configuration of singly- and doubly-censored pairs. The likelihood function can be maximized by placing all mass on the grid formed by the uncensored failure times, or half lines beyond the failure time grid, or in the upper right quadrant beyond the grid. By accumulating the mass along lines (or regions) where the likelihood is flat, one obtains a partially maximized likelihood as a function of parameters that can be uniquely estimated. The score equations corresponding to these point mass parameters are derived, using a Lagrange multiplier technique to ensure unit total mass, and a modified Newton procedure is used to calculate the parameter estimates in some limited simulation studies. Some considerations for the further development of non-parametric bivariate survivor function estimators are briefly described.
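
    For readers unfamiliar with the setup, a schematic form of the censored-data likelihood being maximized may help; this is a standard representation, not notation taken from the paper. Here D indexes doubly uncensored pairs, S1 and S2 pairs censored in the second or first coordinate only, C doubly censored pairs, t the observed failure times, and c the censoring times:

```latex
L(F) = \prod_{i \in D} F\{dt_{1i}, dt_{2i}\}
       \prod_{i \in S_1} F\{dt_{1i} \times (c_{2i}, \infty)\}
       \prod_{i \in S_2} F\{(c_{1i}, \infty) \times dt_{2i}\}
       \prod_{i \in C} F\{(c_{1i}, \infty) \times (c_{2i}, \infty)\}
```

    The mass placed on the failure-time grid, on the half lines beyond it, and in the upper right quadrant must sum to one, which is the unit-total-mass constraint handled with the Lagrange multiplier mentioned above.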

  15. Residential grid-connected photovoltaics adoption in north central Texas: Lessons from the Solarize Plano project

    NASA Astrophysics Data System (ADS)

    Jack, Katherine G.

Residential grid-connected photovoltaic (GPV) systems hold remarkable promise in their potential to reduce energy use, air pollution, greenhouse gas emissions, and energy costs to consumers, while also providing grid efficiency and demand-side management benefits to utilities. Broader adoption of customer-sited GPV also has the potential to transform the traditional model of electricity generation and delivery. Interest and activity have grown in recent years to promote GPV in north central Texas. This study employs a mixed methods design to better understand the status of residential GPV adoption in the DFW area, and those factors influencing a homeowner's decision of whether or not to install a system. Basic metrics are summarized, including installation numbers, distribution and socio-demographic information for the case study city of Plano, the DFW region, Texas, and the United States. Qualitative interview methods are used to gain an in-depth understanding of the factors influencing adoption for the Solarize Plano case study participants; to evaluate the effectiveness of the Solarize Plano program; and to identify concepts that may be regionally relevant. Recommendations are presented for additional research that may advance GPV adoption in north central Texas.

  16. Fluidized bed coal combustion reactor

    NASA Technical Reports Server (NTRS)

    Moynihan, P. I.; Young, D. L. (Inventor)

    1981-01-01

    A fluidized bed coal reactor includes a combination nozzle-injector ash-removal unit formed by a grid of closely spaced open channels, each containing a worm screw conveyor, which function as continuous ash removal troughs. A pressurized air-coal mixture is introduced below the unit and is injected through the elongated nozzles formed by the spaces between the channels. The ash build-up in the troughs protects the worm screw conveyors as does the cooling action of the injected mixture. The ash layer and the pressure from the injectors support a fluidized flame combustion zone above the grid which heats water in boiler tubes disposed within and/or above the combustion zone and/or within the walls of the reactor.

  17. Research on unit commitment with large-scale wind power connected power system

    NASA Astrophysics Data System (ADS)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch because of the stochastic volatility of wind power. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. The structures and characteristics can be summarized after classification according to different objective functions and constraints. Finally, the issues to be solved and possible directions of research and development in the future are discussed, which can adapt to the requirements of the electricity market, energy-saving generation dispatch, and the smart grid, and provide a reference for the research and practice of researchers and practitioners in this field.

  18. A GPU-based incompressible Navier-Stokes solver on moving overset grids

    NASA Astrophysics Data System (ADS)

    Chandar, Dominic D. J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.

    2013-07-01

In pursuit of obtaining high fidelity solutions to the fluid flow equations in a short span of time, graphics processing units (GPUs) which were originally intended for gaming applications are currently being used to accelerate computational fluid dynamics (CFD) codes. With a high peak throughput of about 1 TFLOPS on a PC, GPUs seem to be favourable for many high-resolution computations. One such computation that involves a lot of number crunching is computing time accurate flow solutions past moving bodies. The aim of the present paper is thus to discuss the development of a flow solver on unstructured and overset grids and its implementation on GPUs. In its present form, the flow solver solves the incompressible fluid flow equations on unstructured/hybrid/overset grids using a fully implicit projection method. The resulting discretised equations are solved using a matrix-free Krylov solver using several GPU kernels such as gradient, Laplacian and reduction. Some of the simple arithmetic vector calculations are implemented using the CU++ approach (An Object Oriented Framework for Computational Fluid Dynamics Applications using Graphics Processing Units, Journal of Supercomputing, 2013, doi:10.1007/s11227-013-0985-9), where GPU kernels are automatically generated at compile time. Results are presented for two- and three-dimensional computations on static and moving grids.
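
    The matrix-free Krylov approach described above never forms the pressure matrix explicitly; the solver only needs the action of the Laplacian on a vector, which in the paper is supplied by GPU kernels. A CPU-side numpy sketch of a matrix-free conjugate gradient for a 2-D Poisson problem (homogeneous Dirichlet boundaries, unit spacing), illustrating the structure rather than the paper's implementation:

```python
import numpy as np

def apply_laplacian(p):
    """Matrix-free 5-point Laplacian with homogeneous Dirichlet boundaries (unit spacing)."""
    lap = -4.0 * p
    lap[1:, :] += p[:-1, :]
    lap[:-1, :] += p[1:, :]
    lap[:, 1:] += p[:, :-1]
    lap[:, :-1] += p[:, 1:]
    return lap

def conjugate_gradient(apply_op, b, tol=1e-6, max_iter=2000):
    """Matrix-free CG: apply_op supplies A @ x; the dot products are the reduction kernels."""
    x = np.zeros_like(b)
    r = b - apply_op(x)
    d = r.copy()
    rs = np.vdot(r, r)
    for _ in range(max_iter):
        ad = apply_op(d)
        alpha = rs / np.vdot(d, ad)
        x += alpha * d
        r -= alpha * ad
        rs_new = np.vdot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x

rhs = np.ones((64, 64))                        # e.g. divergence of the intermediate velocity field
pressure = conjugate_gradient(lambda p: -apply_laplacian(p), rhs)  # negate for positive definiteness
print(np.abs(apply_laplacian(pressure) + rhs).max())               # residual check
```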

  19. The development of a control system for a small high speed steam microturbine generator system

    NASA Astrophysics Data System (ADS)

    Alford, A.; Nichol, P.; Saunders, M.; Frisby, B.

    2015-08-01

Steam is a widely used energy source. In many situations steam is generated at high pressures and then reduced in pressure through control valves before reaching the point of use. An opportunity was identified to convert some of the energy at the point of pressure reduction into electricity. To take advantage of a market identified for small scale systems, a microturbine generator was designed based on a small high speed turbo machine. This machine was packaged with the necessary control valves and systems to allow connection of the machine to the grid. Traditional machines vary the speed of the generator to match the grid frequency. This was not possible due to the high speed of this machine. The characteristics of the rotating unit had to be understood to design a control system that allowed export of energy to the grid at the right frequency under the widest possible range of steam conditions. A further goal of the control system was to maximise the efficiency of generation under all conditions. A further complication was to provide adequate protection for the rotating unit in the event of the loss of connection to the grid. The system to meet these challenges is outlined with the solutions employed and tested for this application.

  20. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates the effects of such challenges, which affect electric grid reliability and economic operations. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Units (PMUs) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate expansion of transmission line capacity using methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, the congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "Where" to add, "How much of transmission line capacity" to add, and "Which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors whose main interest is to generate revenue by building new transmission lines. Adding new transmission capacity will help relieve transmission system congestion, create profit for investors who rent out their transmission capacity, and provide cheaper electricity for end users. We propose a hybrid method based on a heuristic and deterministic method to determine new transmission line additions and increase transmission capacity. Renewable energy resources (RES) have zero operating cost, which makes them very attractive for generation companies and market participants. In addition, RES have zero carbon emission, which helps relieve the concerns of environmental impacts of electric generation resources' carbon emission. RES are wind, solar, hydro, biomass, and geothermal. By 2030, the expectation is that more than 30% of electricity in the U.S. will come from RES. One major contributor to RES generation will be wind energy resources (WES). Furthermore, WES will be an important component of the future generation portfolio. However, WES are by nature highly intermittent and volatile. Because of the great expectation of high WES penetration and the nature of such resources, researchers focus on studying the effects of such resources on electric grid operation and its adequacy from different aspects. Additionally, current market operations of electric grids add another complication to consider while integrating RES (specifically WES). Mandates by market rules and long-term analysis of renewable penetration in large-scale electric grids have also been the focus of researchers in recent years.
We advocate a method for studying high wind-resource penetration in large-scale electric grid operations. A PMU is a global positioning system (GPS)-based device that provides immediate and precise measurements of voltage angle in a high-voltage transmission system. PMUs can update the status of a transmission line and related measurements (e.g., voltage magnitude and voltage phase angle) more frequently. Every second, a PMU can provide 30 samples of measurements, compared to traditional systems (e.g., supervisory control and data acquisition [SCADA] systems), which provide one sample of measurement every 2 to 5 seconds. Because PMUs provide more measurement data samples, PMUs can improve electric grid reliability and observability. (Abstract shortened by UMI.)

  1. Adapting the iSNOBAL model for improved visualization in a GIS environment

    NASA Astrophysics Data System (ADS)

    Johansen, W. J.; Delparte, D.

    2014-12-01

Snowmelt is a primary source of crucial water resources in much of the western United States. Researchers are developing models that estimate snowmelt to aid in water resource management. One such model is the image snowcover energy and mass balance (iSNOBAL) model. It uses input climate grids to simulate the development and melting of snowpack in mountainous regions. This study looks at applying this model to the Reynolds Creek Experimental Watershed in southwestern Idaho, utilizing novel approaches incorporating geographic information systems (GIS). To improve visualization of the iSNOBAL model, we have adapted it to run in a GIS environment. This type of environment is suited to both the input grid creation and the visualization of results. The data used for input grid creation can be stored locally or on a web-server. Kriging interpolation embedded within Python scripts is used to create air temperature, soil temperature, humidity, and precipitation grids, while built-in GIS and existing tools are used to create solar radiation and wind grids. Additional Python scripting is then used to perform model calculations. The final product is a user-friendly and accessible version of the iSNOBAL model, including the ability to easily visualize and interact with model results, all within a web- or desktop-based GIS environment. This environment allows for interactive manipulation of model parameters and visualization of the resulting input grids for the model calculations. Future work is moving towards adapting the model further for use in a 3D gaming engine for improved visualization and interaction.

  2. The National Map - Elevation

    USGS Publications Warehouse

    Gesch, Dean; Evans, Gayla; Mauck, James; Hutchinson, John; Carswell, William J.

    2009-01-01

The National Elevation Dataset (NED) is the primary elevation data product produced and distributed by the USGS. The NED provides seamless raster elevation data of the conterminous United States, Alaska, Hawaii, and the island territories. The NED is derived from diverse source data sets that are processed to a specification with a consistent resolution, coordinate system, elevation units, and horizontal and vertical datums. The NED is the logical result of the maturation of the long-standing USGS elevation program, which for many years concentrated on production of topographic map quadrangle-based digital elevation models. The NED serves as the elevation layer of The National Map, and provides basic elevation information for earth science studies and mapping applications in the United States. The NED is a multi-resolution dataset that is updated bimonthly to integrate newly available, improved elevation source data. NED data are available nationally at grid spacings of 1 arc-second (approximately 30 meters) for the conterminous United States, and at 1/3 and 1/9 arc-seconds (approximately 10 and 3 meters, respectively) for parts of the United States. Most of the NED for Alaska is available at 2-arc-second (about 60 meters) grid spacing, where only lower resolution source data exist. Part of Alaska is available at the 1/3-arc-second resolution, and plans are in development for a significant upgrade in elevation data coverage of the State over the next 5 years. Specifications for the NED include the following: coordinate system, geographic (decimal degrees of latitude and longitude); horizontal datum, North American Datum of 1983 (NAD 83); vertical datum, North American Vertical Datum of 1988 (NAVD 88) over the conterminous United States (varies in other areas); and elevation units, decimal meters.

  3. 77 FR 12241 - Smart Grid Trade Mission to the United Kingdom; London, United Kingdom, October 15-17, 2012

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-29

    ... 2020 and by 80% of 1990 levels by 2050. Power generation is a major source of carbon emissions, with 74% of power generated in the United Kingdom coming from fossil fuels. As the government seeks to reduce... power. Highly developed, sophisticated, and diversified, the UK market is the single largest export...

  4. An overview of distributed microgrid state estimation and control for smart grids.

    PubMed

    Rana, Md Masud; Li, Li

    2015-02-12

Given the significant concerns regarding carbon emissions from fossil fuels, global warming, and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated into the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF)-based microgrid SE for the smart grid that uses typical communication systems. This article then proposes a discrete-time linear quadratic regulator to control the state deviations of the microgrid incorporating multiple DERs. Therefore, integrating these two approaches with application to the smart grid forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.
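
    A minimal sketch of the two building blocks combined in the paper, a discrete-time Kalman filter for state estimation and a linear quadratic regulator acting on the estimated state, for a generic linear state-space model; the matrices below are placeholders, not the paper's DER/microgrid model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder discrete-time model: x[k+1] = A x[k] + B u[k] + w,  y[k] = C x[k] + v.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Qw, Rv = 1e-4 * np.eye(2), 1e-2 * np.eye(1)        # process / measurement noise covariances
Qc, Rc = np.diag([10.0, 1.0]), np.array([[0.1]])   # LQR state / input weights

# LQR gain from the discrete algebraic Riccati equation: u = -K x_hat.
P = solve_discrete_are(A, B, Qc, Rc)
K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)

def kf_step(x_hat, Pk, u, y):
    """One Kalman filter predict/update step."""
    x_pred = A @ x_hat + B @ u
    P_pred = A @ Pk @ A.T + Qw
    S = C @ P_pred @ C.T + Rv
    L = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + L @ (y - C @ x_pred)
    P_new = (np.eye(len(x_hat)) - L @ C) @ P_pred
    return x_new, P_new

# Closed loop: estimate the state from noisy measurements, then regulate deviations to zero.
rng = np.random.default_rng(1)
x, x_hat, Pk = np.array([1.0, -0.5]), np.zeros(2), np.eye(2)
for _ in range(50):
    u = -K @ x_hat
    y = C @ x + rng.normal(scale=0.1, size=1)
    x = A @ x + B @ u + rng.normal(scale=0.01, size=2)
    x_hat, Pk = kf_step(x_hat, Pk, u, y)
print(x, x_hat)   # both should be near zero and near each other after regulation
```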

  5. Can rodents conceive hyperbolic spaces?

    PubMed Central

    Urdapilleta, Eugenio; Troiani, Francesca; Stella, Federico; Treves, Alessandro

    2015-01-01

    The grid cells discovered in the rodent medial entorhinal cortex have been proposed to provide a metric for Euclidean space, possibly even hardwired in the embryo. Yet, one class of models describing the formation of grid unit selectivity is entirely based on developmental self-organization, and as such it predicts that the metric it expresses should reflect the environment to which the animal has adapted. We show that, according to self-organizing models, if raised in a non-Euclidean hyperbolic cage rats should be able to form hyperbolic grids. For a given range of grid spacing relative to the radius of negative curvature of the hyperbolic surface, such grids are predicted to appear as multi-peaked firing maps, in which each peak has seven neighbours instead of the Euclidean six, a prediction that can be tested in experiments. We thus demonstrate that a useful universal neuronal metric, in the sense of a multi-scale ruler and compass that remain unaltered when changing environments, can be extended to other than the standard Euclidean plane. PMID:25948611

  6. E-Research: An Imperative for Strengthening Institutional Partnerships

    ERIC Educational Resources Information Center

    O'Brien, Linda

    2005-01-01

    Whether it is "e-research" in Australia, "cyberinfrastructure" in the United States, the "grid" in Europe, or "e-science" in the United Kingdom, a transformation is clearly occurring in research practice, a transformation that will have a profound impact on the roles of information professionals within…

  7. Power conditioning unit for photovoltaic power systems

    NASA Astrophysics Data System (ADS)

    Beghin, G.; Nguyen Phuoc, V. T.

    Operational features and components of a power conditioning unit for interconnecting solar cell module powers with a utility grid are outlined. The two-stage unit first modifies the voltage to desired levels on an internal dc link, then inverts the current in 2 power transformers connected to a vector summation control to neutralize harmonic distortion up to the 11th harmonic. The system operates in parallel with the grid with extra inductors to absorb line-to-line voltage and phase differences, and permits peak power use from the PV array. Reactive power is gained internally, and a power system controller monitors voltages, frequencies, and currents. A booster preregulator adjusts the input voltage from the array to provide voltage regulation for the inverter, and can commutate 450 amps. A total harmonic distortion of less than 5 percent is claimed, with a rating of 5 kVA, 50/60 Hz, 3-phase, and 4-wire.

  8. Oregon Magnetic and Gravity Maps and Data: A Web Site for Distribution of Data

    USGS Publications Warehouse

    Roberts, Carter W.; Kucks, Robert P.; Hill, Patricia L.

    2008-01-01

    This web site gives the results of a USGS project to acquire the best available, public-domain, aeromagnetic and gravity data in the United States and merge these data into uniform, composite grids for each State. The results for the State of Oregon are presented here on this site. Files of aeromagnetic and gravity grids and images are available for these States for downloading. In Oregon, 49 magnetic surveys have been knit together to form a single digital grid and map. Also, a complete Bouguer gravity anomaly grid and map was generated from 40,665 gravity station measurements in and adjacent to Oregon. In addition, a map shows the location of the aeromagnetic surveys, color-coded to the survey flight-line spacing. This project was supported by the Mineral Resource Program of the USGS.

  9. Improved mapping of National Atmospheric Deposition Program wet-deposition in complex terrain using PRISM-gridded data sets

    USGS Publications Warehouse

    Latysh, Natalie E.; Wetherbee, Gregory Alan

    2012-01-01

    High-elevation regions in the United States lack detailed atmospheric wet-deposition data. The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) measures and reports precipitation amounts and chemical constituent concentration and deposition data for the United States on annual isopleth maps using inverse distance weighted (IDW) interpolation methods. This interpolation for unsampled areas does not account for topographic influences. Therefore, NADP/NTN isopleth maps lack detail and potentially underestimate wet deposition in high-elevation regions. The NADP/NTN wet-deposition maps may be improved using precipitation grids generated by other networks. The Parameter-elevation Regressions on Independent Slopes Model (PRISM) produces digital grids of precipitation estimates from many precipitation-monitoring networks and incorporates influences of topographical and geographical features. Because NADP/NTN ion concentrations do not vary with elevation as much as precipitation depths, PRISM is used with unadjusted NADP/NTN data in this paper to calculate ion wet deposition in complex terrain to yield more accurate and detailed isopleth deposition maps in complex terrain. PRISM precipitation estimates generally exceed NADP/NTN precipitation estimates for coastal and mountainous regions in the western United States. NADP/NTN precipitation estimates generally exceed PRISM precipitation estimates for leeward mountainous regions in Washington, Oregon, and Nevada, where abrupt changes in precipitation depths induced by topography are not depicted by IDW interpolation. PRISM-based deposition estimates for nitrate can exceed NADP/NTN estimates by more than 100% for mountainous regions in the western United States.
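
    The PRISM-based calculation amounts to multiplying unadjusted NADP/NTN ion concentrations by PRISM precipitation depths on the grid. A small sketch with the standard unit conversion (1 mg/L of ion in 1 cm of precipitation over 1 hectare deposits 0.1 kg); the grids below are placeholder arrays, not NADP/NTN or PRISM data:

```python
import numpy as np

def wet_deposition_kg_per_ha(concentration_mg_per_l, precip_cm):
    """Wet deposition (kg/ha) from concentration (mg/L) and precipitation depth (cm)."""
    # 1 cm of precipitation over 1 ha is 100,000 L, so 1 mg/L deposits 0.1 kg/ha per cm.
    return concentration_mg_per_l * precip_cm * 0.1

# Placeholder grids: interpolated nitrate concentration and annual PRISM precipitation.
nitrate_conc = np.full((100, 100), 1.2)                          # mg/L
prism_precip = np.linspace(30, 250, 10000).reshape(100, 100)     # cm/yr, low-to-high gradient
deposition = wet_deposition_kg_per_ha(nitrate_conc, prism_precip)
print(deposition.min(), deposition.max())                        # kg/ha per year
```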

  10. Improved mapping of National Atmospheric Deposition Program wet-deposition in complex terrain using PRISM-gridded data sets.

    PubMed

    Latysh, Natalie E; Wetherbee, Gregory Alan

    2012-01-01

    High-elevation regions in the United States lack detailed atmospheric wet-deposition data. The National Atmospheric Deposition Program/National Trends Network (NADP/NTN) measures and reports precipitation amounts and chemical constituent concentration and deposition data for the United States on annual isopleth maps using inverse distance weighted (IDW) interpolation methods. This interpolation for unsampled areas does not account for topographic influences. Therefore, NADP/NTN isopleth maps lack detail and potentially underestimate wet deposition in high-elevation regions. The NADP/NTN wet-deposition maps may be improved using precipitation grids generated by other networks. The Parameter-elevation Regressions on Independent Slopes Model (PRISM) produces digital grids of precipitation estimates from many precipitation-monitoring networks and incorporates influences of topographical and geographical features. Because NADP/NTN ion concentrations do not vary with elevation as much as precipitation depths, PRISM is used with unadjusted NADP/NTN data in this paper to calculate ion wet deposition in complex terrain to yield more accurate and detailed isopleth deposition maps in complex terrain. PRISM precipitation estimates generally exceed NADP/NTN precipitation estimates for coastal and mountainous regions in the western United States. NADP/NTN precipitation estimates generally exceed PRISM precipitation estimates for leeward mountainous regions in Washington, Oregon, and Nevada, where abrupt changes in precipitation depths induced by topography are not depicted by IDW interpolation. PRISM-based deposition estimates for nitrate can exceed NADP/NTN estimates by more than 100% for mountainous regions in the western United States.

  11. National Economic Value Assessment of Plug-in Electric Vehicles: Volume I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melaina, Marc; Bush, Brian; Eichman, Joshua

    The adoption of plug-in electric vehicles (PEVs) can reduce household fuel expenditures by substituting electricity for gasoline while reducing greenhouse gas emissions and petroleum imports. A scenario approach is employed to provide insights into the long-term economic value of increased PEV market growth across the United States. The analytic methods estimate fundamental costs and benefits associated with an economic allocation of PEVs across households based upon household driving patterns, projected vehicle cost and performance attributes, and simulations of a future electricity grid. To explore the full technological potential of PEVs and resulting demands on the electricity grid, very high PEV market growth projections from previous studies are relied upon to develop multiple future scenarios.

  12. Statistically-Estimated Tree Composition for the Northeastern United States at Euro-American Settlement.

    PubMed

    Paciorek, Christopher J; Goring, Simon J; Thurman, Andrew L; Cogbill, Charles V; Williams, John W; Mladenoff, David J; Peters, Jody A; Zhu, Jun; McLachlan, Jason S

    2016-01-01

    We present a gridded 8 km-resolution data product of the estimated composition of tree taxa at the time of Euro-American settlement of the northeastern United States and the statistical methodology used to produce the product from trees recorded by land surveyors. Composition is defined as the proportion of stems larger than approximately 20 cm diameter at breast height for 22 tree taxa, generally at the genus level. The data come from settlement-era public survey records that are transcribed and then aggregated spatially, giving count data. The domain is divided into two regions, eastern (Maine to Ohio) and midwestern (Indiana to Minnesota). Public Land Survey point data in the midwestern region (ca. 0.8-km resolution) are aggregated to a regular 8 km grid, while data in the eastern region, from Town Proprietor Surveys, are aggregated at the township level in irregularly-shaped local administrative units. The product is based on a Bayesian statistical model fit to the count data that estimates composition on the 8 km grid across the entire domain. The statistical model is designed to handle data from both the regular grid and the irregularly-shaped townships and allows us to estimate composition at locations with no data and to smooth over noise caused by limited counts in locations with data. Critically, the model also allows us to quantify uncertainty in our composition estimates, making the product suitable for applications employing data assimilation. We expect this data product to be useful for understanding the state of vegetation in the northeastern United States prior to large-scale Euro-American settlement. In addition to specific regional questions, the data product can also serve as a baseline against which to investigate how forests and ecosystems change after intensive settlement. The data product is being made available at the NIS data portal as version 1.0.
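
    A minimal sketch of the aggregation step described above, using invented survey points and taxa: point records are binned to an 8 km grid and converted to raw per-cell composition proportions. The Bayesian smoothing and uncertainty quantification that define the actual product are not attempted here.

```python
import numpy as np

# Hypothetical settlement-era survey points: projected x/y (m) and a taxon label.
xs = np.array([1000., 4500., 9000., 15000., 16000., 2000.])
ys = np.array([2000., 6000., 3000., 12000.,  9000., 7000.])
taxa = np.array(['oak', 'pine', 'oak', 'beech', 'oak', 'pine'])

cell = 8000.0                         # 8 km grid spacing
ix = np.floor(xs / cell).astype(int)  # column index of each point
iy = np.floor(ys / cell).astype(int)  # row index of each point

taxon_list = sorted(set(taxa))
ncols, nrows = ix.max() + 1, iy.max() + 1
counts = np.zeros((len(taxon_list), nrows, ncols))

# Accumulate per-taxon stem counts in each 8 km cell.
for x_i, y_i, t in zip(ix, iy, taxa):
    counts[taxon_list.index(t), y_i, x_i] += 1

# Raw composition = proportion of stems per taxon within each cell.
# Cells with no data are left as NaN; the paper's Bayesian model smooths
# and fills these, which is not attempted in this sketch.
total = counts.sum(axis=0)
with np.errstate(invalid='ignore', divide='ignore'):
    composition = np.where(total > 0, counts / total, np.nan)

print(dict(zip(taxon_list, np.round(composition[:, 0, 0], 2))))
```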

  13. The Department of Defense Information Security Process: A Study of Change Acceptance and Past-Performance-Based Outsourcing

    ERIC Educational Resources Information Center

    Hackney, Dennis W. G.

    2011-01-01

    Subchapter III of Chapter 35 of Title 44, United States Code, Federal Information Security Management Act of 2002; Department of Defense (DoD) Directive 8500.01E, Information Assurance, October 24, 2002; DoD Directive 8100.1, Global Information Grid Overarching Policy, September 19, 2002; and DoD Instruction 8500.2, Information Assurance…

  14. [Assessing and making safe the medicine use pathway in paediatrics].

    PubMed

    Didelot, Nicolas; Guerrier, Catherine; Didelot, Anne; Fritsch, Sandrine; Pelte, Jean-Pierre; Socha, Marie; Javelot, Hervé

    2016-01-01

    Based on an assessment of adverse events in a follow-up care and rehabilitation unit in paediatrics, audits were carried out of the medicine use pathway. The evaluation grid taken from this study today serves as a basis for the audits carried out on the medicine use pathway on a national level. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  15. Coordination and Control of Flexible Building Loads for Renewable Integration; Demonstrations using VOLTTRON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, He; Liu, Guopeng; Huang, Sen

    Renewable energy resources such as wind and solar power have a high degree of uncertainty. Large-scale integration of these variable generation sources into the grid is a big challenge for power system operators. Buildings, in which we live and work, consume about 75% of the total electricity in the United States. They also have a large capacity of power flexibility due to their massive thermal capacitance. Therefore, they present a great opportunity to help the grid to manage power balance. In this report, we study coordination and control of flexible building loads for renewable integration. We first present the motivation and background, and conduct a literature review on building-to-grid integration. We also compile a catalog of flexible building loads that have great potential for renewable integration, and discuss their characteristics. We next collect solar generation data from a photovoltaic panel on the Pacific Northwest National Laboratory campus, and conduct data analysis to study their characteristics. We find that solar generation output has a strong uncertainty, and the uncertainty occurs at almost all time scales. Additional data from other sources are also used to verify our study. We propose two transactive coordination strategies to manage flexible building loads for renewable integration. We prove the theories that support the two transactive coordination strategies and discuss their pros and cons. In this report, we select three types of flexible building loads (air-handling unit, rooftop unit, and a population of water heaters), for which we demonstrate control of the flexible load to track a dispatch signal (e.g., renewable generation fluctuation) using experiment, simulation, or hardware-in-the-loop study. More specifically, we present the system description, model identification, controller design, test bed setup, and experiment results for each demonstration. We show that coordination and control of flexible loads has a great potential to integrate variable generation sources. The flexible loads can successfully track a power dispatch signal from the coordinator, while having little impact on the quality of service to the end-users.
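
    As a toy illustration of the coordination problem (not the report's transactive strategies), the sketch below allocates a dispatch deviation across a hypothetical fleet of flexible loads in proportion to their available headroom or footroom and checks that the aggregate tracks the target.

```python
import numpy as np

# Hypothetical fleet of flexible loads (e.g., water heaters), each with a
# baseline draw and an allowable power range (kW).
baseline = np.array([4.0, 4.5, 3.5, 4.0, 5.0])
p_min = np.zeros_like(baseline)
p_max = baseline * 1.5

# Dispatch signal: desired deviation of aggregate load from its baseline (kW),
# e.g., the negative of a renewable generation fluctuation to be absorbed.
signal = np.array([3.0, -2.0, 5.0, 0.0, -4.0])

agg_baseline = baseline.sum()
served = []
for s in signal:
    # Allocate the deviation to units in proportion to their headroom
    # (or footroom when shedding), then clip to individual limits.
    if s >= 0:
        share = (p_max - baseline) / (p_max - baseline).sum()
    else:
        share = (baseline - p_min) / (baseline - p_min).sum()
    p = np.clip(baseline + s * share, p_min, p_max)
    served.append(p.sum())

print("targets:", agg_baseline + signal)
print("served :", np.round(served, 2))
```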

  16. Nursing leadership in intensive care units and its relationship to the work environment 1

    PubMed Central

    Balsanelli, Alexandre Pazetto; Cunha, Isabel Cristina Kowal Olm

    2015-01-01

    AIM: To establish whether there is any relationship between the work environment and nursing leadership at intensive care units (ICUs). METHOD: Correlational study conducted at four ICUs in southern São Paulo (SP), Brazil. The study population was comprised of 66 pairs (nurses and nursing technicians) established by lottery. The nurses responded to three instruments: 1) characterization; 2) a validated Portuguese version of the Nursing Work Index Revised (B-NWI-R); and 3) Grid & Leadership in Nursing: ideal behavior. The nursing technicians responded to 1) characterization and to 2) Grid and Leadership in Nursing: actual behavior, relative to the corresponding randomly-assigned nurse. The data were analyzed by means of analysis of variance (ANOVA) at p ≤ 0.05. RESULTS: The work environment was not associated with actual nursing leadership (p = 0.852). The public or private nature of the institutions where the investigated ICUs were located had no significant effect on leadership (p = 0.437). Only the nurse-physician relationship domain stood out (p = 0.001). CONCLUSION: The choice of leadership styles by nurses should match the ICU characteristics. Leadership skills could be developed, and the work environment did not exert any influence on the investigated population. PMID:25806638

  17. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, as the number of smart sensors and meters in the power grid increases by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze, and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it is a great resource that has not been fully explored and adopted for the power grid domain.

  18. Using Geothermal Electric Power to Reduce Carbon Footprint

    NASA Astrophysics Data System (ADS)

    Crombie, George W.

    Human activities, including the burning of fossil fuels, increase carbon dioxide levels, which contributes to global warming. The research problem of the current study examined if geothermal electric power could adequately replace fossil fuel by 2050, thus reducing the emissions of carbon dioxide while avoiding potential problems with expanding nuclear generation. The purpose of this experimental research was to explore under what funding and business conditions geothermal power could be exploited to replace fossil fuels, chiefly coal. Complex systems theory, along with network theory, provided the theoretical foundation for the study. Research hypotheses focused on parameters, such as funding level, exploration type, and interfaces with the existing power grid that will bring the United States closest to the goal of phasing out fossil based power by 2050. The research was conducted by means of computer simulations, using agent-based modeling, wherein data were generated and analyzed. The simulations incorporated key information about the location of geothermal resources, exploitation methods, transmission grid limits and enhancements, and demand centers and growth. The simulation suggested that rapid and aggressive deployment of geothermal power plants in high potential areas, combined with a phase out of coal and nuclear plants, would produce minimal disruptions in the supply of electrical power in the United States. The implications for social change include reduced risk of global warming for all humans on the planet, reduced pollution due to reduction or elimination of coal and nuclear power, increased stability in energy supply and prices in the United States, and increased employment of United States citizens in jobs related to domestic energy production.

  19. Site-specific strong ground motion prediction using 2.5-D modelling

    NASA Astrophysics Data System (ADS)

    Narayan, J. P.

    2001-08-01

    An algorithm was developed using the 2.5-D elastodynamic wave equation, based on the displacement-stress relation. One of the most significant advantages of the 2.5-D simulation is that the 3-D radiation pattern can be generated using double-couple point shear-dislocation sources in the 2-D numerical grid. A parsimonious staggered grid scheme was adopted instead of the standard staggered grid scheme, since this is the only scheme suitable for computing the dislocation. This new 2.5-D numerical modelling avoids the extensive computational cost of 3-D modelling. The significance of this exercise is that it makes it possible to simulate the strong ground motion (SGM), taking into account the energy released, 3-D radiation pattern, path effects and local site conditions at any location around the epicentre. The slowness vector (py) was used in the supersonic region for each layer, so that all the components of the inertia coefficient are positive. The double-couple point shear-dislocation source was implemented in the numerical grid using the moment tensor components as the body-force couples. The moment per unit volume was used in both the 3-D and 2.5-D modelling. A good agreement in the 3-D and 2.5-D responses for different grid sizes was obtained when the moment per unit volume was further reduced by a factor equal to the finite-difference grid size in the case of the 2.5-D modelling. The components of the radiation pattern were computed in the xz-plane using 3-D and 2.5-D algorithms for various focal mechanisms, and the results were in good agreement. A comparative study of the amplitude behaviour of the 3-D and 2.5-D wavefronts in a layered medium reveals the spatial and temporal damped nature of the 2.5-D elastodynamic wave equation. 3-D and 2.5-D simulated responses at a site using a different strike direction reveal that strong ground motion (SGM) can be predicted just by rotating the strike of the fault counter-clockwise by the same amount as the azimuth of the site with respect to the epicentre. This adjustment is necessary since the response is computed keeping the epicentre, focus and the desired site in the same xz-plane, with the x-axis pointing in the north direction.

  20. Impact of the 2017 Solar Eclipse on the Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron M; Reda, Ibrahim M; Andreas, Afshin M

    With the increasing interest in using solar energy as a major contributor to renewable generation, and with the focus on using smart grids to optimize the use of electrical energy based on demand and resources from different locations, the need arises to know the moon's position in the sky with respect to the sun. When a solar eclipse occurs, the moon disk might totally or partially shade the sun disk, which can affect the irradiance level from the sun disk, consequently affecting a resource on the electric grid. The moon's position can then provide smart grid users with information about how potential total or partial solar eclipses might affect different locations on the grid so that other resources on the grid can be directed to where they might be needed when such phenomena occur. At least five solar eclipses occur yearly at different locations on Earth; they can last 3 hours or more depending on the location, and they can affect smart grid users. On August 21, 2017, a partial and full solar eclipse occurred in many locations in the United States, including at the National Renewable Energy Laboratory in Golden, Colorado. Solar irradiance measurements during the eclipse were compared to the data generated by a model for validation at eight locations.

  1. Scientific Grid activities and PKI deployment in the Cybermedia Center, Osaka University.

    PubMed

    Akiyama, Toyokazu; Teranishi, Yuuichi; Nozaki, Kazunori; Kato, Seiichi; Shimojo, Shinji; Peltier, Steven T; Lin, Abel; Molina, Tomas; Yang, George; Lee, David; Ellisman, Mark; Naito, Sei; Koike, Atsushi; Matsumoto, Shuichi; Yoshida, Kiyokazu; Mori, Hirotaro

    2005-10-01

    The Cybermedia Center (CMC), Osaka University, is a research institution that offers knowledge and technology resources obtained from advanced research in the areas of large-scale computation, information and communication, multimedia content and education. Currently, CMC is involved in Japanese national Grid projects such as JGN II (Japan Gigabit Network), NAREGI and BioGrid. Not limited to Japan, CMC also actively takes part in international activities such as PRAGMA. In these projects and international collaborations, CMC has developed a Grid system that allows scientists to perform their analysis by remote-controlling the world's largest ultra-high voltage electron microscope, located at Osaka University. In another undertaking, CMC has assumed a leadership role in BioGrid by sharing its experiences and knowledge on system development for biology. In this paper, we will give an overview of the BioGrid project and introduce the progress of the Telescience unit, which collaborates with the Telescience Project led by the National Center for Microscopy and Imaging Research (NCMIR). Furthermore, CMC collaborates with seven computing centers in Japan, NAREGI and the National Institute of Informatics to deploy a PKI-based authentication infrastructure. The current status of this project and future collaborations with Grid projects are delineated in this paper.

  2. Enhancement of Efficiency and Reduction of Grid Thickness Variation on Casting Process with Lean Six Sigma Method

    NASA Astrophysics Data System (ADS)

    Witantyo; Setyawan, David

    2018-03-01

    In the lead-acid battery industry, grid casting is a process with high defect rates and thickness variation. The DMAIC (Define-Measure-Analyse-Improve-Control) method and its tools are used to improve the casting process. In the Define stage, a project charter and the SIPOC (Supplier-Input-Process-Output-Customer) method are used to map the existing problem. In the Measure stage, data are collected on the types and counts of defects and on the observed grid thickness variation; the retrieved data are then processed and analysed using the 5 Whys and FMEA methods. In the Analyse stage, grids exhibiting brittle and cracked defects are examined under a microscope, revealing Pb oxide inclusions in the grid. Analysis of the casting process points to an excessive temperature difference between the molten metal and the mould, and to a corking process that lacks a standard. In the Improve stage, corrective actions reduce the grid thickness variation and bring the defects-per-unit level from 9.184% down to 0.492%. In the Control stage, new working standards are established and the corrected process is placed under control.

  3. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    PubMed

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties on system operation, stability, and reliability in smart grids. In this paper, nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single-level PI, wind power forecast uncertainties are represented as a list of PIs. These PIs are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points. The Monte Carlo simulation method is used to generate scenarios from the ECDF. The wind power scenarios are then incorporated into a stochastic security-constrained unit commitment (SCUC) model. A heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporating interval forecasts of wind power are implemented. The results of these cases are presented and discussed together. Generation costs and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than deterministic ones and, thus, decreases the risk in system operations of smart grids.
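
    The scenario-generation step lends itself to a compact sketch: treat the quantile points recovered from the prediction intervals as an empirical CDF and draw scenarios by inverse-transform Monte Carlo sampling, hour by hour. The quantile values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical hourly wind power quantiles (MW) recovered from a list of
# neural-network prediction intervals, e.g. the 25/50/75/90% PIs.
probs = np.array([0.05, 0.125, 0.25, 0.5, 0.75, 0.875, 0.95])
quantiles_by_hour = [
    np.array([10., 14., 20., 28., 37., 43., 50.]),   # hour 1
    np.array([ 5.,  9., 15., 24., 34., 41., 49.]),   # hour 2
    np.array([ 2.,  5., 10., 18., 27., 33., 40.]),   # hour 3
]

n_scenarios = 5
scenarios = np.empty((n_scenarios, len(quantiles_by_hour)))
for h, q in enumerate(quantiles_by_hour):
    # Fit an empirical CDF through the quantile points and sample from it by
    # inverse-transform (Monte Carlo) sampling; tails beyond the listed
    # probabilities are clamped to the outermost quantiles.
    u = rng.uniform(size=n_scenarios)
    scenarios[:, h] = np.interp(u, probs, q)

# Each row is one wind power scenario that could feed a stochastic
# security-constrained unit commitment model.
print(np.round(scenarios, 1))
```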

  4. Velocity field calculation for non-orthogonal numerical grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G. P.

    2015-03-01

    Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non-orthogonal grid, Darcy velocity components are rigorously derived in this study from fluxes normal to cell faces, which are assumed to be provided by or readily computed from porous-medium simulation code output. The normal fluxes are presumed to satisfy mass balances for every computational cell, and if so, the derived velocity fields are consistent with these mass balances. Derivations are provided for general two-dimensional quadrilateral and three-dimensional hexahedral systems, and for the commonly encountered special cases of perfectly vertical side faces in 2D and 3D and a rectangular footprint in 3D.
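
    Under the simplifying assumption of a uniform Darcy velocity within a cell, the 2-D idea can be sketched as a small linear solve: the fluxes normal to two non-parallel faces are two projections of the same velocity vector. The cell geometry, fluxes, and porosity below are hypothetical, and the report's derivations are more general than this illustration.

```python
import numpy as np

def cell_velocity_2d(nodes, q_bottom, q_right, porosity=0.3):
    """Recover a uniform Darcy velocity (vx, vy) inside a quadrilateral cell
    from the fluxes normal to two non-parallel faces, then convert to pore
    velocity. `nodes` lists the cell corners counter-clockwise."""
    p0, p1, p2, p3 = [np.asarray(p, float) for p in nodes]

    def unit_outward_normal(a, b):
        t = b - a                      # face tangent
        n = np.array([t[1], -t[0]])    # rotate -90 deg -> outward for CCW cells
        return n / np.linalg.norm(n)

    n_bottom = unit_outward_normal(p0, p1)   # face p0-p1
    n_right = unit_outward_normal(p1, p2)    # face p1-p2

    # Each reported flux is the projection of the (assumed uniform) Darcy
    # velocity on that face's outward normal: n . v = q. Two non-parallel
    # faces give a 2x2 linear system for (vx, vy).
    A = np.vstack([n_bottom, n_right])
    darcy_v = np.linalg.solve(A, np.array([q_bottom, q_right]))
    return darcy_v / porosity          # pore velocity for particle tracking

# Hypothetical non-orthogonal cell (no face aligned with an axis) and fluxes (m/d).
nodes = [(0.0, 0.0), (2.0, 0.3), (2.5, 1.8), (0.0, 1.2)]
print(cell_velocity_2d(nodes, q_bottom=-0.1, q_right=0.3))
```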

  5. Computational analysis of forebody tangential slot blowing on the high alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Gee, Ken

    1995-01-01

    A numerical analysis of forebody tangential slot blowing as a means of generating side force and yawing moment is conducted using an aircraft geometry. The Reynolds-averaged, thin-layer, Navier-Stokes equations are solved using a partially flux-split, approximately-factored algorithm. An algebraic turbulence model is used to determine the turbulent eddy viscosity values. Solutions are obtained using both patched and overset grid systems. In the patched grid model, an actuator plane is used to introduce jet variables into the flow field. The overset grid model is used to model the physical slot geometry and facilitate modeling of the full aircraft configuration. A slot optimization study indicates that a short slot located close to the nose of the aircraft provided the most side force and yawing moment per unit blowing coefficient. Comparison of computed surface pressure with that obtained in full-scale wind tunnel tests produces good agreement, indicating the numerical method and grid system used in the study are valid. Full aircraft computations resolve the changes in vortex burst point due to blowing. A time-accurate full-aircraft solution shows the effect of blowing on the changes in the frequency of the aerodynamic loads over the vertical tails. A study of the effects of freestream Mach number and various jet parameters indicates blowing remains effective through the transonic Mach range. An investigation of the force onset time lag associated with forebody blowing shows the lag to be minimal. The knowledge obtained in this study may be applied to the design of a forebody tangential slot blowing system for use on flight aircraft.

  6. Enabling a flexible exchange of energy of a photovoltaic plant with the grid by means of a controlled storage system

    NASA Astrophysics Data System (ADS)

    Lazzari, R.; Parma, C.; De Marco, A.; Bittanti, S.

    2015-07-01

    In this paper, we describe a control strategy for a photovoltaic (PV) power plant equipped with an energy storage system (ESS) based on lithium-ion batteries. The plant consists of the following units: the PV generator, the energy storage system, the DC bus and the inverter. The control, organised in a hierarchical manner, maximises the self-consumption of the local load unit. In particular, the ESS performs power balancing in case of low solar radiation or a surplus of PV generation, thus managing the variability of the power exchanged between the plant and the grid. The implemented control strategy is under testing in the RSE pilot test facility in Milan, Italy.
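
    A minimal rule-based sketch of the power-balancing role described above (not the hierarchical controller tested at RSE): the battery absorbs the local PV surplus or covers the deficit within power and state-of-charge limits, and the remainder is exchanged with the grid. All plant parameters are invented.

```python
import numpy as np

# One simulated day at 1-h steps: hypothetical PV output and local load (kW).
hours = np.arange(24)
pv = np.maximum(0.0, 80.0 * np.sin((hours - 6) / 12 * np.pi))   # kW
load = 40.0 + 10.0 * (hours >= 8) * (hours <= 20)               # kW

p_batt_max = 50.0          # charge/discharge power limit (kW)
e_capacity = 200.0         # usable energy capacity (kWh)
soc = 0.5 * e_capacity     # initial state of charge (kWh)
grid_exchange = []

for p_pv, p_load in zip(pv, load):
    # Ask the battery to absorb the local surplus (or cover the deficit)...
    p_batt = np.clip(p_pv - p_load, -p_batt_max, p_batt_max)
    # ...but respect the state-of-charge limits over the 1-h step.
    p_batt = np.clip(p_batt, -soc, e_capacity - soc)
    soc += p_batt                       # kWh change over a 1-h step
    # Whatever the battery cannot balance is exchanged with the grid
    # (positive = export to the grid, negative = import).
    grid_exchange.append(p_pv - p_load - p_batt)

print(np.round(grid_exchange, 1))
```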

  7. Independent Research and Independent Exploratory Development Annual Report Fiscal Year 1976 and FYTQ

    DTIC Science & Technology

    1976-10-01

    [OCR-extracted fragments from the report's subject index and text: Command Control; Natural Language Processing; Network Study; Small Ship C2 System Display Studies; Communication HF-Propagation; Signal Processing Theory; a wire-grid polarizer passing horizontally polarized light; and a digital cardiotachometer for the coronary care unit (Borkat, F.R., Kataoka, R.W., and Martin, J.I.).]

  8. Exploring New Models for Utility Distributed Energy Resource Planning and Integration: SMUD and Con Edison

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2018-01-23

    As a result of the rapid growth of renewable energy in the United States, the U.S. electric grid is undergoing a monumental shift away from its historical status quo. These changes are occurring at both the centralized and local levels and have been driven by a number of different factors, including large declines in renewable energy costs, federal and state incentives and mandates, and advances in the underlying technology. Higher levels of variable-generation renewable energy, however, may require new and increasingly complex methods for utilities to operate and maintain the grid while also attempting to limit the costly build-out of supporting grid infrastructure.

  9. A two-phase investment model for optimal allocation of phasor measurement units considering transmission switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousavian, Seyedamirabbas; Valenzuela, Jorge; Wang, Jianhui

    2015-02-01

    Ensuring the reliability of an electrical power system requires wide-area monitoring and full observability of the state variables. Phasor measurement units (PMUs) collect in real time synchronized phasors of voltages and currents which are used for the observability of the power grid. Due to the considerable cost of installing PMUs, it is not possible to equip all buses with PMUs. In this paper, we propose an integer linear programming model to determine the optimal PMU placement plan in two investment phases. In the first phase, PMUs are installed to achieve full observability of the power grid, whereas additional PMUs are installed in the second phase to guarantee the N - 1 observability of the power grid. The proposed model also accounts for transmission switching and single contingencies such as failure of a PMU or a transmission line. Results are provided on several IEEE test systems which show that our proposed approach is a promising enhancement to the methods available for the optimal placement of PMUs.
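
    A textbook single-phase observability ILP gives the flavor of the optimization (the paper's two-phase model with transmission switching and N - 1 contingencies is richer): minimize the number of PMUs such that every bus is observed by a PMU at its own bus or at an adjacent bus. The 7-bus topology below is hypothetical; the sketch uses scipy.optimize.milp.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical 7-bus system: adjacency of the transmission graph.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6)]
n = 7
A = np.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0   # a PMU at bus j observes bus i if connected

# Minimize the number of PMUs subject to every bus being observed by at
# least one PMU (A x >= 1), with binary placement variables x.
res = milp(c=np.ones(n),
           constraints=LinearConstraint(A, lb=np.ones(n), ub=np.full(n, np.inf)),
           integrality=np.ones(n),
           bounds=Bounds(0, 1))

placement = np.round(res.x).astype(int)
print("PMU buses:", np.flatnonzero(placement), "total:", placement.sum())
```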

  10. Robust and efficient method for matching features in omnidirectional images

    NASA Astrophysics Data System (ADS)

    Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan

    2018-04-01

    Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion-invariant descriptor. TPBRIEF directly enables keypoint detecting and feature describing on original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in a grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.
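
    The projection step underlying TPBRIEF can be sketched as follows: map an equirectangular pixel to the unit sphere, build a tangent-plane basis there, and lay out test-point offsets in that plane before back-projecting them. The image size, pixel, and offsets are illustrative and not taken from the paper.

```python
import numpy as np

def to_sphere_and_tangent(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a point on the unit sphere and
    return an orthonormal basis of the tangent plane at that point."""
    lon = (u / width) * 2.0 * np.pi - np.pi           # [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi          # [-pi/2, pi/2]
    p = np.array([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])                       # point on unit sphere

    # Tangent basis: east and north directions (undefined exactly at the poles).
    e_east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    e_north = np.cross(p, e_east)
    return p, e_east, e_north

# A BRIEF-style test offset laid out on the tangent plane (small angles)
# can then be back-projected to the sphere and on to distorted image pixels.
p, e_east, e_north = to_sphere_and_tangent(1200, 450, 2048, 1024)
offset = 0.01 * e_east + 0.02 * e_north
sample_dir = p + offset
sample_dir /= np.linalg.norm(sample_dir)   # back onto the unit sphere
print(np.round(p, 3), np.round(sample_dir, 3))
```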

  11. The self-organization of grid cells in 3D

    PubMed Central

    Stella, Federico; Treves, Alessandro

    2015-01-01

    Do we expect periodic grid cells to emerge in bats, or perhaps dolphins, exploring a three-dimensional environment? How long will it take? Our self-organizing model, based on firing-rate adaptation, points at a complex answer. The mathematical analysis leads to asymptotic states resembling face centered cubic (FCC) and hexagonal close packed (HCP) crystal structures, which are calculated to be very close to each other in terms of cost function. The simulation of the full model, however, shows that the approach to such asymptotic states involves several sub-processes over distinct time scales. The smoothing of the initially irregular multiple fields of individual units and their arrangement into hexagonal grids over certain best planes are observed to occur relatively quickly, even in large 3D volumes. The correct mutual orientation of the planes, though, and the coordinated arrangement of different units, take a longer time, with the network showing no sign of convergence towards either a pure FCC or HCP ordering. DOI: http://dx.doi.org/10.7554/eLife.05913.001 PMID:25821989

  12. Sinter of uniform, predictable, blemish-free nickel plaque for large aerospace nickel cadmium cells

    NASA Technical Reports Server (NTRS)

    Seiger, H. N.

    1975-01-01

    A series of nickel slurry compositions were tested. Important slurry parameters were found to be the nature of the binder, a pore former and the method of mixing. A slow roll mixing which is non-turbulent successfully eliminated entrapped air so that bubbles and pockets were avoided in the sinter. A slurry applicator was developed which enabled an equal quantity of slurry to be applied to both sides of the grid. Sintering in a furnace having a graded atmosphere characteristic, ranging from oxidizing to strongly reducing, improved adhesion of porous sinter to grid and resulted in a uniform welding of nickel particles to each other throughout the plaque. Sintering was carried out in a horizontal furnace having three heating zones and 16 heating control circuits. Tests used for plaque evaluation include (1) appearance, (2) grid location and adhesion, (3) mechanical strength, (4) thickness, (5) weight per unit area, (6) void volume per unit area, (7) surface area and (8) electrical resistance. Plaque material was impregnated using Heliotek proprietary processes and 100 AH cells were fabricated.

  13. Redistribution population data across a regular spatial grid according to buildings characteristics

    NASA Astrophysics Data System (ADS)

    Calka, Beata; Bielecka, Elzbieta; Zdunkiewicz, Katarzyna

    2016-12-01

    Population data are generally provided by state census organisations at predefined census enumeration units. However, these datasets are very often required at user-defined spatial units that differ from the census output levels. A number of population estimation techniques have been developed to address these problems. This article is one of those attempts, aimed at improving county-level population estimates by using spatial disaggregation models supported by building characteristics derived from the national topographic database and the average floor area of a flat. The experimental gridded population surface was created for Opatów county, a sparsely populated rural region located in central Poland. The method relies on geolocating population counts in buildings, taking into account building volume and structural building type, and then aggregating the population totals into a 1 km quadrilateral grid. The overall quality of the population distribution surface, expressed as the RMSE, equals 9 persons, and the MAE equals 0.01. We also discovered that nearly 20% of the total county area is unpopulated and that 80% of the people live on 33% of the county territory.
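
    A compact sketch of the dasymetric idea, with invented buildings and totals: the county population is allocated to buildings in proportion to volume weighted by structural type, and the building populations are then summed into 1 km cells.

```python
import numpy as np

# Hypothetical buildings: x/y (m), volume (m^3), and a structural-type weight
# (e.g., multi-family flats carry more people per m^3 than detached houses).
x = np.array([  300., 1800., 2400., 3300., 4100.])
y = np.array([  500., 1200., 2600.,  700., 3900.])
volume = np.array([900., 2500., 1200., 3000., 800.])
type_weight = np.array([1.0, 2.5, 1.0, 2.5, 1.0])

county_population = 5000.0
cell = 1000.0                                  # 1 km grid

# Allocate the county total to buildings proportionally to weighted volume,
# then sum the building populations falling into each 1 km cell.
w = volume * type_weight
building_pop = county_population * w / w.sum()

ix = (x // cell).astype(int)
iy = (y // cell).astype(int)
grid = np.zeros((iy.max() + 1, ix.max() + 1))
np.add.at(grid, (iy, ix), building_pop)

print(np.round(grid).astype(int))
```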

  14. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. The time-consuming computations limit applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application programming interface is integrated with SWATG (called SWATGP) to accelerate grid modeling at the HRU level. Such a parallel implementation takes better advantage of the computational power of a shared-memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling over a roughly 2000 km2 watershed with a 1 CPU, 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computation of environmental models is beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management, in addition to offering data fusion and model coupling ability.
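
    OpenMP itself is a compiler-directive API for Fortran/C code; as a loose Python analogue of parallelizing the independent per-HRU loop, the sketch below farms a stand-in HRU computation out to worker processes and checks that the results match the sequential run. The per-HRU workload is invented.

```python
from concurrent.futures import ProcessPoolExecutor
import math

def simulate_hru(hru_id: int) -> float:
    """Stand-in for one gridded HRU's daily water-balance computation."""
    runoff = 0.0
    for day in range(20000):           # CPU-bound work so speedup is visible
        runoff += math.sqrt((hru_id + 1) * (day + 1)) % 3.7
    return runoff

if __name__ == "__main__":
    hru_ids = range(200)               # independent gridded HRUs

    # Sequential reference (analogous to SWATG's original HRU loop).
    serial = [simulate_hru(h) for h in hru_ids]

    # Parallel version: SWATGP uses OpenMP threads over the HRU loop;
    # here independent HRUs are farmed out to worker processes instead.
    with ProcessPoolExecutor(max_workers=8) as pool:
        parallel = list(pool.map(simulate_hru, hru_ids, chunksize=10))

    assert all(abs(a - b) < 1e-9 for a, b in zip(serial, parallel))
    print("parallel results match the sequential run")
```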

  15. Navier-Stokes Flowfield Simulation of Boeing 747-200 as Platform for SOFIA

    NASA Technical Reports Server (NTRS)

    Srinivasan, G.R.

    1994-01-01

    Steady and unsteady viscous, three-dimensional flowfields are calculated using a thin-layer approximation of the Navier-Stokes equations in conjunction with Chimera overset grids. The finite-difference numerical scheme uses structured grids and a pentadiagonal flow solver called "OVERFLOW". The Boeing 747-200 has been chosen as one of the configurations to be used as a platform for SOFIA (Stratospheric Observatory For Infrared Astronomy). Initially, the steady flowfield of the full aircraft is calculated for the clean configuration (without a cavity to house the telescope). This solution is then used to start the unsteady flowfield computation for a configuration containing the cavity housing the observation telescope and its peripheral units. Analysis of the unsteady flowfield in the cavity and its influence on the tail empennage, as well as the noise due to turbulence and the optical quality of the flow, are the main focus of this study. For the configuration considered here, the telescope housing cavity is located slightly downstream of the port wing. The entire flowfield grid system is carefully constructed using 45 overset grids and consists of nearly 4 million grid points. All the computations are done at one freestream flow condition of M(sub infinity) = 0.85, alpha = 2.5deg, and a Reynolds number of Re = 1.85x10deg

  16. A technological review on electric vehicle DC charging stations using photovoltaic sources

    NASA Astrophysics Data System (ADS)

    Youssef, Cheddadi; Fatima, Errahimi; najia, Es-sbai; Chakib, Alaoui

    2018-05-01

    Within the next few years, electrified vehicles are set to become an essential component of the transport sector. Consequently, the charging infrastructure should be developed at the same time. Within this infrastructure, photovoltaic-assisted charging stations are attracting substantial interest due to increased environmental awareness, cost reductions and rising efficiency of PV modules. The intention of this paper is to review the technological status of photovoltaic-electric vehicle (PV-EV) charging stations during the last decade. PV-EV charging stations are divided into two categories: PV-grid and PV-standalone charging systems. From a practical point of view, the distinction between the two architectures is the bidirectional inverter, which is added to link the station to the smart grid. The technological infrastructure includes the common hardware components of every station, namely a PV array, a dc-dc converter with MPPT control, an energy storage unit, a bidirectional dc charger and an inverter. We investigate, compare and evaluate a number of valuable studies on the design and control of PV-EV charging systems. Additionally, this concise overview covers charging standards, power converter topologies that focus on the adoption of vehicle-to-grid technology, and the control of both PV-grid and PV-standalone DC charging systems.
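
    One of the building blocks named above, MPPT control of the dc-dc converter, is commonly implemented as a perturb-and-observe loop; a generic sketch (with a toy P-V curve, not taken from any of the reviewed designs) is shown below.

```python
def pv_power(voltage):
    """Hypothetical PV array P-V curve with a single maximum near 33 V."""
    current = max(0.0, 8.0 * (1.0 - (voltage / 44.0) ** 7))   # toy I-V shape
    return voltage * current

def perturb_and_observe(v0=20.0, step=0.5, iterations=60):
    """Generic perturb-and-observe MPPT: keep stepping the operating voltage
    in the direction that increased power, reverse otherwise."""
    v, p_prev, direction = v0, pv_power(v0), +1
    for _ in range(iterations):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped -> we stepped past the MPP
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"operating point ~{v_mpp:.1f} V, {p_mpp:.0f} W")
```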

  17. BECCS Market Launch Strategy Aiming to Help Ensure Reliable Grid Power at High Penetrations of IRE (Intermittent Renewable Electricity)

    NASA Astrophysics Data System (ADS)

    WIlliams, R. H.

    2017-12-01

    Despite its recognized importance for carbon (C)-mitigation, progress in advancing biomass energy with CO2 capture and sequestration (BECCS) has been slow. A BECCS market launch strategy based on technologies ready for commercial-scale demonstration is discussed—based on co-gasification of coal and biomass to make H2 with CCS. H2 so produced would be a key element of a H2 balancing capacity (H2-BC) strategy for ensuring reliable grid power at high IRE penetrations. High grid penetrations of IRE must be complemented by fast-ramping balancing (backup and/or storage) capacity (BC) to ensure reliable grid power. BC provided now by natural gas-fired gas turbine combined cycle and combustion turbine units would eventually have to be decarbonized to realize C-mitigation goals, via CCS or other means. Capital-intensive CCS energy systems require baseload operation to realize favourable economics, but at high IRE penetrations, BC plants must be operated at low capacity factors. A H2-BC strategy is a promising way to address this challenge. The elements of a H2-BC system are: (a) H2 production from carbonaceous feedstocks in baseload plants with CCS; (b) H2 consumption in fast-ramping BC units that operate at low capacity factors; (c) Buffer underground H2 storage to enable decoupling baseload H2 production from highly variable H2 consumption by BC units. The concept is likely to "work" because underground H2 storage is expected to be inexpensive. A H2 production analysis is presented for a negative GHG-emitting H2-BC system based on cogasification of corn stover and coal, with captured CO2 used for enhanced oil recovery. The technical readiness of each system component is discussed, and preliminary insights are offered as to the conditions under which the corresponding H2-BC system might compete with natural gas in providing backup for IRE on US electric grids. Public policy to help advance this strategy might be forthcoming, because 2 US Senate bills with broad bipartisan support might become law soon: (a) S.1535, which extends and expands 45Q tax credits for CO2 EOR; and (b) S.1460. Inter alia, S.1460 authorizes DOE to spend $22 million/year during 2018-2022 to support of Front End Engineering and Design studies for net-negative CO2 emissions projects based on thermochemical coal/biomass coprocessing with CCS.

  18. How Low Can You Go? The Importance of Quantifying Minimum Generation Levels for Renewable Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denholm, Paul L; Brinkman, Gregory L; Mai, Trieu T

    One of the significant limitations of solar and wind deployment is declining value caused by the limited correlation of renewable energy supply and electricity demand as well as limited flexibility of the power system. Limited flexibility can result from thermal and hydro plants that cannot turn off or reduce output due to technical or economic limits. These limits include the operating range of conventional thermal power plants, the need for process heat from combined heat and power plants, and restrictions on hydro unit operation. To appropriately analyze regional and national energy policies related to renewable deployment, these limits must be accurately captured in grid planning models. In this work, we summarize data sources and methods for U.S. power plants that can be used to capture minimum generation levels in grid planning tools, such as production cost models. We also provide case studies for two locations in the U.S. (California and Texas) that demonstrate the sensitivity of variable generation (VG) curtailment to grid flexibility assumptions, which shows the importance of analyzing (and documenting) minimum generation levels in studies of increased VG penetration.

  19. Considering the spatial-scale factor when modelling sustainable land management.

    NASA Astrophysics Data System (ADS)

    Bouma, Johan

    2015-04-01

    Considering the spatial-scale factor when modelling sustainable land management. J. Bouma, Em. Prof. Soil Science, Wageningen University, Netherlands. Modelling soil-plant processes is a necessity when exploring future effects of climate change and innovative soil management on agricultural productivity. Soil data are needed to run models, and traditional soil maps and the associated databases (based on various soil taxonomies) have been widely applied to provide such data, obtained at "representative" points in the field. Pedotransfer functions (PTFs) are used to feed simulation models, statistically relating soil survey data (obtained at a given point in the landscape) to physical parameters for simulation, thus providing a link with soil functionality. Soil science has a basic problem: its object of study is invisible. Only point data are obtained by augering or in pits; only occasionally do road cuts provide a better view. Extrapolating point data to areas is essential for all applications and presents a basic problem for soil science, because mapping units on soil maps, named for a given soil type, may also contain other soil types, and quantitative information about the composition of soil map units is usually not available. For detailed work at farm level (1:5,000-1:10,000), an alternative procedure is proposed. On-site soil observations are made in a grid pattern with spacings based on a geostatistical analysis. Multi-year simulations are made for each point for the functional properties that are relevant to the case being studied, such as the moisture supply capacity and nitrate leaching, under standardized boundary conditions to allow comparisons. Functional spatial units are derived next by aggregating the functional point data. These units, which have successfully functioned as the basis for precision agriculture, do not necessarily correspond with taxonomic units, but when they do, the taxonomic names should be noted. At landscape and watershed scales (1:25,000-1:50,000), digital soil mapping can provide soil data for small grids that can be used for modelling, again through pedotransfer functions. There is a risk, however, that digital mapping results in an isolated series of projects that do not increase the knowledge base on soil functionality, e.g. linking taxonomic names (such as soil series) to functionality, allowing predictions of soil behavior at new sites where certain soil series occur. We therefore suggest that, aside from the 13 soil characteristics collected for each grid cell in digital soil mapping, the taxonomic name of the representative soil in the grid cell also be recorded. At spatial scales of 1:50,000 and smaller, use of taxonomic names becomes ever more attractive because at such small scales relations between soil types and landscape features become more pronounced. But in all cases, selection of procedures should be based not on science alone but on the type of questions being asked, including their level of generalization. These questions are quite different at the different spatial-scale levels, and so should be the procedures.

  20. Illinois, Indiana, and Ohio Magnetic and Gravity Maps and Data: A Website for Distribution of Data

    USGS Publications Warehouse

    Daniels, David L.; Kucks, Robert P.; Hill, Patricia L.

    2008-01-01

    This web site gives the results of a USGS project to acquire the best available, public-domain aeromagnetic and gravity data in the United States and merge these data into uniform, composite grids for each state. The results for the three states Illinois, Indiana, and Ohio are presented here in one site. Files of aeromagnetic and gravity grids and images for these states are available for downloading. In Illinois, Indiana, and Ohio, 19 magnetic surveys have been knit together to form a single digital grid and map. In addition, a complete Bouguer gravity anomaly grid and map were generated from 128,227 gravity station measurements in and adjacent to Illinois, Indiana, and Ohio. A further map shows the location of the aeromagnetic surveys, color-coded by survey flight-line spacing. This project was supported by the Mineral Resource Program of the USGS.

  1. Global gridded crop specific agricultural areas from 1961-2014

    NASA Astrophysics Data System (ADS)

    Konar, M.; Jackson, N. D.

    2017-12-01

    Current global cropland datasets are limited in crop specificity and temporal resolution. Time series maps of crop specific agricultural areas would enable us to better understand the global agricultural geography of the 20th century. To this end, we develop a global gridded dataset of crop specific agricultural areas from 1961-2014. To do this, we downscale national cropland information using a probabilistic approach. Our method relies upon gridded Global Agro-Ecological Zones (GAEZ) maps, the History Database of the Global Environment (HYDE), and crop calendars from Sacks et al. (2010). We estimate crop-specific agricultural areas for a 0.25 degree spatial grid and annual time scale for all major crops. We validate our global estimates for the year 2000 with Monfreda et al. (2008) and our time series estimates within the United States using government data. This database will contribute to our understanding of global agricultural change of the past century.
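
    In its simplest form, the downscaling step can be sketched as allocating a national crop total across grid cells in proportion to a weight built from suitability and cropland extent, capped by the available cropland in each cell; real methods iterate or renormalize after capping. The grids and totals below are invented.

```python
import numpy as np

# Hypothetical 4x4 grid for one country and one crop: GAEZ-style suitability
# (0-1) and HYDE-style cropland fraction for a given year.
suitability = np.array([[0.9, 0.7, 0.2, 0.0],
                        [0.8, 0.6, 0.3, 0.1],
                        [0.4, 0.5, 0.2, 0.0],
                        [0.1, 0.2, 0.1, 0.0]])
cropland_frac = np.array([[0.6, 0.5, 0.1, 0.0],
                          [0.7, 0.4, 0.2, 0.0],
                          [0.3, 0.3, 0.1, 0.0],
                          [0.1, 0.1, 0.0, 0.0]])
cell_area_ha = 50_000.0           # area of each grid cell

# National harvested area reported for this crop in a given year.
national_area_ha = 120_000.0

# Allocate the national total to cells in proportion to a weight combining
# suitability and cropland extent, then cap each cell at its cropland area.
weight = suitability * cropland_frac * cell_area_ha
alloc = national_area_ha * weight / weight.sum()
alloc = np.minimum(alloc, cropland_frac * cell_area_ha)

print(np.round(alloc).astype(int))
print("allocated:", int(alloc.sum()), "of", int(national_area_ha), "ha")
```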

  2. Expanded serial communication capability for the transport systems research vehicle laptop computers

    NASA Technical Reports Server (NTRS)

    Easley, Wesley C.

    1991-01-01

    A recent upgrade of the Transport Systems Research Vehicle (TSRV) operated by the Advanced Transport Operating Systems Program Office at the NASA Langley Research Center included installation of a number of Grid 1500 series laptop computers. Each unit is a 80386-based IBM PC clone. RS-232 data busses are needed for TSRV flight research programs, and it has been advantageous to extend the application of the Grids in this area. Use was made of the expansion features of the Grid internal bus to add a user programmable serial communication channel. Software to allow use of the Grid bus expansion has been written and placed in a Turbo C library for incorporation into applications programs in a transparent manner via function calls. Port setup; interrupt-driven, two-way data transfer; and software flow control are built into the library functions.

  3. Solar on the Rise: How Cost Declines and Grid Integration Shape Solar's Growth Potential in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Wesley J; Denholm, Paul L; Feldman, David J

    During the past decade, solar power has experienced transformative price declines, enabling it to become a viable electricity source that is supplying 1% of U.S. and world electricity. Further cost reductions are expected to enable substantially greater solar deployment, and new Department of Energy cost targets for utility-scale photovoltaics (PV) and concentrating solar thermal power are $0.03/kW h and $0.05/kW h by 2030, respectively. However, cost reductions are no longer the only significant challenge for PV - addressing grid integration challenges and increasing grid flexibility are critical as the penetration of PV electricity on the grid increases. The development ofmore » low cost energy storage is particularly synergistic with low cost PV, as cost declines in each technology are expected to support greater market opportunities for the other.« less

  4. Draft reference grid cells for emergency response reconnaissance developed for use by the US Environmental Protection Agency [ER.QUADS6K_EPA

    EPA Pesticide Factsheets

    Draft reference grid cells for emergency response reconnaissance developed for use by the US Environmental Protection Agency. Grid cells are based on densification of the USGS Quarterquad (1:12,000 scale or 12K) grids for the continental United States, Alaska, Hawaii and Puerto Rico and are roughly equivalent to 1:6000 scale (6K) quadrangles approximately 2 miles long on each side. Note: This file is >80MB in size. Regional subsets have been created from this national file that include a 20 mile buffer of tiles around each EPA Region. To access the regional subsets, go to http://geodata.epa.gov/OSWER/6kquads_epa.zip and select the name of the file that corresponds to your region of interest (e.g. 6kquadr1.zip is the name of the file created for EPA Region 1).

  5. Calculations of Flowfield About Indented Nosetips,

    DTIC Science & Technology

    1982-08-23

    agreement is good. UNCLASSIFIED SECURITY CLASSIFICATION OF THIS PAOE(ft,. Date E -t. , - NSWC TR 82-286 FOREWORD A finite difference computer program has been...Specific heat at constant pressure and volume respectively e Total energy per unit volume E ,F,H,R,S,T Functions of U AHT, HT Error in total enthalpy and...total enthalpy respectively ijGrid index in E and n directions respectively SI Identity matrix J,K Maximum grid point in E and n directions respectively

  6. Social stimuli enhance phencyclidine (PCP) self-administration in rhesus monkeys

    PubMed Central

    Newman, Jennifer L.; Perry, Jennifer L.; Carroll, Marilyn E.

    2007-01-01

    Environmental factors, including social interaction, can alter the effects of drugs of abuse on behavior. The present study was conducted to examine the effects of social stimuli on oral phencyclidine (PCP) self-administration by rhesus monkeys. Ten adult rhesus monkeys (M. mulatta) were housed side by side in modular cages that could be configured to provide visual, auditory, and olfactory stimuli provided by another monkey located in the other side of the paired unit. During the first experiment, monkeys self-administered PCP (0.25 mg/ml) and water under concurrent fixed ratio (FR) 16 schedules of reinforcement with either a solid or a grid (social) partition separating each pair of monkeys. In the second experiment, a PCP concentration-response relationship was determined under concurrent progressive ratio (PR) schedules of reinforcement under the solid and grid partition conditions. Under the concurrent FR 16 schedules, PCP and water self-administration was significantly higher during exposure to a cage mate through a grid partition than when a solid partition separated the monkeys. The relative reinforcing strength of PCP, as measured by PR break points, was greater during the grid partition condition compared to the solid partition condition indicated by an upward shift in the concentration-response curve. To determine whether the social stimuli provided by another monkey led to activation of the hypothalamic-pituitary-adrenal (HPA) axis, which may have evoked the increase of PCP self-administration during the grid partition condition, a third experiment was conducted to examine cortisol levels under the two housing conditions. A modest, but nonsignificant increase in cortisol levels was found upon switching from the solid to the grid partition condition. The results suggest that social stimulation among monkeys in adjoining cages leads to enhanced reinforcing strength of PCP. PMID:17560636

  7. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    NASA Astrophysics Data System (ADS)

    Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.

    2013-04-01

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.
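
    Interior (analysis) grid nudging amounts to adding a Newtonian relaxation term that pulls the model state toward the driving fields on a chosen time scale. A 1-D toy sketch, with invented fields and a made-up model drift, is shown below; it is not the WRF implementation.

```python
import numpy as np

# Toy 1-D "temperature" field: the model tendency drifts the field, and a
# relaxation term pulls it back toward the driving reanalysis.
nx, dt, nsteps = 50, 600.0, 144                 # 600 s steps, one day
g_nudge = 1.0 / (6.0 * 3600.0)                  # 6-hour relaxation time scale

x = np.linspace(0.0, 1.0, nx)
analysis = 290.0 + 5.0 * np.sin(2.0 * np.pi * x)   # large-scale driving field
model = np.full(nx, 288.0)                          # RCM field with a bias

def physics_tendency(field):
    """Stand-in for the model's own tendencies (here: a uniform warm drift)."""
    return np.full_like(field, 2.0 / 86400.0)       # +2 K per day

for _ in range(nsteps):
    nudging = g_nudge * (analysis - model)          # Newtonian relaxation term
    model = model + dt * (physics_tendency(model) + nudging)

print("mean abs departure from analysis: %.2f K" % np.abs(model - analysis).mean())
```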

  8. Efficient grid-based techniques for density functional theory

    NASA Astrophysics Data System (ADS)

    Rodriguez-Hernandez, Juan Ignacio

    Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models for these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. The overarching goal of much research in quantum chemistry---and the topic of this dissertation---is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1,2,3,6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
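
    The conditional distribution transformation mentioned above can be illustrated in one dimension: quadrature points placed uniformly in the unit interval are pushed through the inverse cumulative distribution of a model density, which concentrates points where the density is large. The following Python sketch shows only that mapping idea, not the deMon2k implementation; the two-Gaussian "pro-molecular" density and its parameters are invented for illustration.

      import numpy as np

      def model_density(x):
          """Toy stand-in for a pro-molecular density: two Gaussians on [0, 10]."""
          return np.exp(-0.5 * ((x - 3.0) / 0.4) ** 2) + 0.6 * np.exp(-0.5 * ((x - 7.0) / 0.6) ** 2)

      # Tabulate the density on a fine auxiliary grid and build its normalized CDF.
      x_fine = np.linspace(0.0, 10.0, 4001)
      rho = model_density(x_fine)
      cdf = np.cumsum(rho)
      cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])

      # Uniform points in the unit interval (the "unit cube" in 1D) ...
      u = (np.arange(64) + 0.5) / 64.0
      # ... mapped to real space through the inverse CDF: more points where rho is large.
      x_mapped = np.interp(u, cdf, x_fine)
      print(x_mapped[:5])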

  9. Vector-based navigation using grid-like representations in artificial agents.

    PubMed

    Banino, Andrea; Barry, Caswell; Uria, Benigno; Blundell, Charles; Lillicrap, Timothy; Mirowski, Piotr; Pritzel, Alexander; Chadwick, Martin J; Degris, Thomas; Modayil, Joseph; Wayne, Greg; Soyer, Hubert; Viola, Fabio; Zhang, Brian; Goroshin, Ross; Rabinowitz, Neil; Pascanu, Razvan; Beattie, Charlie; Petersen, Stig; Sadik, Amir; Gaffney, Stephen; King, Helen; Kavukcuoglu, Koray; Hassabis, Demis; Hadsell, Raia; Kumaran, Dharshan

    2018-05-01

    Deep neural networks have achieved impressive successes in fields ranging from object recognition to complex games such as Go 1,2 . Navigation, however, remains a substantial challenge for artificial agents, with deep neural networks trained by reinforcement learning 3-5 failing to rival the proficiency of mammalian spatial behaviour, which is underpinned by grid cells in the entorhinal cortex 6 . Grid cells are thought to provide a multi-scale periodic representation that functions as a metric for coding space 7,8 and is critical for integrating self-motion (path integration) 6,7,9 and planning direct trajectories to goals (vector-based navigation) 7,10,11 . Here we set out to leverage the computational functions of grid cells to develop a deep reinforcement learning agent with mammal-like navigational abilities. We first trained a recurrent network to perform path integration, leading to the emergence of representations resembling grid cells, as well as other entorhinal cell types 12 . We then showed that this representation provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments-optimizing the primary objective of navigation through deep reinforcement learning. The performance of agents endowed with grid-like representations surpassed that of an expert human and comparison agents, with the metric quantities necessary for vector-based navigation derived from grid-like units within the network. Furthermore, grid-like representations enabled agents to conduct shortcut behaviours reminiscent of those performed by mammals. Our findings show that emergent grid-like representations furnish agents with a Euclidean spatial metric and associated vector operations, providing a foundation for proficient navigation. As such, our results support neuroscientific theories that see grid cells as critical for vector-based navigation 7,10,11 , demonstrating that the latter can be combined with path-based strategies to support navigation in challenging environments.

  10. Breakeven Prices for Photovoltaics on Supermarkets in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ong, S.; Clark, N.; Denholm, P.

    The photovoltaic (PV) breakeven price is the PV system price at which the cost of PV-generated electricity equals the cost of electricity purchased from the grid. This point is also called 'grid parity' and can be expressed as dollars per watt ($/W) of installed PV system capacity. Achieving the PV breakeven price depends on many factors, including the solar resource, local electricity prices, customer load profile, PV incentives, and financing. In the United States, where these factors vary substantially across regions, breakeven prices vary substantially across regions as well. In this study, we estimate current and future breakeven prices for PV systems installed on supermarkets in the United States. We also evaluate key drivers of current and future commercial PV breakeven prices by region. The results suggest that breakeven prices for PV systems installed on supermarkets vary significantly across the United States. Non-technical factors -- including electricity rates, rate structures, incentives, and the availability of system financing -- drive breakeven prices more than technical factors like solar resource or system orientation. In 2020 (where we assume higher electricity prices and lower PV incentives), under base-case assumptions, we estimate that about 17% of supermarkets will be in utility territories where breakeven conditions exist at a PV system price of $3/W; this increases to 79% at $1.25/W (the DOE SunShot Initiative's commercial PV price target for 2020). These percentages increase to 26% and 91%, respectively, when rate structures favorable to PV are used.
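
    As a rough illustration of the breakeven concept defined above, the sketch below equates a simple levelized cost of PV electricity with a flat retail rate and solves for the installed system price in $/W. The discount rate, lifetime, capacity factor, and retail rate are placeholder assumptions, not values from the study, and O&M costs and incentives are ignored.

      def crf(rate, years):
          """Capital recovery factor for an annual discount rate and lifetime."""
          return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

      def breakeven_price_per_watt(retail_rate, capacity_factor, discount_rate=0.07, lifetime=25):
          """PV system price ($/W) at which the levelized PV cost equals the retail rate.

          LCOE of a system costing p $/W is (p * 1000 * CRF) / (8760 * capacity_factor);
          setting LCOE equal to retail_rate and solving for p gives the breakeven price.
          """
          annual_kwh_per_kw = 8760.0 * capacity_factor
          return retail_rate * annual_kwh_per_kw / (1000.0 * crf(discount_rate, lifetime))

      # Example: $0.12/kWh retail rate and an 18% capacity factor
      print(round(breakeven_price_per_watt(0.12, 0.18), 2), "$/W")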

  11. Comprehensive framework for visualizing and analyzing spatio-temporal dynamics of racial diversity in the entire United States

    PubMed Central

    Netzel, Pawel

    2017-01-01

    The United States is increasingly becoming a multi-racial society. To understand the multiple consequences of this overall trend for our neighborhoods, we need a methodology capable of spatio-temporal analysis of racial diversity at the local level but also across the entire U.S. Furthermore, such methodology should be accessible to stakeholders ranging from analysts to decision makers. In this paper we present a comprehensive framework for visualizing and analyzing diversity data that fulfills such requirements. The first component of our framework is a U.S.-wide, multi-year database of race sub-population grids which is freely available for download. These 30 m resolution grids have been developed using dasymetric modeling and are available for 1990, 2000, and 2010. We summarize numerous advantages of gridded population data over commonly used Census tract-aggregated data. Using these grids frees analysts from constructing their own and allows them to focus on diversity analysis. The second component of our framework is a set of U.S.-wide, multi-year diversity maps at 30 m resolution. A diversity map is our product that classifies the gridded population into 39 communities based on their degrees of diversity, dominant race, and population density. It provides spatial information on diversity in a single, easy-to-understand map that can be utilized by analysts and end users alike. Maps based on subsequent Censuses provide information about spatio-temporal dynamics of diversity. Diversity maps are accessible through the GeoWeb application SocScape (http://sil.uc.edu/webapps/socscape_usa/) for an immediate online exploration. The third component of our framework is a proposal to quantitatively analyze diversity maps using a set of landscape metrics. Because of its form, a grid-based diversity map could be thought of as a diversity “landscape” and analyzed quantitatively using landscape metrics. We give a brief summary of the most pertinent metrics and demonstrate how they can be applied to diversity maps. PMID:28358862

  12. The role of pre-field operations at four forest inventory units: We can see the trees, not just the forest

    Treesearch

    Sara A. Goeking; Greg C. Liknes

    2009-01-01

    The Forest Inventory and Analysis (FIA) program attempts to inventory all forested lands throughout the United States. Each of the four FIA units has developed a process to minimize inventory costs by refraining from visiting those plots in the national inventory grid that are undoubtedly nonforest. We refer to this process as pre-field operations. Until recently, the...

  13. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring

    PubMed Central

    Gharavi, Hamid; Hu, Bin

    2018-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network. PMID:29503505

  14. Backshort-Under-Grid arrays for infrared astronomy

    NASA Astrophysics Data System (ADS)

    Allen, C. A.; Benford, D. J.; Chervenak, J. A.; Chuss, D. T.; Miller, T. M.; Moseley, S. H.; Staguhn, J. G.; Wollack, E. J.

    2006-04-01

    We are developing a kilopixel, filled bolometer array for space infrared astronomy. The array consists of three individual components, to be merged into a single working unit: (1) a transition edge sensor bolometer array, operating in the millikelvin regime, (2) a quarter-wave backshort grid, and (3) a superconducting quantum interference device multiplexer readout. The detector array is designed as a filled, square grid of suspended silicon bolometers with superconducting sensors. The backshort arrays are fabricated separately and will be positioned in the cavities created behind each detector during fabrication. The grids have a unique interlocking feature machined into the walls for positioning and mechanical stability. The spacing of the backshort beneath the detector grid can be set from ~30 to 300 μm by independently adjusting two process parameters during fabrication. The ultimate goal is to develop a large-format array architecture with background-limited sensitivity, suitable for a wide range of wavelengths and applications, to be directly bump bonded to a multiplexer circuit. We have produced prototype two-dimensional arrays having 8×8 detector elements. We present the detector design, a fabrication overview, and assembly technologies.

  15. An Overview of Distributed Microgrid State Estimation and Control for Smart Grids

    PubMed Central

    Rana, Md Masud; Li, Li

    2015-01-01

    Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. Then this article proposes a discrete-time linear quadratic regulation to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid therefore forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF based microgrid SE and control algorithm provides accurate SE and control compared with the existing method. PMID:25686316
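
    For readers unfamiliar with the estimator the paper builds on, the sketch below runs a generic discrete-time Kalman filter predict/update cycle. The two-state system matrices, noise covariances, and measurements are arbitrary placeholders, not the microgrid model or the accuracy-dependent variant proposed in the paper.

      import numpy as np

      # Placeholder linear system: x_{k+1} = A x_k + w_k,  z_k = H x_k + v_k
      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      Q = 1e-4 * np.eye(2)    # process noise covariance (assumed)
      R = np.array([[1e-2]])  # measurement noise covariance (assumed)

      def kf_step(x, P, z):
          """One predict/update cycle of the standard Kalman filter."""
          x_pred = A @ x
          P_pred = A @ P @ A.T + Q
          S = H @ P_pred @ H.T + R
          K = P_pred @ H.T @ np.linalg.inv(S)
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(2) - K @ H) @ P_pred
          return x_new, P_new

      x, P = np.zeros(2), np.eye(2)
      for z in np.array([[0.11], [0.22], [0.35]]):   # toy measurements
          x, P = kf_step(x, P, z)
      print(x)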

  16. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring.

    PubMed

    Gharavi, Hamid; Hu, Bin

    2017-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network.

  17. Internet-based wide area measurement applications in deregulated power systems

    NASA Astrophysics Data System (ADS)

    Khatib, Abdel-Rahman Amin

    Since the deregulation of power systems was started in 1989 in the UK, many countries have been motivated to undergo deregulation. The United States started deregulation in the energy sector in California back in 1996. Since that time many other states have also started deregulation procedures in different utilities. Most of the deregulation market in the United States now is in the wholesale market area; however, the retail market is still undergoing changes. Deregulation has many impacts on power system network operation and control. The number of power transactions among the utilities has increased and many Independent Power Producers (IPPs) now have a rich market for competition, especially in the green power market. The Federal Energy Regulatory Commission (FERC) called upon utilities to develop the Regional Transmission Organization (RTO). The RTO is a step toward the national transmission grid. An RTO is an independent entity that will operate the transmission system in a large region. The main goal of forming RTOs is to increase the operation efficiency of the power network under the impact of the deregulated market. The objective of this work is to study Internet-based Wide Area Information Sharing (WAIS) applications in the deregulated power system. The study is the first step toward building a national transmission grid picture using information sharing among utilities. Two main topics are covered as applications for the WAIS in the deregulated power system: state estimation and Total Transfer Capability (TTC) calculations. As a first step for building this national transmission grid picture, WAIS and the level of information sharing of the state estimation calculations have been discussed. WAIS impacts on the TTC calculations are also covered. A new technique to update the TTC using online measurements based on WAIS created by sharing state estimation is presented.

  18. Disaggregating and mapping crop statistics using hypertemporal remote sensing

    NASA Astrophysics Data System (ADS)

    Khan, M. R.; de Bie, C. A. J. M.; van Keulen, H.; Smaling, E. M. A.; Real, R.

    2010-02-01

    Governments compile their agricultural statistics in tabular form by administrative area, which gives no clue to the exact locations where specific crops are actually grown. Such data are poorly suited for early warning and assessment of crop production. 10-daily satellite image time series of Andalucia, Spain, acquired since 1998 by the SPOT Vegetation instrument, in combination with reported crop area statistics, were used to produce the required crop maps. Firstly, the 10-daily (1998-2006) 1-km resolution SPOT-Vegetation NDVI images were used to stratify the study area into 45 map units through an iterative unsupervised classification process. Each unit represents an NDVI profile showing changes in vegetation greenness over time, which is assumed to relate to the types of land cover and land use present. Secondly, the areas of the NDVI units and the reported cropped areas by municipality were used to disaggregate the crop statistics. Adjusted R-squares were 98.8% for rainfed wheat, 97.5% for rainfed sunflower, and 76.5% for barley. Relating statistical data on areas cropped by municipality with the NDVI-based unit map showed that the selected crops were significantly related to specific NDVI-based map units. Other NDVI profiles did not relate to the studied crops and represented other types of land use or land cover. The results were validated using primary field data. These data were collected by the Spanish government from 2001 to 2005 through grid sampling within agricultural areas; each grid (block) contains three 700 m × 700 m segments. The validation showed 68%, 31% and 23% variability explained (adjusted R-squares) between the three produced maps and the thousands of segment data. These relatively low values are mainly caused by variability within the delineated NDVI units, which are internally heterogeneous; variability between units is properly captured. The maps must accordingly be considered "small scale maps". Because they are based on hypertemporal images, these maps can be used to monitor crop performance of specific cropped areas, and early warning thus becomes more location- and crop-specific.
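
    The disaggregation step can be read as a regression of the reported cropped area per municipality on the areas of the NDVI-based map units present in that municipality; the fitted non-negative coefficients then act as the fraction of each unit occupied by the crop. The sketch below illustrates that idea with a non-negative least-squares fit; it is not the authors' exact procedure, and the numbers are invented so that the known fractions (0.4, 0.2, 0.1) are recovered.

      import numpy as np
      from scipy.optimize import nnls

      # Rows: municipalities; columns: area (ha) of each NDVI map unit inside that municipality.
      unit_areas = np.array([
          [120.0,  30.0,  50.0],
          [ 80.0,  90.0,  10.0],
          [ 20.0, 150.0,  40.0],
          [ 60.0,  60.0,  60.0],
      ])
      # Reported cropped area (ha) per municipality, built here from fractions (0.4, 0.2, 0.1).
      reported_crop = unit_areas @ np.array([0.4, 0.2, 0.1])

      # Non-negative coefficients = fraction of each NDVI unit planted with the crop.
      fractions, _ = nnls(unit_areas, reported_crop)
      print("crop fraction per NDVI unit:", np.round(fractions, 3))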

  19. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    NASA Astrophysics Data System (ADS)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on the assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing the single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N^3 log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of canonical N-vectors via the quantics tensor approximation. For justification, we prove an upper bound on the quantics ranks for the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar products with a function, integration or differentiation, which can be performed easily in tensor arithmetic on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.
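
    Schematically (in our notation, not taken verbatim from the paper), the assembled summation rests on the multilinearity of the canonical tensor format: if the generating kernel admits a rank-R canonical representation, the lattice sum keeps rank R because the shifts can be accumulated inside each directional factor,

      \[
        \mathbf{P}_0 \approx \sum_{q=1}^{R} \mathbf{p}^{(1)}_q \otimes \mathbf{p}^{(2)}_q \otimes \mathbf{p}^{(3)}_q ,
        \qquad
        \sum_{l_1,l_2,l_3=1}^{L} \sum_{q=1}^{R}
            \bigl(\mathcal{S}_{l_1}\mathbf{p}^{(1)}_q\bigr) \otimes
            \bigl(\mathcal{S}_{l_2}\mathbf{p}^{(2)}_q\bigr) \otimes
            \bigl(\mathcal{S}_{l_3}\mathbf{p}^{(3)}_q\bigr)
        \;=\; \sum_{q=1}^{R}
            \Bigl(\sum_{l_1}\mathcal{S}_{l_1}\mathbf{p}^{(1)}_q\Bigr) \otimes
            \Bigl(\sum_{l_2}\mathcal{S}_{l_2}\mathbf{p}^{(2)}_q\Bigr) \otimes
            \Bigl(\sum_{l_3}\mathcal{S}_{l_3}\mathbf{p}^{(3)}_q\Bigr),
      \]

    where \(\mathcal{S}_{l_d}\) denotes the shift of a canonical vector by l_d lattice periods on the enlarged grid. The assembled directional sums are ordinary 1D vectors, which is where the O(N) storage and O(NL) assembly cost quoted above come from.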

  20. Characterization of scatter in digital mammography from physical measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leon, Stephanie M., E-mail: Stephanie.Leon@uth.tmc.edu; Wagner, Louis K.; Brateman, Libby F.

    2014-06-15

    Purpose: That scattered radiation negatively impacts the quality of medical radiographic imaging is well known. In mammography, even slight amounts of scatter reduce the high contrast required for subtle soft-tissue imaging. In current clinical mammography, image contrast is partially improved by use of an antiscatter grid. This form of scatter rejection comes with a sizeable dose penalty related to the concomitant elimination of valuable primary radiation. Digital mammography allows the use of image processing as a method of scatter correction that might avoid effects that negatively impact primary radiation, while potentially providing more contrast improvement than is currently possible with a grid. For this approach to be feasible, a detailed characterization of the scatter is needed. Previous research has modeled scatter as a constant background that serves as a DC bias across the imaging surface. The goal of this study was to provide a more substantive data set for characterizing the spatially-variant features of scatter radiation at the image detector of modern mammography units. Methods: This data set was acquired from a model of the radiation beam as a matrix of very narrow rays or pencil beams. As each pencil beam penetrates tissue, the pencil widens in a predictable manner due to the production of scatter. The resultant spreading of the pencil beam at the detector surface can be characterized by two parameters: mean radial extent (MRE) and scatter fraction (SF). The SF and MRE were calculated from measurements obtained using the beam stop method. Two digital mammography units were utilized, and the SF and MRE were found as functions of target, filter, tube potential, phantom thickness, and presence or absence of a grid. These values were then used to generate general equations allowing the SF and MRE to be calculated for any combination of the above parameters. Results: With a grid, the SF ranged from a minimum of about 0.05 to a maximum of about 0.16, and the MRE ranged from about 3 to 13 mm. Without a grid, the SF ranged from a minimum of 0.25 to a maximum of 0.52, and the MRE ranged from about 20 to 45 mm. The SF with a grid demonstrated a mild dependence on target/filter combination and kV, whereas the SF without a grid was independent of these factors. The MRE demonstrated a complex relationship as a function of kV, with notable difference among target/filter combinations. The primary source of change in both the SF and MRE was phantom thickness. Conclusions: Because breast tissue varies spatially in physical density and elemental content, the effective thickness of breast tissue varies spatially across the imaging field, resulting in a spatially-variant scatter distribution in the imaging field. The data generated in this study can be used to characterize the scatter contribution on a point-by-point basis, for a variety of different techniques.
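
    One way measured SF/MRE pairs could feed a software scatter correction is to build a spreading kernel whose integral reproduces the scatter-to-primary ratio SF/(1-SF) and whose radial fall-off is set by the MRE, then estimate the scatter image by convolution. The sketch below uses an exponential radial kernel as an assumed functional form; it illustrates how such parameters might be applied and is not the model developed in this study.

      import numpy as np
      from scipy.ndimage import convolve

      def scatter_estimate(primary, sf, mre_mm, pixel_mm):
          """Spread the primary image with an exponential radial kernel scaled by the MRE.

          sf     : scatter fraction S/(S+P), as measured with the beam stop method
          mre_mm : mean radial extent, used here as the kernel's radial scale (an assumption)
          """
          r_pix = int(2 * mre_mm / pixel_mm)                 # truncate the kernel at ~2 MRE
          y, x = np.mgrid[-r_pix:r_pix + 1, -r_pix:r_pix + 1]
          kernel = np.exp(-np.hypot(x, y) * pixel_mm / mre_mm)
          kernel /= kernel.sum()                             # unit-integral spreading kernel
          return (sf / (1.0 - sf)) * convolve(primary, kernel, mode="nearest")

      # Toy uniform primary image with grid-less values of roughly SF ~ 0.4 and MRE ~ 30 mm
      primary = np.ones((128, 128))
      scatter = scatter_estimate(primary, sf=0.4, mre_mm=30.0, pixel_mm=1.0)
      total = primary + scatter
      print(scatter.mean() / total.mean())   # recovers roughly the assumed scatter fraction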

  1. A solar thermal electric power plant for small communities

    NASA Technical Reports Server (NTRS)

    Holl, R. J.

    1979-01-01

    A solar power plant has been designed with a rating of 1000-kW electric and a 0.4 annual capacity factor. It was configured as a prototype for plants in the 1000 to 10,000-kWe size range for application to small communities or industrial users either grid-connected or isolated from a utility grid. A small central receiver was selected for solar energy collection after being compared with alternative distributed collectors. Further trade studies resulted in the selection of Hitec (heat transfer salt composed of 53 percent KNO3, 40 percent NaNO2, 7 percent NaNO3) as both the receiver coolant and the sensible heat thermal storage medium, and of the steam Rankine cycle for power conversion. The plant is configured with road-transportable units to accommodate remote sites and minimize site assembly requirements. Results of the analyses indicate that busbar energy costs are competitive with diesel-electric plants in certain situations, e.g., off-grid, remote regions with high insolation. Sensitivities of energy costs to plant power rating and system capacity factor are given.

  2. The Decay of Forced Turbulent Coflow of He II Past a Grid

    NASA Astrophysics Data System (ADS)

    Babuin, S.; Varga, E.; Skrbek, L.

    2014-04-01

    We present an experimental study of the decay of He II turbulence created mechanically, by a bellows-induced flow past a stationary grid in a 7×7 mm² superfluid wind tunnel. The temporal decay L(t) originating from various steady states of vortex line length per unit volume, L0, has been observed based on measurements of the attenuation of second sound, in the temperature range 1.17 K < T < 1.95 K. Each presented decay curve is the average of up to 150 single decay events. We find that, independently of T and L0, within seconds past the sudden stop of the drive, all the decay curves show a universal behavior lasting up to 200 s, of the form L(t) ∝ (t - t0)^(-3/2), where t0 is the virtual origin time. From this decay process we deduce the effective kinematic viscosity of turbulent He II. We compare our results with the benchmark Oregon towed-grid experiments and, despite our turbulence being non-homogeneous, find strong similarities.
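
    The universal late-time form quoted above can be recovered from measured decay curves by a simple least-squares fit of L(t) = A (t - t0)^(-3/2), which also yields the virtual origin time t0. The sketch below fits synthetic data generated for illustration; the amplitude, noise level, and time range are arbitrary and do not reproduce the experiment.

      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, A, t0):
          """Late-time decay law reported above: L(t) = A * (t - t0)**(-3/2)."""
          return A * (t - t0) ** (-1.5)

      # Synthetic vortex-line-density decay data (arbitrary units, illustration only)
      t = np.linspace(5.0, 200.0, 60)
      rng = np.random.default_rng(0)
      L_obs = decay(t, A=2.0e7, t0=1.0) * (1.0 + 0.05 * rng.standard_normal(t.size))

      (A_fit, t0_fit), _ = curve_fit(decay, t, L_obs, p0=(1.0e7, 0.0),
                                     bounds=([0.0, -50.0], [np.inf, 4.0]))
      print(f"A = {A_fit:.3g}, virtual origin t0 = {t0_fit:.2f} s")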

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepanski, R.J.; Boden, T.A.; Daniels, R.C.

    This document presents land-based monthly surface air temperature anomalies (departures from a 1951--1970 reference period mean) on a 5° latitude by 10° longitude global grid. Monthly surface air temperature anomalies (departures from a 1957--1975 reference period mean) for the Antarctic (grid points from 65°S to 85°S) are presented in a similar way as a separate data set. The data were derived primarily from the World Weather Records and the archives of the United Kingdom Meteorological Office. This long-term record of temperature anomalies may be used in studies addressing possible greenhouse-gas-induced climate changes. To date, the data have been employed in generating regional, hemispheric, and global time series for determining whether recent (i.e., post-1900) warming trends have taken place. This document also presents the monthly mean temperature records for the individual stations that were used to generate the set of gridded anomalies. The periods of record vary by station. Northern Hemisphere station data have been corrected for inhomogeneities, while Southern Hemisphere data are presented in uncorrected form. 14 refs., 11 figs., 10 tabs.

  4. Evaluation of load flow and grid expansion in a unit-commitment and expansion optimization model SciGRID International Conference on Power Grid Modelling

    NASA Astrophysics Data System (ADS)

    Senkpiel, Charlotte; Biener, Wolfgang; Shammugam, Shivenes; Längle, Sven

    2018-02-01

    Energy system models serve as a basis for long-term system planning. Joint optimization of electricity generating technologies, storage systems and the electricity grid leads to lower total system cost compared to an approach in which the grid expansion follows a given technology portfolio and its distribution. Modelers often face the problem of finding a good tradeoff between computational time and the level of detail that can be modeled. This paper analyses the differences between a transport model and a DC load flow model to evaluate the validity of using a simple but faster transport model within the system optimization model in terms of system reliability. The main findings in this paper are that a higher regional resolution of a system leads to better results compared to an approach in which regions are clustered, as more overloads can be detected. An aggregation of lines between two model regions compared to a line-sharp representation has little influence on grid expansion within a system optimizer. In a DC load flow model, overloads can be detected in a line-sharp case, which is therefore preferred. Overall, the regions that need to reinforce the grid are identified within the system optimizer. Finally, the paper recommends the use of a load-flow model to test the validity of the model results.
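
    For readers who have not seen one, a DC load flow of the kind compared against the transport model reduces to a single linear solve for the bus angles, after which line loadings follow directly. The three-bus network below is invented for illustration; it is a minimal sketch, not the model used in the paper.

      import numpy as np

      # 3-bus example: lines as (from, to, reactance in p.u.); bus 0 is the slack bus.
      lines = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]
      injections = np.array([0.0, -0.6, 0.6])   # net injection per bus (p.u.), sums to zero

      n = 3
      B = np.zeros((n, n))
      for i, j, x in lines:
          b = 1.0 / x
          B[i, i] += b; B[j, j] += b
          B[i, j] -= b; B[j, i] -= b

      # Solve B' theta = P for the non-slack buses (DC power flow), with theta_slack = 0.
      theta = np.zeros(n)
      theta[1:] = np.linalg.solve(B[1:, 1:], injections[1:])

      # Line flows f_ij = (theta_i - theta_j) / x_ij, to be checked against thermal limits.
      for i, j, x in lines:
          print(f"flow {i}->{j}: {(theta[i] - theta[j]) / x:+.3f} p.u.")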

  5. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards

    PubMed Central

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G.

    2012-01-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for integer and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards. PMID:22347787
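
    The kernel the GPU parallelizes is easy to state on the CPU: for every candidate displacement on the search grid, accumulate the summed absolute difference and keep the minimum. The sketch below is a plain integer-pixel reference in Python for that full-search SAD criterion; it is not the CUDA implementation evaluated in the paper, and the frame sizes are toy values.

      import numpy as np

      def full_search_sad(block, ref_frame, top, left, search_radius):
          """Integer-pixel full-grid search minimizing the summed absolute difference (SAD)."""
          bh, bw = block.shape
          best = (np.inf, 0, 0)
          for dy in range(-search_radius, search_radius + 1):
              for dx in range(-search_radius, search_radius + 1):
                  y, x = top + dy, left + dx
                  if y < 0 or x < 0 or y + bh > ref_frame.shape[0] or x + bw > ref_frame.shape[1]:
                      continue
                  sad = np.abs(ref_frame[y:y + bh, x:x + bw].astype(int) - block.astype(int)).sum()
                  if sad < best[0]:
                      best = (sad, dy, dx)
          return best  # (SAD, dy, dx) of the best match

      rng = np.random.default_rng(1)
      ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
      cur_block = ref[20:36, 24:40]               # a 16x16 block taken from (20, 24)
      print(full_search_sad(cur_block, ref, top=22, left=22, search_radius=4))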

  6. Compute-unified device architecture implementation of a block-matching algorithm for multiple graphical processing unit cards.

    PubMed

    Massanes, Francesc; Cadennes, Marie; Brankov, Jovan G

    2011-07-01

    In this paper we describe and evaluate a fast implementation of a classical block matching motion estimation algorithm for multiple Graphical Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) computing engine. The implemented block matching algorithm (BMA) uses the summed absolute difference (SAD) error criterion and full grid search (FS) for finding the optimal block displacement. In this evaluation we compared the execution time of a GPU and CPU implementation for images of various sizes, using integer and non-integer search grids. The results show that use of a GPU card can shorten computation time by a factor of 200 for integer and 1000 for a non-integer search grid. The additional speedup for the non-integer search grid comes from the fact that the GPU has built-in hardware for image interpolation. Further, when using multiple GPU cards, the presented evaluation shows the importance of the data splitting method across multiple cards, but an almost linear speedup with the number of cards is achievable. In addition we compared the execution time of the proposed FS GPU implementation with two existing, highly optimized non-full-grid-search CPU-based motion estimation methods, namely the implementation of the Pyramidal Lucas-Kanade optical flow algorithm in OpenCV and the Simplified Unsymmetrical multi-Hexagon search in the H.264/AVC standard. In these comparisons, the FS GPU implementation still showed modest improvement even though its computational complexity is substantially higher than that of the non-FS CPU implementations. We also demonstrated that for an image sequence of 720×480 pixels in resolution, commonly used in video surveillance, the proposed GPU implementation is sufficiently fast for real-time motion estimation at 30 frames per second using two NVIDIA C1060 Tesla GPU cards.

  7. SimulatorToFMU v0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nouidui, Thierry; Wetter, Michael

    SimulatorToFMU is a software package written in Python which allows users to export a memoryless Python-driven simulation program or script as a Functional Mock-up Unit (FMU) for model exchange or co-simulation. In CyDER (Cyber Physical Co-simulation Platform for Distributed Energy Resources in Smart Grids), SimulatorToFMU will allow exporting OPAL-RT as an FMU. This will enable OPAL-RT to be linked to CYMDIST and GridDyn FMUs through a standardized open source interface.

  8. An Application of Con-Resistant Trust to Improve the Reliability of Special Protection Systems within the Smart Grid

    DTIC Science & Technology

    2012-06-01

    in an effort to be more reliable and efficient. However, with the benefits of this new technology comes added risk. This research utilizes a con-resistant trust mechanism to improve the reliability of special protection systems within the smart grid (thesis by Crystal M. Shipman, AFIT/GCO/ENG/12-22; not subject to copyright protection in the United States).

  9. Smart nanogrid systems for disaster mitigation employing deployable renewable energy harvesting devices

    NASA Astrophysics Data System (ADS)

    Ghasemi-Nejhad, Mehrdad N.; Menendez, Michael; Minei, Brenden; Wong, Kyle; Gabrick, Caton; Thornton, Matsu; Ghorbani, Reza

    2016-04-01

    This paper explains the development of smart nanogrid systems for disaster mitigation employing deployable renewable energy harvesting, or Deployable Disaster Devices (D3), where wind turbines and solar panels are developed in modular forms that can be tied together depending on the power needed. The D3 packages/units can be used: (1) as a standalone unit in case of a disaster where no source of power is available, (2) for a remote location such as a farm, camp site, or desert, (3) for a community that converts its energy usage from fossil fuels to Renewable Energy (RE) sources, or (4) in a community system as a source of renewable energy for grid-tie or off-grid operation. In a Smart D3 system, the power generated is used (1) to meet consumer energy needs, (2) to charge storage devices (such as batteries and capacitors), (3) to deliver power to the network when the smart D3 nanogrid is tied to the network and generation exceeds consumption and storage recharge needs, or (4) to draw power from the network when the nanogrid is tied to the network and generation is less than consumption and storage recharge needs. The power generated by the Smart D3 systems is routed through high-efficiency inverters for DC-to-DC or DC-to-AC conversion for final use or grid-tie operation. The D3 delivers 220 V AC, 110 V AC, and 12 V DC, providing proper power for most electrical and electronic devices worldwide. The power supply is scalable, using a modular system that connects multiple units together; this is facilitated through devices such as external input-output (I/O) ports. The size of the system can be scaled depending on how many accessory units are connected to the I/O ports on the primary unit. The primary unit is the brain of the system, allowing for smart switching and load balancing of power input and smart regulation of power output. The Smart D3 systems are protected by ruggedized weatherproof casings allowing for operation in a variety of extreme environments, and they can be parachuted into the needed locations. The smart nanogrid systems have sensors that monitor the environmental conditions for the wind turbines and solar panels for maximum energy harvesting and that identify the appliances in use. These signals are sent to a control system, which signals the energy-harvester actuators to maximize power generation and regulates the power, i.e., either sends it to the appliances and consumer devices in use or sends it to the batteries and capacitors for energy storage when power is being generated but no consumer appliances are in use, making it a "smart nanogrid deployable renewable energy harvesting system."

  10. Application of the FUN3D Unstructured-Grid Navier-Stokes Solver to the 4th AIAA Drag Prediction Workshop Cases

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, Elizabeth M.; Hammond, Dana P.; Nielsen, Eric J.; Pirzadeh, S. Z.; Rumsey, Christopher L.

    2010-01-01

    FUN3D Navier-Stokes solutions were computed for the 4th AIAA Drag Prediction Workshop grid convergence study, downwash study, and Reynolds number study on a set of node-based mixed-element grids. All of the baseline tetrahedral grids were generated with the VGRID (developmental) advancing-layer and advancing-front grid generation software package following the gridding guidelines developed for the workshop. With maximum grid sizes exceeding 100 million nodes, the grid convergence study was particularly challenging for the node-based unstructured grid generators and flow solvers. At the time of the workshop, the super-fine grid with 105 million nodes and 600 million elements was the largest grid known to have been generated using VGRID. FUN3D Version 11.0 has a completely new pre- and post-processing paradigm that has been incorporated directly into the solver and functions entirely in a parallel, distributed memory environment. This feature allowed for practical pre-processing and solution times on the largest unstructured-grid size requested for the workshop. For the constant-lift grid convergence case, the convergence of total drag is approximately second-order on the finest three grids. The variation in total drag between the finest two grids is only 2 counts. At the finest grid levels, only small variations in wing and tail pressure distributions are seen with grid refinement. Similarly, a small wing side-of-body separation also shows little variation at the finest grid levels. Overall, the FUN3D results compare well with the structured-grid code CFL3D. The FUN3D downwash study and Reynolds number study results compare well with the range of results shown in the workshop presentations.

  11. TIGGERC: Turbomachinery Interactive Grid Generator for 2-D Grid Applications and Users Guide

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1994-01-01

    A two-dimensional multi-block grid generator has been developed for a new design and analysis system for studying multiple blade-row turbomachinery problems. TIGGERC is a mouse-driven, interactive grid generation program which can be used to modify boundary coordinates and grid packing; it generates surface grids using a hyperbolic tangent or algebraic distribution of grid points on the block boundaries. The interior points of each block grid are distributed using a transfinite interpolation approach. TIGGERC can generate a blocked axisymmetric H-grid, C-grid, I-grid or O-grid for studying turbomachinery flow problems. TIGGERC was developed for operation on Silicon Graphics workstations. Detailed discussion of the grid generation methodology, menu options, operational features and sample grid geometries is presented.
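
    The interior-point distribution mentioned above is the classical transfinite interpolation (Coons patch) formula, which blends the four boundary curves and subtracts the corner terms. The sketch below applies that textbook formula to an invented duct-like block; it is a minimal illustration, not TIGGERC's code.

      import numpy as np

      def transfinite_interpolation(bottom, top, left, right):
          """Fill block-interior grid points from four boundary curves (2-D TFI / Coons patch).

          bottom, top: arrays of shape (ni, 2); left, right: arrays of shape (nj, 2).
          The corner points of the four curves must agree.
          """
          ni, nj = bottom.shape[0], left.shape[0]
          u = np.linspace(0.0, 1.0, ni)[:, None, None]
          v = np.linspace(0.0, 1.0, nj)[None, :, None]
          return ((1 - v) * bottom[:, None, :] + v * top[:, None, :]
                  + (1 - u) * left[None, :, :] + u * right[None, :, :]
                  - (1 - u) * (1 - v) * bottom[0] - u * (1 - v) * bottom[-1]
                  - (1 - u) * v * top[0] - u * v * top[-1])      # shape (ni, nj, 2)

      # Example boundaries: a gently curved duct section (illustrative geometry only)
      ni, nj = 21, 11
      s, t = np.linspace(0.0, 1.0, ni), np.linspace(0.0, 1.0, nj)
      bottom = np.stack([s, 0.1 * np.sin(np.pi * s)], axis=1)
      top    = np.stack([s, 1.0 + 0.1 * np.sin(np.pi * s)], axis=1)
      left   = np.stack([np.zeros(nj), t], axis=1)
      right  = np.stack([np.ones(nj), t], axis=1)
      xy = transfinite_interpolation(bottom, top, left, right)
      print(xy.shape)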

  12. U.S. Electricity Grid & Markets

    EPA Pesticide Factsheets

    Renewable Energy Certificates (RECs), are tradable, non-tangible energy commodities in the United States that represent proof that 1 megawatt-hour (MWh) of electricity was generated from an eligible renewable energy resource.

  13. Evaluating Gridded Spring Indices Using the USA National Phenology Network's Observational Phenology Data

    NASA Astrophysics Data System (ADS)

    Crimmins, T. M.; Gerst, K.

    2017-12-01

    The USA National Phenology Network (USA-NPN; www.usanpn.org) produces and freely delivers daily and short-term forecast maps of spring onset dates at fine spatial scale for the conterminous United States and Alaska using the Spring Indices. These models, which represent the start of biological activity in the spring season, were developed using a long-term observational record of four species of lilacs and honeysuckles contributed by volunteer observers. Three of the four species continue to be tracked through the USA-NPN's phenology observation program, Nature's Notebook. The gridded Spring Index maps have utility for a wide range of natural resource planning and management applications, including scheduling invasive species and pest detection and control activities, anticipating allergy outbreaks and planning agricultural harvest dates. However, to date, there has not been a comprehensive assessment of how accurately the gridded Spring Index maps reflect phenological activity in lilacs and honeysuckles or other species of plants. In this study, we used observational plant phenology data maintained by the USA-NPN to evaluate how well the gridded Spring Index maps match leaf and flowering onset dates in (a) the lilac and honeysuckle species used to construct the models and (b) several species of deciduous trees. The Spring Index performed strongly at predicting the timing of leaf-out and flowering in lilacs and honeysuckles. The average error between predicted and observed date of onset ranged from 5.9 to 11.4 days. Flowering models performed slightly better than leaf-out models. The degree to which the Spring Indices predicted native deciduous tree leaf and flower phenology varied by year, species, and region. Generally, the models were better predictors of leaf and flowering onset dates in the Northeastern and Midwestern US. These results reveal when and where the Spring Indices are a meaningful proxy of phenological activity across the United States.

  14. ICT-based hydrometeorology science and natural disaster societal impact assessment

    NASA Astrophysics Data System (ADS)

    Parodi, A.; Clematis, A.; Craig, G. C.; Kranzmueller, D.

    2009-09-01

    In the Lisbon strategy, the 2005 European Council identified knowledge and innovation as the engines of sustainable growth and stated that it is essential to build a fully inclusive information society. In parallel, the World Conference on Disaster Reduction (Hyogo, 2005), defined among its thematic priorities the improvement of international cooperation in hydrometeorology research activities. This was recently confirmed at the joint press conference of the Center for Research on Epidemiology of Disasters (CRED) with the United Nations International Strategy for Disaster Reduction (UNISDR) Secretariat, held on January 2009, where it was noted that flood and storm events are among the natural disasters that most impact human life. Hydrometeorological science has made strong progress over the last decade at the European and worldwide level: new modelling tools, post processing methodologies and observational data are available. Recent European efforts in developing a platform for e-science, like EGEE (Enabling Grids for E-sciencE), SEE-GRID-SCI (South East Europe GRID e-Infrastructure for regional e-Science), and the German C3-Grid, provide an ideal basis for the sharing of complex hydrometeorological data sets and tools. Despite these early initiatives, however, the awareness of the potential of the Grid technology as a catalyst for future hydrometeorological research is still low and both the adoption and the exploitation have astonishingly been slow, not only within individual EC member states, but also on a European scale. With this background in mind, the goal of the Distributed Research Infrastructure for Hydro-Meteorology Study (DRIHMS) project is the promotion of the Grid culture within the European hydrometeorological research community through the diffusion of a Grid platform for e-collaboration in this earth science sector: the idea is to further boost European research excellence and competitiveness in the fields of hydrometeorological research and Grid research by bridging the gaps between these two scientific communities. Furthermore the project is intended to transfer the results to areas beyond the strict hydrometeorology science as a support for the assessment of the effects of extreme hydrometeorological events on society and for the development of the tools improving the adaptation and resilience of society to the challenges of climate change.

  15. Archaeological Investigations at Site 45-DO-285, Chief Joseph Dam Project, Washington.

    DTIC Science & Technology

    1984-01-01

    systematic design. Sampling strata were created by dividing the site into seven sets of grid units, each composed of 25 2 x 2-m units arranged in squares... (NISP=23) Chrysemys picta (painted turtle) -- 23 elements. Painted turtle is the only turtle currently living in the project area. Clemmys marmorata...

  16. Adaptive-optics optical coherence tomography processing using a graphics processing unit.

    PubMed

    Shafer, Brandon A; Kriske, Jeffery E; Kocaoglu, Omer P; Turner, Timothy L; Liu, Zhuolin; Lee, John Jaehwan; Miller, Donald T

    2014-01-01

    Graphics processing units are increasingly being used for scientific computing for their powerful parallel processing abilities and moderate price compared to supercomputers and computing grids. In this paper we have used a general-purpose graphics processing unit to process adaptive-optics optical coherence tomography (AOOCT) images in real time. Increasing the processing speed of AOOCT is an essential step in moving this super-high-resolution technology closer to clinical viability.

  17. 15 MW HArdware-in-the-loop Grid Simulation Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rigas, Nikolaos; Fox, John Curtiss; Collins, Randy

    2014-10-31

    The 15 MW Hardware-in-the-loop (HIL) Grid Simulator project set out to (1) design, (2) construct and (3) commission a state-of-the-art grid integration testing facility for testing of multi-megawatt devices through a 'shared facility' model open to all innovators, to promote the rapid introduction of new technology in the energy market and lower the cost of energy delivered. The 15 MW HIL Grid Simulator project now serves as the cornerstone of the Duke Energy Electric Grid Research, Innovation and Development (eGRID) Center. This project leveraged the 24 kV utility interconnection and electrical infrastructure of the US DOE EERE funded WTDTF project at the Clemson University Restoration Institute in North Charleston, SC. Additionally, the project has spurred interest from other technology sectors, including large PV inverter and energy storage testing and several leading-edge research proposals dealing with smart grid technologies, grid modernization and grid cyber security. The key components of the project are the power amplifier units, capable of providing up to 20 MW of defined power to the research grid. The project has also developed a one-of-a-kind solution to performing fault ride-through testing by combining a reactive divider network and a large power converter into a hybrid method. This unique hybrid method of performing fault ride-through analysis will allow the research team at the eGRID Center to investigate the complex differences between the alternative methods of performing fault ride-through evaluations and will ultimately further the science behind this testing. With the final goal of being able to perform HIL experiments and demonstration projects, the eGRID team undertook a significant challenge with respect to developing a control system that is capable of communicating with several different pieces of equipment with different communication protocols in real time. The eGRID team developed a custom fiber-optic network based upon FPGA hardware that allows for communication between the key real-time interfaces and reduces the latency between these interfaces to acceptable levels for HIL experiments.

  18. Benefits Analysis of Smart Grid Projects. White paper, 2014-2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marnay, Chris; Liu, Liping; Yu, JianCheng

    Smart grids are rolling out internationally, with the United States (U.S.) nearing completion of a significant USD4-plus-billion federal program funded under the American Recovery and Reinvestment Act (ARRA-2009). The emergence of smart grids is widespread across developed countries. Multiple approaches to analyzing the benefits of smart grids have emerged. The goals of this white paper are to review these approaches and analyze examples of each to highlight their differences, advantages, and disadvantages. This work was conducted under the auspices of a joint U.S.-China research effort, the Climate Change Working Group (CCWG) Implementation Plan, Smart Grid. We present comparative benefits assessments (BAs) of smart grid demonstrations in the U.S. and China along with a BA of a pilot project in Europe. In the U.S., we assess projects at two sites: (1) the University of California, Irvine campus (UCI), which consists of two distinct demonstrations: Southern California Edison’s (SCE) Irvine Smart Grid Demonstration Project (ISGD) and the UCI campus itself; and (2) the Navy Yard (TNY) area in Philadelphia, which has been repurposed as a mixed commercial-industrial, and possibly residential, development. In China, we cover several smart-grid aspects of the Sino-Singapore Tianjin Eco-city (TEC) and the Shenzhen Bay Technology and Ecology City (B-TEC). In Europe, we look at a BA of a pilot smart grid project in the Malagrotta area west of Rome, Italy, contributed by the Joint Research Centre (JRC) of the European Commission. The Irvine sub-project BAs use the U.S. Department of Energy (U.S. DOE) Smart Grid Computational Tool (SGCT), which is built on methods developed by the Electric Power Research Institute (EPRI). The TEC sub-project BAs apply Smart Grid Multi-Criteria Analysis (SG-MCA) developed by the State Grid Corporation of China (SGCC) based on the analytic hierarchy process (AHP) with fuzzy logic. The B-TEC and TNY sub-project BAs are evaluated using new approaches developed by those project teams. JRC has adopted an approach similar to EPRI’s but tailored to the Malagrotta distribution grid.

  19. Execution of a parallel edge-based Navier-Stokes solver on commodity graphics processor units

    NASA Astrophysics Data System (ADS)

    Corral, Roque; Gisbert, Fernando; Pueblas, Jesus

    2017-02-01

    The implementation of an edge-based three-dimensional Reynolds-averaged Navier-Stokes solver for unstructured grids able to run on multiple graphics processing units (GPUs) is presented. Loops over edges, which are the most time-consuming part of the solver, have been written to exploit the massively parallel capabilities of GPUs. Non-blocking communications between parallel processes and between the GPU and the central processing unit (CPU) have been used to enhance code scalability. The code is written using a mixture of C++ and OpenCL, to allow the execution of the source code on GPUs. The Message Passing Interface (MPI) library is used to allow the parallel execution of the solver on multiple GPUs. A comparative study of the solver's parallel performance is carried out using a cluster of CPUs and another of GPUs. It is shown that a single GPU is up to 64 times faster than a single CPU core. The parallel scalability of the solver is mainly degraded due to the loss of computing efficiency of the GPU when the size of the case decreases. However, for large enough grid sizes, the scalability is strongly improved. A cluster featuring commodity GPUs and a high bandwidth network is ten times less costly and consumes 33% less energy than a CPU-based cluster with an equivalent computational power.

  20. Sensitivity of Hydrologic Extremes to Spatial Resolution of Meteorological Forcings: A Case Study of the Conterminous United States

    NASA Astrophysics Data System (ADS)

    Kao, S. C.; Naz, B. S.; Gangrade, S.; Ashfaq, M.; Rastogi, D.

    2016-12-01

    The magnitude and frequency of hydroclimate extremes are projected to increase in the conterminous United States (CONUS), with significant implications for future water resource planning and flood risk management. Nevertheless, apart from the change of the natural environment, the choice of model spatial resolution could also artificially influence the features of simulated extremes. To better understand how the spatial resolution of meteorological forcings may affect hydroclimate projections, we test the runoff sensitivity using the Variable Infiltration Capacity (VIC) model that was calibrated for each CONUS 8-digit hydrologic unit (HUC8) at 1/24° (~4 km) grid resolution. The 1980-2012 gridded Daymet and PRISM meteorological observations are used to conduct the 1/24° resolution control simulation. Comparative simulations are achieved by smoothing the 1/24° forcing into 1/12° and 1/8° resolutions, which are then used to drive the VIC model for the CONUS. In addition, we also test how the simulated high and low runoff conditions would react to changes in precipitation (±10%) and temperature (+1°C). The results are further analyzed for various types of hydroclimate extremes across different watersheds in the CONUS. This work helps us understand the sensitivity of simulated runoff to different spatial resolutions of climate forcings, and also its sensitivity to different watershed sizes and characteristics of extreme events under future climate conditions.
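
    The "smoothing" of the 1/24° forcing to coarser resolutions can be mimicked by block-averaging the fine grid, which preserves means but damps the local extremes that drive simulated floods. The sketch below does this on a synthetic precipitation field; the field, grid sizes, and aggregation factors (2 for roughly 1/24°→1/12°, 3 for 1/24°→1/8°) are illustrative assumptions, not the study's regridding procedure.

      import numpy as np

      def block_average(field, factor):
          """Aggregate a 2-D field to a coarser grid by averaging factor x factor blocks."""
          ny, nx = field.shape
          assert ny % factor == 0 and nx % factor == 0, "grid must divide evenly"
          return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

      # Synthetic fine-resolution precipitation field (mm/day), 240 x 240 cells
      rng = np.random.default_rng(0)
      p_fine = rng.gamma(shape=0.8, scale=4.0, size=(240, 240))

      p_mid    = block_average(p_fine, 2)   # ~1/12 degree
      p_coarse = block_average(p_fine, 3)   # ~1/8 degree

      for name, f in [("1/24 deg", p_fine), ("1/12 deg", p_mid), ("1/8 deg", p_coarse)]:
          print(f"{name}: mean={f.mean():.2f}  99th pct={np.percentile(f, 99):.2f} mm/day")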

  1. Research on Grid Size Suitability of Gridded Population Distribution in Urban Area: A Case Study in Urban Area of Xuanzhou District, China.

    PubMed

    Dong, Nan; Yang, Xiaohuan; Cai, Hongyan; Xu, Fengjiao

    2017-01-01

    Research on grid size suitability is important for improving the accuracy of gridded population distributions and helps reveal the actual spatial distribution of population. However, little research has been done in this area to date. Many well-modeled gridded population datasets are built at a single grid scale; if the grid cell size is not appropriate, the result is spatial information loss or data redundancy. Therefore, in order to capture the desired spatial variation of population within the area of interest, it is necessary to conduct research on grid size suitability. This study summarized three expressed levels for analyzing grid size suitability: the location expressed level, the numeric information expressed level, and the spatial relationship expressed level. It also explained the reasons for choosing five indexes to explore expression suitability: the consistency measure, the shape index rate, the standard deviation of population density, the patch diversity index, and the average local variance. The suitable grid size was determined by constructing grid size-indicator value curves and a suitable grid size scheme. Results revealed that all three expressed levels are satisfied at the 10 m grid scale, and that population distribution raster data at 10 m grid size provide excellent accuracy without loss. The 10 m grid size is therefore recommended as the appropriate scale for generating a high-quality gridded population distribution in our study area. This preliminary study indicates that the five indexes are mutually consistent and are reasonable and effective for assessing grid size suitability. We also suggest choosing these five indexes, covering the three expressed levels, when carrying out research on the grid size suitability of gridded population distributions.
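
    Two of the five indexes lend themselves to a compact numerical illustration: the standard deviation of population density and the average local variance, both tracked while the same underlying population is aggregated to candidate grid sizes. The sketch below evaluates them on a synthetic population surface; the base resolution, window size, and candidate grid sizes are assumptions for illustration, not the authors' data or code.

      import numpy as np

      def aggregate(pop, factor):
          """Sum a fine-resolution population grid into coarser factor x factor cells."""
          ny, nx = pop.shape
          return pop.reshape(ny // factor, factor, nx // factor, factor).sum(axis=(1, 3))

      def average_local_variance(density, window=3):
          """Mean of the variance computed over non-overlapping window x window blocks."""
          ny, nx = density.shape
          ny, nx = ny - ny % window, nx - nx % window
          blocks = density[:ny, :nx].reshape(ny // window, window, nx // window, window)
          return blocks.var(axis=(1, 3)).mean()

      # Synthetic population counts on a 2 m base grid (illustrative only)
      rng = np.random.default_rng(42)
      base = rng.poisson(lam=0.2, size=(600, 600)).astype(float)

      for factor in (5, 10, 25, 50):              # candidate grid sizes: 10, 20, 50, 100 m
          dens = aggregate(base, factor) / (factor * 2) ** 2   # persons per square metre
          print(f"{factor * 2:>4} m grid:  sd(density)={dens.std():.2e}  "
                f"avg local variance={average_local_variance(dens):.2e}")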

  3. Trends of the electricity output, power conversion efficiency, and the grid emission factor in North Korea

    NASA Astrophysics Data System (ADS)

    Yeo, M. J.; Kim, Y. P.

    2017-12-01

    Recently, concerns about atmospheric environmental problems in North Korea (NK) have been growing. According to the World Health Organization (WHO) (2017), NK ranked first among countries in the mortality rate attributed to household and ambient air pollution in 2012. Reliable energy-related data for NK are needed to understand the characteristics of its air quality, but data from the North Korean government are limited. Nevertheless, specific energy-related data produced by NK can be found in the Project Design Documents (PDDs) of the Clean Development Mechanism (CDM) submitted to the United Nations Framework Convention on Climate Change (UNFCCC). There were 6 registered CDM projects hosted by North Korea, all developed as small hydropower plants. The CDM PDDs provide several data items for each power plant, such as the electricity output delivered to the Eastern Power Grid (EPG) or the Western Power Grid (WPG) in North Korea. In this study we (1) derived the trends of the electricity output, the 'power conversion efficiency' (which we define as the ratio of generated electricity to the primary energy supplied for power generation), and the fuel mix expressed as the grid emission factor in NK, using the data produced by NK between 2005 and 2009; (2) discussed the operating status of the thermal power plants in NK; and (3) discussed energy- and environment-related policies and priority issues in NK.
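
    As a rough formalisation of the two quantities tracked above (a sketch based only on the definitions given in this abstract and on one common CDM convention, not the authors' exact formulas), the power conversion efficiency and a fuel-based grid emission factor can be written as

      \eta_{\mathrm{conv}} \;=\; \frac{E_{\mathrm{el}}}{E_{\mathrm{primary}}},
      \qquad
      EF_{\mathrm{grid}} \;=\; \frac{\sum_{i} FC_{i}\, NCV_{i}\, EF_{\mathrm{CO_2},i}}{E_{\mathrm{el}}},

    where E_el is the electricity generated, E_primary is the primary energy supplied for power generation, and, for each fuel i, FC_i is the fuel consumption, NCV_i its net calorific value, and EF_CO2,i its CO2 emission factor, so that EF_grid is typically expressed in tCO2 per MWh.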

  4. Future-year ozone prediction for the United States using updated models and inputs.

    PubMed

    Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun

    2017-08-01

    The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) modeling emission tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show that future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations and were also important for many of the eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.
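
    In HDDM-based analyses like the one summarized above, the ozone response to an emission change is usually reconstructed from the first- and second-order sensitivity coefficients with a Taylor expansion. The sketch below illustrates that post-processing step with made-up coefficients; none of the numbers are results from this study.

      # HDDM post-processing convention: for a fractional change 'lam' in a
      # source's emissions (lam = -0.3 means a 30% cut), the ozone response is
      #     O3(lam) ~= O3_base + lam * S1 + 0.5 * lam**2 * S2,
      # where S1 and S2 are first- and second-order sensitivity coefficients.
      # All numbers below are illustrative placeholders, not study output.

      def ozone_response(o3_base_ppb, s1_ppb, s2_ppb, frac_change):
          return o3_base_ppb + frac_change * s1_ppb + 0.5 * frac_change**2 * s2_ppb

      o3_base = 72.0               # ppb, hypothetical 8-hr design value
      s1_nox, s2_nox = 18.0, -6.0  # hypothetical NOx sensitivities
      s1_voc, s2_voc = 6.0, -1.0   # hypothetical VOC sensitivities

      for cut in (0.2, 0.4):
          print(f'{int(cut*100)}% NOx cut -> {ozone_response(o3_base, s1_nox, s2_nox, -cut):.1f} ppb, '
                f'{int(cut*100)}% VOC cut -> {ozone_response(o3_base, s1_voc, s2_voc, -cut):.1f} ppb')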

  5. The smart meter and a smarter consumer: quantifying the benefits of smart meter implementation in the United States

    PubMed Central

    2012-01-01

    The electric grid in the United States has been suffering from underinvestment for years, and now faces pressing challenges from rising demand and deteriorating infrastructure. High congestion levels in transmission lines are greatly reducing the efficiency of electricity generation and distribution. In this paper, we assess the faults of the current electric grid and quantify the costs of maintaining the current system into the future. While the proposed “smart grid” contains many proposals to upgrade the ailing infrastructure of the electric grid, we argue that smart meter installation in each U.S. household will offer a significant reduction in peak demand on the current system. A smart meter is a device which monitors a household’s electricity consumption in real-time, and has the ability to display real-time pricing in each household. We conclude that these devices will provide short-term and long-term benefits to utilities and consumers. The smart meter will enable utilities to closely monitor electricity consumption in real-time, while also allowing households to adjust electricity consumption in response to real-time price adjustments. PMID:22540990

  6. The flow field investigations of no load conditions in axial flow fixed-blade turbine

    NASA Astrophysics Data System (ADS)

    Yang, J.; Gao, L.; Wang, Z. W.; Zhou, X. Z.; Xu, H. X.

    2014-03-01

    During the start-up process, strong instabilities occurred at no-load operation in a low-head axial-flow fixed-blade turbine, with strong pressure pulsation and vibration. The rated speed could not be reached until the guide vanes were opened to a certain extent, and stable operation could not be maintained at the rated speed at some heads, which had a negative impact on the grid-connected operation of the unit. To find the cause of this phenomenon, unsteady simulations of the flow field in the whole flow passage at no-load conditions were carried out to analyze the detailed flow characteristics, including the pressure pulsation and the force imposed on the runner, under three typical heads. The main hydraulic cause of the no-load instability is described. It is recommended that the power station reduce the no-load running time and move into high-load operation as soon as possible after connecting to the grid at the rated head. Following these recommendations, plant operating practice showed that the instability of the unit was greatly reduced during start-up and grid connection.

  7. EVALUATING AND USING AIR QUALITY MODELS

    EPA Science Inventory

    Grid-based models are being used to assess the magnitude of the pollution problem and to design emission control strategies to achieve compliance with the relevant air quality standards in the United States.

  8. Pore-scale discretisation limits of multiphase lattice-Boltzmann methods

    NASA Astrophysics Data System (ADS)

    Li, Z.; Middleton, J.; Varslot, T.; Sheppard, A.

    2015-12-01

    Lattice-Boltzmann (LB) modeling is a popular method for the numerical solution of the Navier-Stokes equations, and several multi-component LB models are widely used to simulate immiscible two-phase fluid flow in porous media. However, there has been relatively little study of the models' ability to make optimal use of 3D imagery by considering the minimum number of grid points that are needed to represent geometric features such as pore throats. This is of critical importance since 3D images of geological samples are a compromise between resolution and field of view. In this work we explore the discretisation limits of LB models, their behavior near these limits, and the consequences of this behavior for simulations of drainage and imbibition. We quantify the performance of two commonly used multiphase LB models, the Shan-Chen (SC) and Rothman-Keller (RK) models, in a set of tests, including simulations of bubbles in bulk fluid, on flat surfaces, confined in flat/tilted tubes, and fluid invasion into single tubes. Simple geometries like these allow better quantification of model behavior and better understanding of breakdown mechanisms. In bulk fluid, bubble radii of less than 2.5 grid units (image voxels) cause numerical instability in the SC model; the RK model is stable at radii of 2.5 units and below, but with poor agreement with Laplace's law. When confined to a flat duct, the SC model can simulate radii similar to the RK model, but with higher interface spurious currents than the RK model and some risk of instability. In tilted ducts with 'staircase' voxel-level roughness, the SC model seems to average the roughness, whereas for the RK model only the 'peaks' of the surface are relevant. Overall, our results suggest that LB models can simulate fluid capillary pressure corresponding to interfacial radii of just 1.5 grid units, with the RK model exhibiting significantly better stability.
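
    Bubble-in-bulk tests of this kind are usually checked against Laplace's law: for a 3D bubble the pressure jump is Δp = 2σ/R, so σ can be fitted from Δp versus 1/R, and deviations at small radii flag discretisation breakdown. The sketch below shows that fit with invented numbers in lattice units; it is not data from this study.

      import numpy as np

      # Laplace's law for a 3D bubble: delta_p = 2 * sigma / R.  Fit sigma from
      # the slope of delta_p against 1/R; residuals that grow at the smallest
      # radii (a few grid units) indicate the discretisation limit.
      # Illustrative values in lattice units, not simulation output.
      radii = np.array([2.0, 3.0, 5.0, 8.0, 12.0, 20.0])
      delta_p = np.array([0.021, 0.0135, 0.0081, 0.0050, 0.0034, 0.0020])

      inv_r = 1.0 / radii
      slope, intercept = np.polyfit(inv_r, delta_p, 1)
      sigma = slope / 2.0                      # delta_p = (2*sigma) * (1/R)

      residuals = delta_p - (slope * inv_r + intercept)
      print(f'fitted surface tension (lattice units): {sigma:.4f}')
      print('residuals:', np.round(residuals, 5))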

  9. Mathematical construction and perturbation analysis of Zernike discrete orthogonal points.

    PubMed

    Shi, Zhenguang; Sui, Yongxin; Liu, Zhenyu; Peng, Ji; Yang, Huaijiang

    2012-06-20

    Zernike functions are orthogonal within the unit circle, but they are not orthogonal over discrete point sets such as CCD arrays or finite element grids. This results in reconstruction errors due to the loss of orthogonality. By using the roots of Legendre polynomials, a set of points within the unit circle can be constructed so that the Zernike functions are discretely orthogonal over that set. In addition, the location tolerances of the points are studied by perturbation analysis, and the requirements on positioning precision are found not to be very strict. Computer simulations show that this approach provides very accurate wavefront reconstruction with the proposed sampling set.
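
    A minimal way to see the loss of discrete orthogonality is to form the Gram matrix of a few low-order Zernike modes over the sample points. The sketch below does this for a uniform, CCD-like Cartesian grid, where some cross terms (for example piston against defocus) do not vanish exactly; the paper's Legendre-root construction is designed to make such a matrix diagonal. The grid size and the choice of modes are illustrative assumptions.

      import numpy as np

      # Three low-order Zernike modes (unnormalised for brevity):
      # piston = 1, x-tilt = rho*cos(theta), defocus = 2*rho**2 - 1.
      def zernike_modes(x, y):
          rho = np.hypot(x, y)
          theta = np.arctan2(y, x)
          return np.stack([np.ones_like(rho),
                           rho * np.cos(theta),
                           2.0 * rho**2 - 1.0])

      def gram_matrix(x, y):
          # Discrete inner products of the modes over points inside the unit circle.
          inside = np.hypot(x, y) <= 1.0
          z = zernike_modes(x[inside], y[inside])
          return z @ z.T / inside.sum()

      # Uniform CCD-like grid of sample points
      n = 64
      axis = np.linspace(-1.0, 1.0, n)
      xx, yy = np.meshgrid(axis, axis)

      G = gram_matrix(xx.ravel(), yy.ravel())
      print(np.round(G, 4))
      print('piston/defocus cross term (nonzero -> non-orthogonal):', G[0, 2])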

  10. Optimization of a stand-alone Solar PV-Wind-DG Hybrid System for Distributed Power Generation at Sagar Island

    NASA Astrophysics Data System (ADS)

    Roy, P. C.; Majumder, A.; Chakraborty, N.

    2010-10-01

    A stand-alone solar PV and wind hybrid system for distributed power generation has been sized based on the resources available at Sagar Island, a remote area far from grid operation. Optimization and sensitivity analyses were carried out to evaluate the feasibility and size of the power generation unit, and the different configurations of the hybrid system were compared. It is estimated that the Solar PV-Wind-DG hybrid system provides a lower per-unit electricity cost, while the capital investment is lower when the system runs as Wind-DG rather than as Solar PV-DG.

  11. Patient doses from chest radiography in Victoria.

    PubMed

    Cardillo, I; Boal, T J; Einsiedel, P F

    1997-06-01

    This survey examines doses from PA chest radiography at radiology practices, private hospitals and public hospitals throughout metropolitan and country Victoria. Data were collected from 111 individual X-ray units at 86 different practices. Entrance skin doses in air were measured for exposure factors used by the centre for a 23 cm thick male chest. A CDRH LucA1 chest phantom was used when making these measurements. About half of the centres used grid technique and half used non-grid technique. There was a factor of greater than 10 difference in the entrance dose delivered between the highest dose centre and the lowest dose centre for non-grid centres; and a factor of about 5 for centres using grids. Factors contributing to the high doses recorded at some centres were identified. Guidance levels for chest radiography based on the third quartile value of the entrance doses from this survey have been recommended and compared with guidance levels recommended in other countries.

  12. Towards a PTAS for the generalized TSP in grid clusters

    NASA Astrophysics Data System (ADS)

    Khachay, Michael; Neznakhina, Katherine

    2016-10-01

    The Generalized Traveling Salesman Problem (GTSP) is a combinatorial optimization problem that asks for a minimum-cost cycle visiting exactly one point (city) from each cluster. We consider a geometric case of this problem, in which n nodes are given inside an integer grid in the Euclidean plane and each grid cell is a unit square. Clusters are induced by the cells 'populated' by nodes of the given instance. Even in this special setting, the GTSP remains intractable, since it encloses the classic Euclidean TSP on the plane. Recently, it was shown that the problem admits a (1.5+8√2+ε)-approximation algorithm whose complexity bound depends polynomially on n and k, where k is the number of clusters. In this paper, we propose two approximation algorithms for the Euclidean GTSP on grid clusters. For any fixed k, both algorithms are PTASs. The time complexity of the first remains polynomial for k = O(log n), while the second is a PTAS when k = n - O(log n).

  13. Deriving flow directions for coarse-resolution (1-4 km) gridded hydrologic modeling

    NASA Astrophysics Data System (ADS)

    Reed, Seann M.

    2003-09-01

    The National Weather Service Hydrology Laboratory (NWS-HL) is currently testing a grid-based distributed hydrologic model at a resolution (4 km) commensurate with operational, radar-based precipitation products. To implement distributed routing algorithms in this framework, a flow direction must be assigned to each model cell. A new algorithm, referred to as cell outlet tracing with an area threshold (COTAT) has been developed to automatically, accurately, and efficiently assign flow directions to any coarse-resolution grid cells using information from any higher-resolution digital elevation model. Although similar to previously published algorithms, this approach offers some advantages. Use of an area threshold allows more control over the tendency for producing diagonal flow directions. Analyses of results at different output resolutions ranging from 300 m to 4000 m indicate that it is possible to choose an area threshold that will produce minimal differences in average network flow lengths across this range of scales. Flow direction grids at a 4 km resolution have been produced for the conterminous United States.
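
    COTAT itself traces the high-resolution flow path downstream from each coarse-cell outlet until an upstream-area threshold is exceeded, and the traced exit direction becomes the coarse cell's flow direction. As background, the sketch below shows only the basic D8 steepest-descent rule used to define flow directions on a gridded DEM, the kind of fine-grid input such tracing works from; the DEM and direction encoding are toy assumptions, not the COTAT implementation.

      import numpy as np

      # Minimal D8 flow-direction assignment: each cell drains toward the
      # neighbour with the steepest downhill drop (diagonal distances scaled
      # by sqrt(2)).  Cells with no downhill neighbour are marked as pits (-1).
      D8_OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
                    (0, -1),           (0, 1),
                    (1, -1),  (1, 0),  (1, 1)]

      def d8_flow_directions(dem, cellsize=1.0):
          ny, nx = dem.shape
          fdir = np.full((ny, nx), -1, dtype=int)
          for i in range(ny):
              for j in range(nx):
                  best_slope, best_k = 0.0, -1
                  for k, (di, dj) in enumerate(D8_OFFSETS):
                      ii, jj = i + di, j + dj
                      if 0 <= ii < ny and 0 <= jj < nx:
                          dist = cellsize * (2.0 ** 0.5 if di and dj else 1.0)
                          slope = (dem[i, j] - dem[ii, jj]) / dist
                          if slope > best_slope:
                              best_slope, best_k = slope, k
                  fdir[i, j] = best_k
          return fdir

      # Tiny synthetic DEM sloping toward the lower-right corner
      dem = np.add.outer(np.arange(5, 0, -1), np.arange(5, 0, -1)).astype(float)
      print(d8_flow_directions(dem))   # index 7 = drain to the south-east neighbour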

  14. Yucca Mountain Site Characterization Project Summary of Socioeconomic Data Analysis Conducted in Support of the Radiological Monitoring Program, During FY 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    L.K. Roe

    2001-12-11

    This report is a summary of socioeconomic data analyses conducted in support of the Radiological Monitoring Program during fiscal year 2001. Socioeconomic data contained in this report include estimates for the years 2000 and 2001 of the resident population in the vicinity of Yucca Mountain. The estimates presented in this report are based on selected Census 2000 statistics, and housing and population data that were acquired and developed in accordance with LP-RS-001Q-M&O, Scientific Investigation of Economic, Demographic, and Agricultural Characteristics in the Vicinity of Yucca Mountain. The study area from which data were collected is delineated by a radial grid, consisting of 160 grid cells, that is suitable for evaluating the pathways and potential impacts of a release of radioactive materials to the environment within a distance of 84 kilometers from Yucca Mountain. Data are presented in a tabular format by the county, state, area, and grid cell in which housing units, households, and resident population are located. Also included is a visual representation of the distribution of the 2000 residential populations within the study area, showing Census 2000 geography, county boundaries, and taxing district boundaries for selected communities.

  15. Grid vs Mesh: The case of Hyper-resolution Modeling in Urban Landscapes

    NASA Astrophysics Data System (ADS)

    Grimley, L. E.; Tijerina, D.; Khanam, M.; Tiernan, E. D.; Frazier, N.; Ogden, F. L.; Steinke, R. C.; Maxwell, R. M.; Cohen, S.

    2017-12-01

    In this study, the relative performance of ADHydro and GSSHA was analyzed for a small and large rainfall event in an urban watershed called Dead Run near Baltimore, Maryland. ADHydro is a physics-based, distributed, hydrologic model that uses an unstructured mesh and operates in a high performance computing environment. The Gridded Surface/Subsurface Hydrological Analysis (GSSHA) model, which is maintained by the US Army Corps of Engineers, is a physics-based, distributed, hydrologic model that incorporates subsurface utilities and uses a structured mesh. A large portion of the work served as alpha-testing of ADHydro, which is under development by the CI-WATER modeling team at the University of Wyoming. Triangular meshes at variable resolutions were created to assess the sensitivity of ADHydro to changes in resolution and test the model's ability to handle a complicated urban routing network with structures present. ADHydro was compared with GSSHA which does not have the flexibility of an unstructured grid but does incorporate the storm drainage network. The modelled runoff hydrographs were compared to observed United States Geological Survey (USGS) stream gage data. The objective of this study was to analyze the effects of mesh type and resolution using ADHydro and GSSHA in simulations of an urban watershed.

  16. Digital-map grids of mean-annual precipitation for 1961-90, and generalized skew coefficients of annual maximum streamflow for Oklahoma

    USGS Publications Warehouse

    Rea, A.H.; Tortorelli, R.L.

    1997-01-01

    This digital report contains two digital-map grids of data that were used to develop peak-flow regression equations in Tortorelli, 1997, 'Techniques for estimating peak-streamflow frequency for unregulated streams and streams regulated by small floodwater retarding structures in Oklahoma,' U.S. Geological Survey Water-Resources Investigations Report 97-4202. One data set is a grid of mean annual precipitation, in inches, based on the period 1961-90, for Oklahoma. The data set was derived from the PRISM (Parameter-elevation Regressions on Independent Slopes Model) mean annual precipitation grid for the United States, developed by Daly, Neilson, and Phillips (1994, 'A statistical-topographic model for mapping climatological precipitation over mountainous terrain:' Journal of Applied Meteorology, v. 33, no. 2, p. 140-158). The second data set is a grid of generalized skew coefficients of logarithms of annual maximum streamflow for Oklahoma streams less than or equal to 2,510 square miles in drainage area. This grid of skew coefficients is taken from figure 11 of Tortorelli and Bergman, 1985, 'Techniques for estimating flood peak discharges for unregulated streams and streams regulated by small floodwater retarding structures in Oklahoma,' U.S. Geological Survey Water-Resources Investigations Report 84-4358. To save disk space, the skew coefficient values have been multiplied by 100 and rounded to integers with two significant digits. The data sets are provided in an ASCII grid format.
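
    The note above about the stored values (skew coefficients multiplied by 100 and rounded to integers) implies a small unscaling step when the grid is read. A hypothetical reader is sketched below; the six-line header layout and nodata value are assumptions in the style of common ASCII grid formats, not a documented description of these particular files.

      import numpy as np

      def read_skew_grid(path, header_rows=6, nodata=-9999):
          # Hypothetical reader: skip an assumed header, mask nodata cells,
          # then divide by 100 to recover generalized skew coefficients,
          # since the stored integers are 100 x the coefficient values.
          grid = np.loadtxt(path, skiprows=header_rows)
          grid = np.where(grid == nodata, np.nan, grid)
          return grid / 100.0

      # skew = read_skew_grid('oklahoma_skew.asc')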

  17. Annual Fossil-Fuel CO2 Emissions: Mass of Emissions Gridded by One Degree Latitude by One Degree Longitude (NDP-058.2010)

    DOE Data Explorer

    Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)

    2010-01-01

    The 2010 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2007. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2007 were published earlier (Boden et al. 2010). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.
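
    The first-order gridding procedure described above can be summarized as spreading each national total over that country's 1° cells in proportion to a fixed population field. The sketch below illustrates the idea on a toy grid; the country layout, population counts, and emission totals are invented, and the actual NDP-058 processing of course involves many more details.

      import numpy as np

      def grid_national_emissions(national_total, country_mask, population):
          # Spread one country's annual emission total over its grid cells in
          # proportion to the (fixed) population in each cell.
          pop_in_country = np.where(country_mask, population, 0.0)
          total_pop = pop_in_country.sum()
          if total_pop == 0:
              return np.zeros_like(population, dtype=float)
          return national_total * pop_in_country / total_pop

      # Toy example: two 'countries' on a 4 x 4 grid (totals in million tonnes C/yr)
      country_id = np.array([[1, 1, 2, 2],
                             [1, 1, 2, 2],
                             [1, 2, 2, 2],
                             [1, 2, 2, 2]])
      population = np.array([[9, 1, 4, 4],
                             [5, 5, 2, 2],
                             [3, 1, 1, 1],
                             [2, 6, 3, 3]], dtype=float)

      emissions_grid = (grid_national_emissions(120.0, country_id == 1, population) +
                        grid_national_emissions(40.0, country_id == 2, population))
      print(emissions_grid.round(2))
      print(emissions_grid.sum())   # recovers 160.0: national totals are conserved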

  18. Annual Fossil-Fuel CO2 Emissions: Mass of Emissions Gridded by One Degree Latitude by One Degree Longitude (NDP-058.2013)

    DOE Data Explorer

    Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)

    2013-01-01

    The 2013 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2010. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2010 were published earlier (Boden et al. 2013). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.

  19. Annual Fossil-Fuel CO2 Emissions: Mass of Emissions Gridded by One Degree Latitude by One Degree Longitude (NDP-058.2015)

    DOE Data Explorer

    Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)

    2015-01-01

    The 2015 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2011. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2011 were published earlier (Boden et al. 2015). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.

  20. Annual Fossil-Fuel CO2 Emissions: Mass of Emissions Gridded by One Degree Latitude by One Degree Longitude (NDP-058.2011)

    DOE Data Explorer

    Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)

    2011-01-01

    The 2011 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2008. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2008 were published earlier (Boden et al. 2011). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.

  1. Annual Fossil-Fuel CO2 Emissions: Mass of Emissions Gridded by One Degree Latitude by One Degree Longitude (NDP-058.2012)

    DOE Data Explorer

    Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Appalachian State University, Boone, NC (USA)

    2012-01-01

    The 2012 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2009. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2009 were published earlier (Boden et al. 2012). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.

  2. Annual Fossil-Fuel CO2 Emissions: Mass of Emissions Gridded by One Degree Latitude by One Degree Longitude (1751-2006) (NDP-058.2009)

    DOE Data Explorer

    Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Marland, G. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)

    2009-01-01

    The 2009 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2006. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2006 were published earlier (Boden et al. 2009). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.

  3. Annual Fossil-Fuel CO2 Emissions: Mass of Emissions Gridded by One Degree Latitude by One Degree Longitude (NDP-058.2016)

    DOE Data Explorer

    Andres, R. J. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA); Boden, T. A. [Carbon Dioxide Information Analysis Center (CDIAC), Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (USA)

    2016-01-01

    The 2016 version of this database presents a time series recording 1° latitude by 1° longitude CO2 emissions in units of million metric tons of carbon per year from anthropogenic sources for 1751-2013. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional, and national annual estimates for 1751 through 2013 were published earlier (Boden et al. 2016). Those national, annual CO2 emission estimates were based on statistics about fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption, and trade data, using the methods of Marland and Rotty (1984). The national annual estimates were combined with gridded 1° data on political units and 1984 human populations to create the new gridded CO2 emission time series. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mixes are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in fossil-fuel CO2 emissions over time are apparent for most areas.

  4. Geographic patterns of carbon dioxide emissions from fossil-fuel burning, hydraulic cement production, and gas flaring on a one degree by one degree grid cell basis: 1950 to 1990

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brenkert, A.L.; Andres, R.J.; Marland, G.

    1997-03-01

    Data sets of one degree latitude by one degree longitude carbon dioxide (CO2) emissions in units of thousand metric tons of carbon (C) per year from anthropogenic sources have been produced for 1950, 1960, 1970, 1980 and 1990. Detailed geographic information on CO2 emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional and national annual estimates for 1950 through 1992 were published previously. Those national, annual CO2 emission estimates were based on statistics on fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption and trade data, using the methods of Marland and Rotty. The national annual estimates were combined with gridded one-degree data on political units and 1984 human populations to create the new gridded CO2 emission data sets. The same population distribution was used for each of the years as proxy for the emission distribution within each country. The implied assumption for that procedure was that per capita energy use and fuel mix is uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in emissions over time are apparent for most areas.

  5. Grid Integration Studies: Advancing Clean Energy Planning and Deployment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Jessica; Chernyakhovskiy, Ilya

    2016-07-01

    Integrating significant variable renewable energy (VRE) into the grid requires an evolution in power system planning and operation. To plan for this evolution, power system stakeholders can undertake grid integration studies. This Greening the Grid document reviews grid integration studies, common elements, questions, and guidance for system planners.

  6. GPS Spoofing Attack Characterization and Detection in Smart Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Rick S.; Pradhan, Parth; Nagananda, Kyatsandra

    The problem of global positioning system (GPS) spoofing attacks on smart grids endowed with phasor measurement units (PMUs) is addressed, taking into account the dynamical behavior of the states of the system. First, it is shown how GPS spoofing introduces a timing synchronization error in the phasor readings recorded by the PMUs and alters the measurement matrix of the dynamical model. Then, a generalized likelihood ratio-based hypotheses testing procedure is devised to detect changes in the measurement matrix when the system is subjected to a spoofing attack. Monte Carlo simulations are performed on the 9-bus, 3-machine test grid to demonstrate the implication of the spoofing attack on dynamic state estimation and to analyze the performance of the proposed hypotheses test.
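
    The paper's detector is a generalized likelihood ratio test on the measurement matrix of the dynamic model. As a much-simplified, static stand-in, the sketch below uses the weighted least-squares residual of PMU-style measurements against the nominal measurement matrix, which is chi-square distributed in the absence of spoofing; the dimensions, matrices, and the way the spoofing perturbation is injected are all hypothetical.

      import numpy as np
      from scipy.stats import chi2

      # Static simplification: measurements z = H x + w with w ~ N(0, R).
      # Testing 'H is nominal' against an unconstrained change reduces to the
      # weighted residual statistic, chi-square with (m - n) degrees of freedom
      # under the no-spoofing hypothesis.  All numbers here are illustrative.
      rng = np.random.default_rng(42)
      m, n = 12, 4                                   # measurements, states
      H = rng.normal(size=(m, n))
      R = 0.01 * np.eye(m)
      x_true = rng.normal(size=n)

      def residual_statistic(z, H, R):
          Rinv = np.linalg.inv(R)
          x_hat = np.linalg.solve(H.T @ Rinv @ H, H.T @ Rinv @ z)
          r = z - H @ x_hat
          return float(r @ Rinv @ r)

      threshold = chi2.ppf(0.99, df=m - n)           # 1% false-alarm rate

      z_clean = H @ x_true + rng.normal(scale=0.1, size=m)
      H_spoofed = H.copy()
      H_spoofed[:3] *= np.cos(0.8)                   # crude stand-in for a timing-offset effect
      z_spoofed = H_spoofed @ x_true + rng.normal(scale=0.1, size=m)

      for label, z in (('clean', z_clean), ('spoofed', z_spoofed)):
          stat = residual_statistic(z, H, R)
          print(f'{label:8s} statistic = {stat:8.2f}   threshold = {threshold:.2f}')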

  7. Advanced batteries for load-leveling - The utility perspective on system integration

    NASA Astrophysics Data System (ADS)

    Delmonaco, J. L.; Lewis, P. A.; Roman, H. T.; Zemkoski, J.

    1982-09-01

    Rechargeable battery systems for applications as utility load-leveling units, particularly in urban areas, are discussed. Particular attention is given to advanced lead-acid, zinc-halogen, sodium-sulfur, and lithium-iron sulfide battery systems, noting that battery charging can proceed during light-load hours and requires no fuel on-site. Each battery site will have a master site controller and related subsystems necessary for ensuring grid-quality power output from the batteries and charging when feasible. The actual interconnection with the grid is envisioned at the transmission, subtransmission, or distribution level, similar to cogeneration or wind-derived energy interconnections. Analyses are presented of factors influencing the planning economics, impacts on existing grids through solid-state converters, and operational and maintenance considerations. Finally, research directions toward large-scale battery implementation are outlined.

  8. Empirical analyses of plant-climate relationships for the western United States

    Treesearch

    Gerald E. Rehfeldt; Nicholas L. Crookston; Marcus V. Warwell; Jeffrey S. Evans

    2006-01-01

    The Random Forests multiple-regression tree was used to model climate profiles of 25 biotic communities of the western United States and nine of their constituent species. Analyses of the communities were based on a gridded sample of ca. 140,000 points, while those for the species used presence-absence data from ca. 120,000 locations. Independent variables included 35...

  9. Solar Market Research and Analysis Publications | Solar Research | NREL

    Science.gov Websites

    lifespan, and saving costs. The report is an expanded edition of an interim report published in 2015. Cost achieving the SETO 2030 residential PV cost target of $0.05 /kWh by identifying and quantifying cost reduction opportunities. Distribution Grid Integration Unit Cost Database: This database contains unit cost

  10. Sub-grid drag models for horizontal cylinder arrays immersed in gas-particle multiphase flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Avik; Sun, Xin; Sundaresan, Sankaran

    2013-09-08

    Immersed cylindrical tube arrays often are used as heat exchangers in gas-particle fluidized beds. In multiphase computational fluid dynamics (CFD) simulations of large fluidized beds, explicit resolution of small cylinders is computationally infeasible. Instead, the cylinder array may be viewed as an effective porous medium in coarse-grid simulations. The cylinders' influence on the suspension as a whole, manifested as an effective drag force, and on the relative motion between gas and particles, manifested as a correction to the gas-particle drag, must be modeled via suitable sub-grid constitutive relationships. In this work, highly resolved unit-cell simulations of flow around an array of horizontal cylinders, arranged in a staggered configuration, are filtered to construct sub-grid, or 'filtered', drag models, which can be implemented in coarse-grid simulations. The force on the suspension exerted by the cylinders is comprised of, as expected, a buoyancy contribution, and a kinetic component analogous to fluid drag on a single cylinder. Furthermore, the introduction of tubes also is found to enhance segregation at the scale of the cylinder size, which, in turn, leads to a reduction in the filtered gas-particle drag.

  11. Ground-Water Quality Data in the Upper Santa Ana Watershed Study Unit, November 2006-March 2007: Results from the California GAMA Program

    USGS Publications Warehouse

    Kent, Robert; Belitz, Kenneth

    2009-01-01

    Ground-water quality in the approximately 1,000-square-mile Upper Santa Ana Watershed study unit (USAW) was investigated from November 2006 through March 2007 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Upper Santa Ana Watershed study was designed to provide a spatially unbiased assessment of raw ground-water quality within USAW, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 99 wells in Riverside and San Bernardino Counties. Ninety of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). Nine wells were selected to provide additional understanding of specific water-quality issues identified within the basin (understanding wells). The ground-water samples were analyzed for a large number of organic constituents (volatile organic compounds [VOCs], pesticides and pesticide degradates, pharmaceutical compounds, and potential wastewater-indicator compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], 1,4-dioxane, and 1,2,3-trichloropropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14, and stable isotopes of hydrogen and oxygen in water) and dissolved noble gases also were measured to help identify sources and ages of the sampled ground water. Dissolved gases, and isotopes of nitrogen gas and of dissolved nitrate also were measured in order to investigate the sources and occurrence of nitrate in the study unit. In total, nearly 400 constituents and water-quality indicators were investigated for this study. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, and (or) blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. Volatile organic compounds (VOCs) were detected in more than 80 percent of USAW grid wells. Most VOCs detected were at concentrations far less than thresholds established for drinking water to protect human health; however, six wells had VOC concentrations above health-based thresholds. Twenty-four of the 85 VOCs investigated were detected in the study unit; 11 were detected in more than 10 percent of the wells. The VOCs detected above health-based thresholds in at least one well were dibromochloropropane (DBCP), tetrachloroethene (PCE), trichloroethene (TCE), carbon tetrachloride, and 1,1-dichloroethene.
Pesticide compounds were detected in more than 75 percent of the grid wells. However, of the 134 different pesticide compounds investigated, 13 were detected at concentrations greater than their respective long-term method detection limits, and only 7 compounds (all herbicides or herbicide degradates) were detected in more than 10 percent of the wells. No pesticide compound was detected above its health-based threshold, although thresholds exist for fewer than half of the pesticide compounds investigat

  12. Ground-Water Quality Data in the Middle Sacramento Valley Study Unit, 2006 - Results from the California GAMA Program

    USGS Publications Warehouse

    Schmitt, Stephen J.; Fram, Miranda S.; Milby Dawson, Barbara J.; Belitz, Kenneth

    2008-01-01

    Ground-water quality in the approximately 3,340 square mile Middle Sacramento Valley study unit (MSACV) was investigated from June through September, 2006, as part of the California Groundwater Ambient Monitoring and Assessment (GAMA) program. The GAMA Priority Basin Assessment project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The Middle Sacramento Valley study was designed to provide a spatially unbiased assessment of raw ground-water quality within MSACV, as well as a statistically consistent basis for comparing water quality throughout California. Samples were collected from 108 wells in Butte, Colusa, Glenn, Sutter, Tehama, Yolo, and Yuba Counties. Seventy-one wells were selected using a randomized grid-based method to provide statistical representation of the study unit (grid wells), 15 wells were selected to evaluate changes in water chemistry along ground-water flow paths (flow-path wells), and 22 were shallow monitoring wells selected to assess the effects of rice agriculture, a major land use in the study unit, on ground-water chemistry (RICE wells). The ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOCs], gasoline oxygenates and degradates, pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate, N-nitrosodimethylamine [NDMA], and 1,2,3-trichloropropane [1,2,3-TCP]), inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, and carbon-14, and stable isotopes of hydrogen, oxygen, nitrogen, and carbon), and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. Quality-control samples (blanks, replicates, laboratory matrix spikes) were collected at approximately 10 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a noticeable source of bias in the data for the ground-water samples. Differences between replicate samples were within acceptable ranges, indicating acceptably low variability. Matrix spike recoveries were within acceptable ranges for most constituents. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, or blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is served to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and are not indicative of compliance or noncompliance with regulatory thresholds. 
Most constituents that were detected in ground-water samples were found at concentrations below drinking-water thresholds. VOCs were detected in less than one-third and pesticides and pesticide degradates in just over one-half of the grid wells, and all detections of these constituents in samples from all wells of the MSACV study unit were below health-based thresholds. All detections of trace elements in samples from MSACV grid wells were below health-based thresholds, with the exceptions of arsenic and boro

  13. Importance of Winds and Soil Moistures to the US Summertime Drought of 1988: A GCM Simulation Study

    NASA Technical Reports Server (NTRS)

    Mocko, David M.; Sud, Y. C.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    The climate version of NASA's GEOS 2 GCM did not simulate a realistic 1988 summertime drought in the central United States (Mocko et al., 1999). Despite several new upgrades to the model's parameterizations, as well as finer grid spacing from 4x5 degrees to 2x2.5 degrees, no significant improvements were noted in the model's simulation of the U.S. drought.

  14. Deployed Communications in an Austere Environment: A Delphi Study

    DTIC Science & Technology

    2013-12-01

    gateways to access the Global Information Grid (GIG) will escalate dramatically. The ability simply to “deploy” a unit similar to the RF-SATCOM network...experts had divergent views on how deployed communications systems would link back to the GIG. The scenario uses both projected technologies. First...the self-configuring RF-SATCOM network link acts as a gateway to the GIG, providing wireless RF connectivity to authorized devices within the area

  15. Effective leadership within hospice and specialist palliative care units.

    PubMed

    Barker, L

    2000-01-01

    In this study the Repertory Grid interview technique was used to investigate constructs of leadership held by a group of male and female senior managers from within hospice and Specialist Palliative Care Units (SPCUs) in the UK. The themes that emerged were compared with those from existing research models of leadership. Men and women in these roles describe different constructs of effective leadership. The women's constructs that emerged were predominantly transformational, whilst the men's were predominantly transactional. Themes were also identified in this study that differed from those in previous studies, namely political and environmental awareness and the valuing of others' views regardless of their status. These themes do not feature highly in other research, and may be a response to the environment within which hospice and specialist palliative care function.

  16. Evidence for the Buried "Pre-Noachian" Crust Pre-Dating the Oldest Observed Surface Units on Mars

    NASA Technical Reports Server (NTRS)

    Frey, H. V.; Frey, E. L.; Hartmann, W. K.; Tanaka, K. L. T.

    2003-01-01

    MOLA gridded data shows clear evidence for Quasi-Circular Depressions not visible on images in Early Noachian (EN) terrain units on Mars. We suggest these are buried impact basins that pre-date the superimposed craters whose high density makes these EN units the oldest visible at the surface of Mars. There is crust older than the oldest visible terrain units on Mars, and these EN units cannot date from 4.6 BYA. These and other Noachian units have similar total (visible + buried) crater retention ages, suggesting a common "pre-Noachian" crustal age OR crater saturation beyond which we cannot see.

  17. Evidence for Buried "Pre-Noachian" Crust Pre-Dating the Oldest Observed Surface Units on Mars

    NASA Technical Reports Server (NTRS)

    Frey, H. V.; Frey, E. L.; Hartmann, W. K.; Tanaka, K. L. T.

    2004-01-01

    MOLA gridded data shows clear evidence for Quasi-Circular Depressions not visible on images in Early Noachian (EN) terrain units on Mars. We suggest these are buried impact basins that pre-date the superimposed craters whose high density makes these EN units the oldest visible at the surface of Mars. There is crust older than the oldest visible terrain units on Mars, and these EN units cannot date from 4.6 BYA. These and other Noachian units have similar total (visible + buried) crater retention ages, suggesting a common "pre-Noachian" crustal age OR crater saturation beyond which we cannot see.

  18. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.

    2011-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Convergence rates of multigrid cycles are verified with quantitative analysis methods in which parts of the two-grid cycle are replaced by their idealized counterparts.
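
    As background for the coarse-grid operators compared above, the sketch below builds a Galerkin coarse-grid operator A_c = R A P for a 1D Laplacian with piecewise-constant (agglomeration-style) prolongation. It is a toy illustration of that construction only; the problem size, the pairing of cells, and the averaging restriction are arbitrary choices, not the paper's agglomeration scheme.

      import numpy as np

      # Galerkin coarse-grid construction A_c = R A P with piecewise-constant
      # prolongation: each coarse cell agglomerates two neighbouring fine cells.
      n = 8
      A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian stencil

      P = np.zeros((n, n // 2))
      for c in range(n // 2):
          P[2 * c, c] = 1.0        # the two fine cells of agglomerate c
          P[2 * c + 1, c] = 1.0    # get the same coarse-grid value
      R = P.T / 2.0                # averaging restriction

      A_coarse = R @ A @ P
      print(A_coarse)              # again a tridiagonal, Laplacian-like operator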

  19. Energy efficiency design strategies for buildings with grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Yimprayoon, Chanikarn

    The building sector in the United States represents more than 40% of the nation's energy consumption. Energy efficiency design strategies and renewable energy are key to reducing building energy demand. Grid-connected photovoltaic (PV) systems installed on buildings have been the fastest growing market in the PV industry. This growth poses challenges for buildings qualified to serve in this market sector. Electricity produced from solar energy is intermittent. Matching building electricity demand with PV output can increase PV system efficiency. Through experimental methods and case studies, computer simulations were used to investigate the priorities of energy efficiency design strategies that decrease electricity demand while producing load profiles that match the distinctive output profiles of PV. Three building types (residential, commercial, and industrial) of varying sizes and use patterns located in 16 climate zones were modeled according to ASHRAE 90.1 requirements. Buildings were analyzed individually and as a group. Complying with ASHRAE energy standards can reduce annual electricity consumption by at least 13%. With energy efficiency design strategies, the reduction could reach up to 65%, making it possible for PV systems to meet the reduced demands in residential and industrial buildings. The peak electricity demand reduction could be up to 71% with the integration of strategies and PV. Reducing lighting power density was the best single strategy with high overall performance. Combined strategies, such as zero-energy building design, are also recommended. Electricity consumption reductions are the sum of the reductions from the strategies and the PV output; however, peak electricity reductions were less than their sum because they reduced the peak at different times. The potential for grid stress reduction is significant. Investment incentives from government and utilities are necessary, and PV system sizes for net-metering interconnection should not be limited by the legislation existing in some states. Data from this study provide insight into the impacts of applying energy efficiency design strategies in buildings with grid-connected PV systems. With the current transition from traditional electric grids to future smart grids, this information, together with a large database of various building conditions, allows the investigations needed by governments or utilities in large-scale communities for implementing various measures and policies.

  20. SU-D-210-07: The Dependence On Acoustic Velocity of Medium On the Needle Template and Electronic Grid Alignment in Ultrasound QA for Prostate Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapoor, P; Kapoor, R; Curran, B

    Purpose: To analyze the impact of the acoustic velocity (AV) of two different media (water and milk) on the needle template/electronic grid alignment test. Water, easily available, makes a good material for testing the alignment of the template and grid, although water’s AV (1498 m/s at 25°C) is significantly different from that of tissue (1540 m/s). Milk, with an AV much closer (1548 m/s) to that of prostate tissue, may be a good substitute for water in ultrasound quality assurance testing. Methods: Tests were performed using a Hitachi ultrasound unit with a mechanical arrangement designed to position needles parallel to the transducer. In this work, two materials – distilled water and homogenized whole milk (AVs of 1498 and 1548 m/s at 25°C) – were used in a phantom to test ultrasound needle/grid alignment. Images were obtained with both materials and analyzed for placement accuracy. Results: The needle template/electronic grid alignment tests showed displacement errors between measured and calculated values. The measurements showed displacements of 2.3 mm (water) and 0.4 mm (milk) at a depth of 7 cm, and 1.6 mm (water) and 0.3 mm (milk) at a depth of 5 cm, from true needle positions. The calculated results showed displacements of 2.36 mm (water) and 0.435 mm (milk) at 7 cm, and 1.66 mm (water) and 0.31 mm (milk) at 5 cm. The displacements in the X and Y directions were also calculated. At depths of 7 cm and 5 cm, the (ΔX, ΔY) displacements in water were (0.829 mm, 2.21 mm) and (0.273 mm, 1.634 mm), and for milk were (0.15 mm, 0.44 mm) and (0.05 mm, 0.302 mm), respectively. Conclusion: The measured and calculated values were in good agreement for all tests. They show that milk provides superior results when performing needle template and electronic grid alignment tests for ultrasound units used in prostate brachytherapy.
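
    The displacement reported above can be approximated from the ratio of the scanner's assumed velocity to the medium's actual velocity: the unit converts echo time of flight to distance using 1540 m/s, so a target at true depth d appears at roughly d x 1540/AV. The short sketch below applies this first-order relation to the depths and media in the abstract; it is an illustrative calculation, not the authors' exact method.

      # Illustrative estimate of apparent needle displacement when the scanner
      # assumes 1540 m/s but the coupling medium has a different acoustic velocity.
      # This is a simplified first-order model, not the study's exact calculation.
      C_ASSUMED = 1540.0  # m/s, velocity assumed by the ultrasound unit

      def apparent_displacement_mm(true_depth_cm, c_medium):
          """Return the apparent shift (mm) of a target at true_depth_cm."""
          true_depth_mm = true_depth_cm * 10.0
          # Time of flight is set by the medium; the scanner converts it to distance
          # with the assumed velocity, so apparent depth scales as c_assumed / c_medium.
          apparent_depth_mm = true_depth_mm * C_ASSUMED / c_medium
          return apparent_depth_mm - true_depth_mm

      for medium, c in [("water", 1498.0), ("milk", 1548.0)]:
          for depth in (7.0, 5.0):
              print(f"{medium}, {depth} cm: {apparent_displacement_mm(depth, c):+.2f} mm")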

  1. Band gaps in grid structure with periodic local resonator subsystems

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaoqin; Wang, Jun; Wang, Rongqi; Lin, Jieqiong

    2017-09-01

    The grid structure is widely used in architectural and mechanical fields for its high strength and material efficiency. This paper presents a study of an acoustic metamaterial beam (AMB) based on the normal square grid structure with local resonators, offering both flexible band gaps and high static stiffness and therefore high application potential in vibration control. Firstly, the AMB with a variable cross-section frame is analytically modeled by a beam-spring-mass model derived using the extended Hamilton’s principle and Bloch’s theorem. This model is used to compute the dispersion relation of the designed AMB in terms of the design parameters, and the influences of the relevant parameters on the band gaps are discussed. Then a two-dimensional finite element model of the AMB is built and analyzed in COMSOL Multiphysics; both the dispersion properties of the unit cell and the wave attenuation in a finite AMB show good agreement with the derived model. The effects of the design parameters of the two-dimensional model on the band gaps are further examined, and the obtained results verify the analytical model. Finally, the wave attenuation performances of three-dimensional AMBs with equal and unequal thickness are presented and discussed.
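
    As a rough illustration of how a local resonator opens a band gap, the sketch below evaluates the textbook dispersion relation of a one-dimensional mass-in-mass lattice, cos(q*a) = 1 - m_eff(w) * w^2 / (2*k1) with m_eff(w) = m1 + m2 * wr^2 / (wr^2 - w^2), and reports the first frequency band in which no real wavenumber exists. This is a simplified analogue with made-up parameters, not the paper's beam-spring-mass model of the AMB.

      # Simplified 1D mass-in-mass lattice used to illustrate how a local resonator
      # opens a band gap; an analogue only, not the AMB model of the paper.
      import numpy as np

      k1, m1 = 1.0e6, 1.0     # host spring stiffness (N/m) and host mass (kg), illustrative
      k2, m2 = 2.0e5, 0.5     # resonator stiffness and mass, illustrative
      w_r = np.sqrt(k2 / m2)  # resonator natural frequency (rad/s)

      omega = np.linspace(1.0, 3000.0, 5000)
      m_eff = m1 + m2 * w_r**2 / (w_r**2 - omega**2)   # frequency-dependent effective mass
      rhs = 1.0 - m_eff * omega**2 / (2.0 * k1)        # cos(q*a) from the dispersion relation

      in_gap = np.abs(rhs) > 1.0     # |cos(q*a)| > 1 -> no real wavenumber -> stop band
      idx = np.flatnonzero(in_gap)
      if idx.size:
          start = stop = idx[0]
          while stop + 1 < omega.size and in_gap[stop + 1]:
              stop += 1
          print(f"first stop band: {omega[start]/(2*np.pi):.1f} - {omega[stop]/(2*np.pi):.1f} Hz")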

  2. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    DOE PAGES

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.; ...

    2017-08-19

    Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy’s Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to the 1-cm scale. Separate models were created for two areas, herein referred to as the test table grid region and the near-field grid region. The test table grid includes the region within ~40 m of surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid covered a broader area, 90–130 m from surface ground zero, with photographs collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm resolution in the topographic data needed to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological change detection. The near-field grid data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including uplift and subsidence, rock fall, and slope sloughing. This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.
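
    A minimal sketch of the change-detection step is differencing co-registered pre- and post-shot DEMs and thresholding the result at the stated ~3 cm level. The surfaces below are synthetic stand-ins generated in code; the grid size, noise level, and uplift pattern are illustrative only.

      # Sketch of DEM differencing for surface-change mapping, assuming the pre- and
      # post-shot DEMs are already co-registered on the same grid. All values synthetic.
      import numpy as np

      rng = np.random.default_rng(42)
      x, y = np.meshgrid(np.linspace(-40, 40, 800), np.linspace(-40, 40, 800))
      dem_pre = 1200.0 + 0.01 * x + rng.normal(0.0, 0.004, x.shape)     # synthetic pre-shot surface (m)
      uplift_signal = 0.06 * np.exp(-(x**2 + y**2) / (2 * 15.0**2))     # dome of uplift over the shot point
      dem_post = dem_pre + uplift_signal + rng.normal(0.0, 0.004, x.shape)

      dz = dem_post - dem_pre          # positive = uplift, negative = subsidence
      threshold = 0.03                 # ignore changes below ~3 cm (resolution floor)
      print(f"cells with uplift > 3 cm: {(dz > threshold).sum()}")
      print(f"cells with subsidence > 3 cm: {(dz < -threshold).sum()}")
      print(f"maximum apparent uplift: {dz.max() * 100:.1f} cm")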

  3. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    NASA Astrophysics Data System (ADS)

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.; Swanson, Erika M.; Cooley, James A.

    2017-08-01

    Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy's Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to the 1-cm scale. Separate models were created for two areas, herein referred to as the test table grid region and the near-field grid region. The test table grid includes the region within 40 m of surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid covered a broader area, 90-130 m from surface ground zero, with photographs collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm resolution in the topographic data needed to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological change detection. The near-field grid data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including uplift and subsidence, rock fall, and slope sloughing. This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.

  4. Detecting Surface Changes from an Underground Explosion in Granite Using Unmanned Aerial System Photogrammetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultz-Fellenz, Emily S.; Coppersmith, Ryan T.; Sussman, Aviva J.

    Efficient detection and high-fidelity quantification of surface changes resulting from underground activities are important national and global security efforts. In this investigation, a team performed field-based topographic characterization by gathering high-quality photographs at very low altitudes from an unmanned aerial system (UAS)-borne camera platform. The data collection occurred shortly before and after a controlled underground chemical explosion as part of the United States Department of Energy’s Source Physics Experiments (SPE-5) series. The high-resolution overlapping photographs were used to create 3D photogrammetric models of the site, which then served to map changes in the landscape down to the 1-cm scale. Separate models were created for two areas, herein referred to as the test table grid region and the near-field grid region. The test table grid includes the region within ~40 m of surface ground zero, with photographs collected at a flight altitude of 8.5 m above ground level (AGL). The near-field grid covered a broader area, 90–130 m from surface ground zero, with photographs collected at a flight altitude of 22 m AGL. The photographs, processed using Agisoft Photoscan® in conjunction with 125 surveyed ground control point targets, yielded a 6-mm pixel-size digital elevation model (DEM) for the test table grid region. This provided the ≤3 cm resolution in the topographic data needed to map in fine detail a suite of features related to the underground explosion: uplift, subsidence, surface fractures, and morphological change detection. The near-field grid data collection resulted in a 2-cm pixel-size DEM, enabling mapping of a broader range of features related to the explosion, including uplift and subsidence, rock fall, and slope sloughing. This study represents one of the first works to constrain, both temporally and spatially, explosion-related surface damage using a UAS photogrammetric platform; these data will help to advance the science of underground explosion detection.

  5. Impacts of Commercial Building Controls on Energy Savings and Peak Load Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fernandez, Nicholas E.P.; Katipamula, Srinivas; Wang, Weimin

    Commercial buildings in the United States use about 18 Quadrillion British thermal units (Quads) of primary energy annually. Studies have shown that as much as 30% of building energy consumption can be avoided by using more accurate sensing, using existing controls better, and deploying advanced controls; hence the motivation for the work described in this report. Studies also have shown that 10% to 20% of the commercial building peak load can be temporarily managed/curtailed to provide grid services. Although many studies have indicated significant potential for reducing the energy consumption in commercial buildings, very few have documented the actual savings. The studies that did so only provided savings at the whole-building level, which makes it difficult to assess the savings potential of each individual measure deployed.

  6. Path and site effects deduced from merged transfrontier internet macroseismic data of two recent M4 earthquakes in northwest Europe using a grid cell approach

    NASA Astrophysics Data System (ADS)

    Van Noten, Koen; Lecocq, Thomas; Sira, Christophe; Hinzen, Klaus-G.; Camelbeeck, Thierry

    2017-04-01

    The online collection of earthquake reports in Europe is strongly fragmented across numerous seismological agencies. This paper demonstrates how collecting and merging online institutional macroseismic data strongly improves the density of observations and the quality of intensity shaking maps. Instead of using ZIP code Community Internet Intensity Maps, we geocode individual response addresses for location improvement, assign intensities to grouped answers within 100 km2 grid cells, and generate intensity attenuation relations from the grid cell intensities. Grid cell intensity maps are less subjective and illustrate a more homogeneous intensity distribution than communal ZIP code intensity maps. Using grid cells for ground motion analysis offers an advanced method for exchanging transfrontier equal-area intensity data without sharing any personal information. The applicability of the method is demonstrated on the felt responses of two clearly felt earthquakes: the 8 September 2011 ML 4.3 (Mw 3.7) Goch (Germany) and the 22 May 2015 ML 4.2 (Mw 3.7) Ramsgate (UK) earthquakes. Both events resulted in a non-circular distribution of intensities which is not explained by geometrical amplitude attenuation alone but illustrates an important low-pass filtering due to the sedimentary cover above the Anglo-Brabant Massif and in the Lower Rhine Graben. Our study illustrates the effect of increasing bedrock depth on intensity attenuation and the importance of the WNW-ESE Caledonian structural axis of the Anglo-Brabant Massif for seismic wave propagation. Seismic waves are less attenuated - high Q - along the strike of a tectonic structure but are more strongly attenuated - low Q - perpendicular to this structure, particularly when they cross rheologically different seismotectonic units separated by crustal-rooted faults.
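
    The grid-cell aggregation step can be sketched as follows: geocoded report coordinates (in a metric projection) are mapped to 10 km x 10 km cell indices and one intensity is assigned per cell from the grouped answers. The toy table and the use of a mean as the cell statistic are illustrative assumptions, not the exact processing used by the authors.

      # Sketch of binning geocoded felt reports into 100 km^2 (10 km x 10 km) grid
      # cells and assigning one intensity per cell; assumes coordinates are already
      # in a metric projection. Column names and values are hypothetical.
      import numpy as np
      import pandas as pd

      reports = pd.DataFrame({
          "x_m": [652_300.0, 653_900.0, 661_250.0],     # projected easting (m)
          "y_m": [5_624_100.0, 5_625_700.0, 5_631_400.0],
          "intensity": [4.0, 3.5, 2.0],                 # individually assigned intensities
      })

      cell = 10_000.0  # 10 km cell size
      reports["ix"] = np.floor(reports["x_m"] / cell).astype(int)
      reports["iy"] = np.floor(reports["y_m"] / cell).astype(int)

      # one intensity per grid cell (mean here), plus the number of grouped answers
      cell_intensity = (reports.groupby(["ix", "iy"])["intensity"]
                        .agg(["mean", "count"])
                        .rename(columns={"mean": "I_cell", "count": "n_reports"}))
      print(cell_intensity)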

  7. Drought and Heat Wave Impacts on Electricity Grid Reliability in Illinois

    NASA Astrophysics Data System (ADS)

    Stillwell, A. S.; Lubega, W. N.

    2016-12-01

    A large proportion of thermal power plants in the United States use cooling systems that discharge large volumes of heated water into rivers and cooling ponds. To minimize thermal pollution from these discharges, restrictions are placed on temperatures at the edge of defined mixing zones in the receiving waters. However, during extended hydrological droughts and heat waves, power plants are often granted thermal variances permitting them to exceed these temperature restrictions. These thermal variances are often deemed necessary for maintaining electricity reliability, particularly as heat waves cause increased electricity demand. Current practice, however, lacks tools for the development of grid-scale operational policies specifying generator output levels that ensure reliable electricity supply while minimizing thermal variances. Such policies must take into consideration characteristics of individual power plants, topology and characteristics of the electricity grid, and locations of power plants within the river basin. In this work, we develop a methodology for the development of these operational policies that captures necessary factors. We develop optimal rules for different hydrological and meteorological conditions, serving as rule curves for thermal power plants. The rules are conditioned on leading modes of the ambient hydrological and meteorological conditions at the different power plant locations, as the locations are geographically close and hydrologically connected. Heat dissipation in the rivers and cooling ponds is modeled using the equilibrium temperature concept. Optimal rules are determined through a Monte Carlo sampling optimization framework. The methodology is applied to a case study of eight power plants in Illinois that were granted thermal variances in the summer of 2012, with a representative electricity grid model used in place of the actual electricity grid.
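
    The equilibrium temperature concept mentioned above can be illustrated with a first-order relaxation of excess water temperature toward the meteorologically determined equilibrium value. The coefficients and time stepping below are illustrative placeholders, not values from the Illinois case study.

      # Sketch of the equilibrium-temperature idea for a receiving-water heat budget:
      # excess temperature decays toward the equilibrium temperature T_e at a rate set
      # by the surface heat-exchange coefficient K. All values are illustrative.
      T_e = 27.0                  # equilibrium temperature (deg C), set by meteorology
      K = 35.0                    # surface heat-exchange coefficient (W/m^2/degC)
      rho, cp = 1000.0, 4186.0    # water density (kg/m^3) and specific heat (J/kg/degC)
      depth = 3.0                 # mean depth of the cooling pond or river reach (m)

      dt = 3600.0                 # time step (s)
      T = 35.0                    # water temperature just downstream of the thermal discharge
      for hour in range(48):
          T += -K * (T - T_e) / (rho * cp * depth) * dt   # first-order relaxation step
      print(f"temperature after 48 h: {T:.2f} C (equilibrium {T_e} C)")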

  8. A Critical Study of Agglomerated Multigrid Methods for Diffusion

    NASA Technical Reports Server (NTRS)

    Thomas, James L.; Nishikawa, Hiroaki; Diskin, Boris

    2009-01-01

    Agglomerated multigrid techniques used in unstructured-grid methods are studied critically for a model problem representative of laminar diffusion in the incompressible limit. The studied target-grid discretizations and discretizations used on agglomerated grids are typical of current node-centered formulations. Agglomerated multigrid convergence rates are presented using a range of two- and three-dimensional randomly perturbed unstructured grids for simple geometries with isotropic and highly stretched grids. Two agglomeration techniques are used within an overall topology-preserving agglomeration framework. The results show that multigrid with an inconsistent coarse-grid scheme using only the edge terms (also referred to in the literature as a thin-layer formulation) provides considerable speedup over single-grid methods but its convergence deteriorates on finer grids. Multigrid with a Galerkin coarse-grid discretization using piecewise-constant prolongation and a heuristic correction factor is slower and also grid-dependent. In contrast, grid-independent convergence rates are demonstrated for multigrid with consistent coarse-grid discretizations. Actual cycle results are verified using quantitative analysis methods in which parts of the cycle are replaced by their idealized counterparts.
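
    For readers unfamiliar with the cycles being analyzed, the sketch below implements a textbook two-grid correction cycle for a one-dimensional Poisson problem with a Galerkin coarse-grid operator formed as R A P. It illustrates the kind of consistent coarse-grid discretization discussed above, not the agglomerated unstructured-grid scheme itself.

      # Textbook two-grid correction cycle for -u'' = f on a 1D grid, with weighted
      # Jacobi smoothing and a Galerkin coarse operator; illustrative only.
      import numpy as np

      def laplacian_1d(n, h):
          """Standard 3-point operator for -u'' on n interior points."""
          return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
                  - np.diag(np.ones(n - 1), -1)) / h**2

      def weighted_jacobi(A, u, f, nu, omega=2.0 / 3.0):
          Dinv = 1.0 / np.diag(A)
          for _ in range(nu):
              u = u + omega * Dinv * (f - A @ u)
          return u

      def two_grid_cycle(A_h, u, f, n):
          nc = (n - 1) // 2                       # coarse interior points (n odd)
          P = np.zeros((n, nc))                   # linear prolongation
          for j in range(nc):
              P[2 * j, j] = 0.5
              P[2 * j + 1, j] = 1.0
              P[2 * j + 2, j] = 0.5
          R = 0.5 * P.T                           # full-weighting restriction
          A_H = R @ A_h @ P                       # Galerkin (consistent) coarse operator
          u = weighted_jacobi(A_h, u, f, nu=2)    # pre-smoothing
          e_H = np.linalg.solve(A_H, R @ (f - A_h @ u))
          u = u + P @ e_H                         # coarse-grid correction
          return weighted_jacobi(A_h, u, f, nu=2) # post-smoothing

      n = 63
      h = 1.0 / (n + 1)
      A = laplacian_1d(n, h)
      f = np.ones(n)
      u = np.zeros(n)
      for it in range(10):
          u = two_grid_cycle(A, u, f, n)
          print(it, np.linalg.norm(f - A @ u))    # residual shrinks at a grid-independent rate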

  9. Analysis of potential impacts of climate change on forests of the United States Pacific Northwest

    Treesearch

    Gregory Latta; Hailemariam Temesgen; Darius Adams; Tara Barrett

    2010-01-01

    As global climate changes over the next century, forest productivity is expected to change as well. Using PRISM climate and productivity data measured on a grid of 3356 plots, we developed a simultaneous autoregressive model to estimate the impacts of climate change on potential productivity of Pacific Northwest forests of the United States. The model, coupled with...

  10. An Updated Global Grid Point Surface Air Temperature Anomaly Data Set: 1851-1990 (revised 1991) (NDP-020)

    DOE Data Explorer

    Jones, P. D. [University of East Anglia, Norwich, United Kingdom; Raper, S. C.B. [University of East Anglia, Norwich, United Kingdom; Cherry, B. S.G. [University of East Anglia, Norwich, United Kingdom; Goodess, C. M. [University of East Anglia, Norwich, United Kingdom; Wigley, T. M. L. [University of East Anglia, Norwich, United Kingdom; Santer, B. [University of East Anglia, Norwich, United Kingdom; Kelly, P. M. [University of East Anglia, Norwich, United Kingdom; Bradley, R. S. [University of Massachusetts, Amherst, Massachusetts (USA); Diaz, H. F. [National Oceanic and Atmospheric Administration (NOAA), Environmental Research Laboratories, Boulder, CO (United States).

    1991-01-01

    This NDP presents land-based monthly surface-air-temperature anomalies (departures from a 1951-1970 reference period mean) on a 5° latitude by 10° longitude global grid. Monthly surface-air-temperature anomalies (departures from a 1957-1975 reference period mean) for the Antarctic (grid points from 65°S to 85°S) are presented in a similar way as a separate data set. The data were derived primarily from the World Weather Records and from the archives of the United Kingdom Meteorological Office. This long-term record of temperature anomalies may be used in studies addressing possible greenhouse-gas-induced climate changes. To date, the data have been employed in producing regional, hemispheric, and global time series for determining whether recent (i.e., post-1900) warming trends have taken place. The present updated version of this data set is identical to the earlier version for all records from 1851-1978 except for the addition of the Antarctic surface-air-temperature anomalies beginning in 1957. Beginning with the 1979 data, this package differs from the earlier version in several ways. Erroneous data for some sites have been corrected after a review of the actual station temperature data, and inconsistencies in the representation of missing values have been removed. For some grid locations, data have been added from stations that had not contributed to the original set. Data from satellites have also been used to correct station records where large discrepancies were evident. The present package also extends the record by adding monthly surface-air-temperature anomalies for the Northern (grid points from 85°N to 0°) and Southern (grid points from 5°S to 60°S) Hemispheres for 1985-1990. In addition, this updated package presents the monthly-mean-temperature records for the individual stations that were used to produce the set of gridded anomalies. The periods of record vary by station. Northern Hemisphere data have been corrected for inhomogeneities, while Southern Hemisphere data are presented in uncorrected form.
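
    A minimal sketch of the anomaly construction is shown below: a per-calendar-month climatology over the 1951-1970 reference period is subtracted from each monthly value. The synthetic array stands in for the real gridded temperatures; shapes and values are illustrative.

      # Sketch of forming monthly surface-air-temperature anomalies as departures from
      # a 1951-1970 reference-period mean, one climatology per calendar month.
      import numpy as np

      years = np.arange(1851, 1991)
      n_years, n_months = years.size, 12
      # temps[year, month, lat, lon] on a 5 deg x 10 deg grid (36 x 36 boxes here), synthetic
      temps = 10.0 + np.random.randn(n_years, n_months, 36, 36)

      ref = (years >= 1951) & (years <= 1970)
      climatology = temps[ref].mean(axis=0)      # per-month reference-period mean
      anomalies = temps - climatology            # departures from the 1951-1970 mean

      print(anomalies.shape, float(anomalies[0, 0].mean()))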

  11. Global hydrodynamic modelling of flood inundation in continental rivers: How can we achieve it?

    NASA Astrophysics Data System (ADS)

    Yamazaki, D.

    2016-12-01

    Global-scale modelling of river hydrodynamics is essential for understanding the global hydrological cycle, and is also required in interdisciplinary research fields. Global river models have been developed continuously for more than two decades, but modelling river flow at a global scale is still a challenging topic because surface water movement in continental rivers is a multi-spatial-scale phenomenon. We have to consider the basin-wide water balance (>1000 km scale), while hydrodynamics in river channels and floodplains is regulated by much smaller-scale topography (<100 m scale). For example, heavy precipitation in upstream regions may later cause flooding in the farthest downstream reaches. In order to realistically simulate the timing and amplitude of flood wave propagation over a long distance, consideration of detailed local topography is unavoidable. I have developed the global hydrodynamic model CaMa-Flood to overcome this scale discrepancy of continental river flow. CaMa-Flood divides river basins into multiple “unit-catchments” and assumes the water level is uniform within each unit-catchment. One unit-catchment is assigned to each grid box defined at the typical spatial resolution of global climate models (10-100 km scale). Adopting a uniform water level in a >10 km river segment seems to be a big assumption, but it is actually a good approximation for hydrodynamic modelling of continental rivers. The number of grid points required for global hydrodynamic simulations is largely reduced by this “unit-catchment assumption”. As an alternative to calculating 2-dimensional floodplain flows as in regional flood models, CaMa-Flood treats floodplain inundation in a unit-catchment as sub-grid physics. The water level and inundated area in each unit-catchment are diagnosed from water volume using topography parameters derived from high-resolution digital elevation models. Thus, CaMa-Flood is at least 1000 times more computationally efficient than regional flood inundation models while retaining realistic simulated flood dynamics. I will explain in detail how the CaMa-Flood model has been constructed from high-resolution topography datasets, and how the model can be used for various interdisciplinary applications.
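
    The sub-grid diagnosis described above can be sketched as the inversion of a unit-catchment's elevation-area profile: stored floodplain volume is mapped to a water level and an inundated area. The profile values below are invented for illustration and are not CaMa-Flood parameters.

      # Sketch of the sub-grid floodplain diagnosis idea: invert a unit-catchment's
      # elevation-area (hypsometric) profile to obtain water level and inundated area
      # from stored water volume. Profile values are illustrative placeholders.
      import numpy as np

      # floodplain profile: cumulative area (m^2) flooded when water reaches each level (m above bank)
      levels = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
      areas = np.array([0.0, 2.0e6, 5.0e6, 1.2e7, 2.5e7])   # monotonically increasing

      # cumulative storage above bank, obtained by integrating area over level (trapezoid rule)
      storage = np.concatenate(([0.0],
                                np.cumsum(0.5 * (areas[1:] + areas[:-1]) * np.diff(levels))))

      def diagnose(volume_m3):
          """Water level and inundated area for a given floodplain storage volume."""
          level = np.interp(volume_m3, storage, levels)
          area = np.interp(level, levels, areas)
          return level, area

      lvl, area = diagnose(8.0e6)
      print(f"water level {lvl:.2f} m, inundated area {area / 1e6:.1f} km^2")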

  12. Integrated Canada-U.S. Power Sector Modeling with the Regional Energy Deployment System (ReEDS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez, A.; Eurek, K.; Mai, T.

    2013-02-01

    The electric power system in North America is linked between the United States and Canada. Canada has historically been a net exporter of electricity to the United States. The extent to which this remains true will depend on the future evolution of power markets, technology deployment, and policies. To evaluate these and related questions, we modify the Regional Energy Deployment System (ReEDS) model to add an explicit representation of the grid-connected power system in Canada to that of the continental United States. ReEDS is unique among long-term capacity expansion models for its high spatial resolution and statistical treatment of the impact of variable renewable generation on capacity planning and dispatch. These unique traits are extended to the new Canadian regions. We present example scenario results using the fully integrated Canada-U.S. version of ReEDS to demonstrate model capabilities. The newly developed, integrated Canada-U.S. ReEDS model can be used to analyze the dynamics of electricity transfers and other grid services between the two countries under different scenarios.

  13. Evaluation of arctic multibeam sonar data quality using nadir crossover error analysis and compilation of a full-resolution data product

    NASA Astrophysics Data System (ADS)

    Flinders, Ashton F.; Mayer, Larry A.; Calder, Brian A.; Armstrong, Andrew A.

    2014-05-01

    We document a new high-resolution multibeam bathymetry compilation for the Canada Basin and Chukchi Borderland in the Arctic Ocean - United States Arctic Multibeam Compilation (USAMBC Version 1.0). The compilation preserves the highest native resolution of the bathymetric data, allowing for more detailed interpretation of seafloor morphology than has been previously possible. The compilation was created from multibeam bathymetry data available through openly accessible government and academic repositories. Much of the new data was collected during dedicated mapping cruises in support of the United States effort to map extended continental shelf regions beyond the 200 nm Exclusive Economic Zone. Data quality was evaluated using nadir-beam crossover-error statistics, making it possible to assess the precision of multibeam depth soundings collected from a wide range of vessels and sonar systems. Data were compiled into a single high-resolution grid through a vertical stacking method, preserving the highest quality data source in any specific grid cell. The crossover-error analysis and method of data compilation can be applied to other multi-source multibeam data sets, and is particularly useful for government agencies targeting extended continental shelf regions but with limited hydrographic capabilities. Both the gridded compilation and an easily distributed geospatial PDF map are freely available through the University of New Hampshire's Center for Coastal and Ocean Mapping (ccom.unh.edu/theme/law-sea). The geospatial pdf is a full resolution, small file-size product that supports interpretation of Arctic seafloor morphology without the need for specialized gridding/visualization software.

  14. Evaluation of downscaled, gridded climate data for the conterminous United States

    USGS Publications Warehouse

    Robert J. Behnke,; Stephen J. Vavrus,; Andrew Allstadt,; Thomas P. Albright,; Thogmartin, Wayne E.; Volker C. Radeloff,

    2016-01-01

    Weather and climate affect many ecological processes, making spatially continuous yet fine-resolution weather data desirable for ecological research and predictions. Numerous downscaled weather data sets exist, but little attempt has been made to evaluate them systematically. Here we address this shortcoming by focusing on four major questions: (1) How accurate are downscaled, gridded climate data sets in terms of temperature and precipitation estimates?, (2) Are there significant regional differences in accuracy among data sets?, (3) How accurate are their mean values compared with extremes?, and (4) Does their accuracy depend on spatial resolution? We compared eight widely used downscaled data sets that provide gridded daily weather data for recent decades across the United States. We found considerable differences among data sets and between downscaled and weather station data. Temperature is represented more accurately than precipitation, and climate averages are more accurate than weather extremes. The data set exhibiting the best agreement with station data varies among ecoregions. Surprisingly, the accuracy of the data sets does not depend on spatial resolution. Although some inherent differences among data sets and weather station data are to be expected, our findings highlight how much different interpolation methods affect downscaled weather data, even for local comparisons with nearby weather stations located inside a grid cell. More broadly, our results highlight the need for careful consideration among different available data sets in terms of which variables they describe best, where they perform best, and their resolution, when selecting a downscaled weather data set for a given ecological application.
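
    The station-versus-grid comparison underlying such an evaluation can be sketched with a few summary statistics (bias, RMSE, error in the annual extreme) computed between a downscaled cell and the station it contains. The synthetic daily series below merely stand in for real station and gridded data.

      # Sketch of a station-versus-grid evaluation: bias and RMSE of daily values plus
      # the error in an annual extreme. Synthetic series stand in for real data.
      import numpy as np

      rng = np.random.default_rng(1)
      station = 15.0 + 10.0 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 3, 365)
      gridded = station + rng.normal(0.5, 2.0, 365)      # downscaled cell containing the station

      bias = np.mean(gridded - station)
      rmse = np.sqrt(np.mean((gridded - station) ** 2))
      extreme_err = gridded.max() - station.max()        # error in the annual maximum
      print(f"daily bias {bias:+.2f} C, RMSE {rmse:.2f} C, annual-max error {extreme_err:+.2f} C")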

  15. AVERT, COBRA, GHG Inventory and GreenHouse Gas (GHG) Reporting Program (2017 EIC)

    EPA Pesticide Factsheets

    AVERT captures the actual historical hourly operation of electricity generating units (EGUs) to predict how EGUs will operate with additional EE/RE delivered to the electricity grid.

  16. Rolling scheduling of electric power system with wind power based on improved NNIA algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Q. S.; Luo, C. J.; Yang, D. J.; Fan, Y. H.; Sang, Z. X.; Lei, H.

    2017-11-01

    This paper puts forth a rolling modification strategy for day-ahead scheduling of a power system with wind power, which takes the unit operation cost increment and the curtailed wind power of the grid as dual modification objectives. Additionally, an improved Nondominated Neighbor Immune Algorithm (NNIA) is proposed for the solution. The proposed rolling scheduling model further improves the system operation cost during the intra-day generation process, enhances the system’s capacity to accommodate wind power, and modifies the power flow of key transmission sections in a rolling manner to satisfy the security constraints of the power grid. The improved NNIA algorithm defines an antibody preference relation model based on the equal incremental rate, regulation deviation constraints, and the maximum and minimum technical outputs of units. The model can noticeably guide the direction of antibody evolution, significantly speed up convergence to the final solution, and enhance the local search capability.

  17. Utilization of optical sensors for phasor measurement units

    DOE PAGES

    Yao, Wenxuan; Wells, David; King, Daniel; ...

    2017-11-10

    With the help of GPS signals for synchronization, increasingly ubiquitous phasor measurement units (PMUs) provide power grid operators unprecedented system monitoring and control opportunities. However, the performance of PMUs is limited by the inherent deficiencies of traditional transformers. To address these issues, an optical sensor is used in the PMU for signal acquisition in place of the traditional transformers. This is the first reported use of an optical sensor in a PMU. The accuracy of frequency, angle, and amplitude is evaluated via experiments. The optical-sensor-based PMU achieves an accuracy of 9.03 × 10⁻⁴ Hz for frequency, 6.38 × 10⁻³ rad for angle, and 6.73 × 10⁻² V for amplitude with a real power grid signal, demonstrating the practicability of optical sensors in future PMUs.
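
    As background on the processing such a device performs, the sketch below estimates amplitude, phase, and frequency from a sampled waveform with a one-cycle discrete Fourier transform, deriving frequency from the phase rotation between successive cycles. This is a generic PMU-style estimator with made-up signal parameters, not the algorithm of the optical-sensor PMU.

      # Generic one-cycle DFT phasor estimate from sampled waveform data; a sketch of
      # PMU-style processing, not the reported device's algorithm.
      import numpy as np

      f_nom = 60.0           # nominal grid frequency (Hz)
      fs = 3840.0            # sampling rate, 64 samples per nominal cycle
      n = int(fs / f_nom)    # samples in one nominal cycle

      t = np.arange(2 * n) / fs
      signal = 120.0 * np.sqrt(2) * np.cos(2 * np.pi * 60.02 * t + 0.3)   # test waveform

      def phasor(window):
          """Return RMS amplitude and phase (rad) of the fundamental over one cycle."""
          k = np.arange(window.size)
          ph = (2.0 / window.size) * np.sum(window * np.exp(-2j * np.pi * k / window.size))
          return np.abs(ph) / np.sqrt(2), np.angle(ph)

      a1, p1 = phasor(signal[:n])
      a2, p2 = phasor(signal[n:2 * n])
      freq = f_nom + (p2 - p1) / (2 * np.pi) * f_nom   # frequency from phase rotation per cycle
      print(f"amplitude {a1:.2f} V rms, phase {p1:.3f} rad, frequency {freq:.3f} Hz")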

  18. Utilization of optical sensors for phasor measurement units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Wenxuan; Wells, David; King, Daniel

    With the help of GPS signals for synchronization, increasingly ubiquitous phasor measurement units (PMUs) provide power grid operators unprecedented system monitoring and control opportunities. However, the performance of PMUs is limited by the inherent deficiencies of traditional transformers. To address these issues, an optical sensor is used in the PMU for signal acquisition in place of the traditional transformers. This is the first reported use of an optical sensor in a PMU. The accuracy of frequency, angle, and amplitude is evaluated via experiments. The optical-sensor-based PMU achieves an accuracy of 9.03 × 10⁻⁴ Hz for frequency, 6.38 × 10⁻³ rad for angle, and 6.73 × 10⁻² V for amplitude with a real power grid signal, demonstrating the practicability of optical sensors in future PMUs.

  19. Multi-port power router and its impact on resilient power grid systems

    NASA Astrophysics Data System (ADS)

    Kado, Yuichi; Iwatsuki, Katsumi; Wada, Keiji

    2016-02-01

    We propose a Y-configuration power router as a unit cell to easily construct a power delivery system that can meet many types of user requirements. The Y-configuration power router controls the direction and magnitude of power flow among three ports regardless of DC and AC. We constructed a prototype three-way isolated DC/DC converter that is the core unit of the Y-configuration power router and tested the power flow control operation. Experimental results revealed that our methodology based on the governing equation was appropriate for the power flow control of the three-way DC/DC converter. In addition, the hexagonal distribution network composed of the power routers has the ability to easily interchange electric power between autonomous microgrid cells. We also explored the requirements for communication between energy routers to achieve dynamic adjustments of energy flow in a coordinated manner and its impact on resilient power grid systems.

  20. Dispersed solar thermal generation employing parabolic dish-electric transport with field modulated generator systems

    NASA Technical Reports Server (NTRS)

    Ramakumar, R.; Bahrami, K.

    1981-01-01

    This paper discusses the application of field modulated generator systems (FMGS) to dispersed solar-thermal-electric generation from a parabolic dish field with electric transport. Each solar generation unit is rated at 15 kWe and the power generated by an array of such units is electrically collected for insertion into an existing utility grid. Such an approach appears to be most suitable when the heat engine rotational speeds are high (greater than 6000 r/min) and, in particular, if they are operated in the variable speed mode and if utility-grade a.c. is required for direct insertion into the grid without an intermediate electric energy storage and reconversion system. Predictions of overall efficiencies based on conservative efficiency figures for the FMGS are in the range of 25 per cent and should be encouraging to those involved in the development of cost-effective dispersed solar thermal power systems.

  1. Power in the loop real time simulation platform for renewable energy generation

    NASA Astrophysics Data System (ADS)

    Li, Yang; Shi, Wenhui; Zhang, Xing; He, Guoqing

    2018-02-01

    Nowadays, a large amount of renewable energy generation is being connected to the power system, and real-time simulation platforms are widely used to carry out research on integration control algorithms, power system stability, and related topics. Compared with traditional pure digital simulation and hardware-in-the-loop simulation, power-in-the-loop simulation has higher accuracy and reliability. In this paper, a power-in-the-loop analog-digital hybrid simulation platform has been built; it can be used not only for a single generation unit connected to the grid but also for multiple renewable generation units connected to the grid. A wind generator inertia control experiment was carried out on the platform. The structure of the inertia control platform was studied, and the results verify that the platform meets the requirements of power-in-the-loop real-time simulation of renewable generation.
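
    A control law of the kind such a platform can exercise is synthetic inertia, where the wind unit adds power in proportion to the negative rate of change of frequency, optionally with a droop term. The gains and the synthetic frequency dip below are illustrative only.

      # Sketch of a synthetic-inertia plus droop control law that a wind generator
      # inertia-control experiment might exercise. Gains and the frequency trace are
      # illustrative, not the platform's actual settings.
      import numpy as np

      dt = 0.02                                   # controller time step (s)
      t = np.arange(0.0, 5.0, dt)
      f = 50.0 - 0.3 * (1 - np.exp(-t / 1.0))     # synthetic frequency dip (Hz)

      K_inertia = 8.0                             # MW per (Hz/s), illustrative
      K_droop = 2.0                               # MW per Hz, illustrative
      f_nom = 50.0

      dfdt = np.gradient(f, dt)                   # rate of change of frequency
      delta_p = -K_inertia * dfdt - K_droop * (f - f_nom)
      print(f"peak added power {delta_p.max():.2f} MW at t = {t[np.argmax(delta_p)]:.2f} s")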

  2. Spatiotemporal video deinterlacing using control grid interpolation

    NASA Astrophysics Data System (ADS)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.

  3. Bayesian function-on-function regression for multilevel functional data.

    PubMed

    Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S

    2015-09-01

    Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data, where the unit of observation is a curve or set of curves that are finely sampled over a grid, is frequently obtained. Moreover, researchers often sample multiple curves per person, resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images. © 2015, The International Biometric Society.

  4. Anomaly Detection Using Optimally-Placed μPMU Sensors in Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamei, Mahdi; Scaglione, Anna; Roberts, Ciaran

    As the distribution grid moves toward a tightly monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase the operators' situational awareness of the system. Here, focusing on Micro-Phasor Measurement Unit (μPMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Due to the key role of the μPMU devices in our architecture, a source-constrained optimal μPMU placement is also described that finds the best location of the devices with respect to our rules. The effectiveness of the proposed methods is tested using synthetic and real μPMU data.

  5. Anomaly Detection Using Optimally-Placed μPMU Sensors in Distribution Grids

    DOE PAGES

    Jamei, Mahdi; Scaglione, Anna; Roberts, Ciaran; ...

    2017-10-25

    As the distribution grid moves toward a tightly monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase the operators' situational awareness of the system. Here, focusing on Micro-Phasor Measurement Unit (μPMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Due to the key role of the μPMU devices in our architecture, a source-constrained optimal μPMU placement is also described that finds the best location of the devices with respect to our rules. The effectiveness of the proposed methods is tested using synthetic and real μPMU data.
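
    One simple analytics primitive of the kind such an architecture could host is sketched below: voltage-magnitude samples are flagged when they deviate from a rolling median by more than a robust threshold. The stream, window length, and threshold are illustrative and do not reproduce the paper's hierarchical rules.

      # Illustrative anomaly-detection primitive on a per-unit voltage-magnitude stream:
      # flag samples that deviate sharply from a rolling median baseline.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      v = pd.Series(1.0 + 0.002 * rng.standard_normal(2000))   # per-unit voltage stream
      v.iloc[1200:1210] += 0.05                                 # injected disturbance

      window = 120                                              # samples in rolling window
      baseline = v.rolling(window, center=True, min_periods=1).median()
      residual = (v - baseline).abs()
      mad = residual.median()                                   # robust noise scale
      threshold = 8.0 * mad

      anomalies = v.index[residual > threshold]
      if len(anomalies):
          print(f"flagged {len(anomalies)} samples, first at index {anomalies[0]}")
      else:
          print("no anomalies flagged")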

  6. Does resolution of flow field observation influence apparent habitat use and energy expenditure in juvenile coho salmon?

    USGS Publications Warehouse

    Tullos, Desiree D.; Walter, Cara; Dunham, Jason B.

    2016-01-01

    This study investigated how the resolution of observation influences interpretation of how fish, juvenile Coho Salmon (Oncorhynchus kisutch), exploit the hydraulic environment in streams. Our objectives were to evaluate how spatial resolution of the flow field observation influenced: (1) the velocities considered to be representative of habitat units; (2) patterns of use of the hydraulic environment by fish; and (3) estimates of energy expenditure. We addressed these objectives using observations within a 1:1 scale physical model of a full-channel log jam in an outdoor experimental stream. Velocities were measured with Acoustic Doppler Velocimetry at a 10 cm grid spacing, whereas fish locations and tailbeat frequencies were documented over time using underwater videogrammetry. Results highlighted that resolution of observation did impact perceived habitat use and energy expenditure, as did the location of measurement within habitat units and the use of averaging to summarize velocities within a habitat unit. In this experiment, the range of velocities and energy expenditure estimates increased with coarsening resolution (grid spacing from 10 to 100 cm), reducing the likelihood of measuring the velocities locally experienced by fish. In addition, the coarser resolutions contributed to fish appearing to select velocities that were higher than what was measured at finer resolutions. These findings indicate the need for careful attention to and communication of resolution of observation in investigating the hydraulic environment and in determining the habitat needs and bioenergetics of aquatic biota.

  7. Can developing countries leapfrog the centralized electrification paradigm?

    DOE PAGES

    Levin, Todd; Thomas, Valerie M.

    2016-02-04

    Due to the rapidly decreasing costs of small renewable electricity generation systems, centralized power systems are no longer a necessary condition of universal access to modern energy services. Developing countries, where centralized electricity infrastructures are less developed, may be able to adopt these new technologies more quickly. We first review the costs of grid extension and distributed solar home systems (SHSs) as reported by a number of different studies. We then present a general analytic framework for analyzing the choice between extending the grid and implementing distributed solar home systems. Drawing upon reported grid expansion cost data for three specific regions, we demonstrate this framework by determining the electricity consumption levels at which the costs of provision through centralized and decentralized approaches are equivalent in these regions. We then calculate the SHS capital costs that are necessary for these technologies to provide each of the five tiers of energy access defined by the United Nations Sustainable Energy for All initiative. Our results suggest that solar home systems can play an important role in achieving universal access to basic energy services. The extent of this role depends on three primary factors: SHS costs, grid expansion costs, and centralized generation costs. Given current technology costs, centralized systems will still be required to enable higher levels of consumption; however, cost reduction trends have the potential to disrupt this paradigm. Furthermore, by looking ahead rather than replicating older infrastructure styles, developing countries can leapfrog to a more distributed electricity service model.
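
    The break-even idea in the framework can be sketched with a single equation: the annual consumption at which the fixed cost of grid extension plus its per-kWh cost equals the consumption-driven cost of a solar home system. All figures below are illustrative placeholders rather than the reported regional data.

      # Sketch of the break-even comparison between grid extension and a solar home
      # system (SHS). All cost figures are illustrative placeholders.
      grid_fixed = 800.0   # annualized grid-extension and connection cost ($/household/yr)
      grid_var = 0.10      # delivered energy cost over the grid ($/kWh)
      shs_var = 0.60       # annualized SHS cost per kWh of annual service ($/kWh), capacity-driven

      # grid_fixed + grid_var * E = shs_var * E  at the break-even consumption E
      breakeven_kwh = grid_fixed / (shs_var - grid_var)
      print(f"below ~{breakeven_kwh:.0f} kWh/household/yr the solar home system is cheaper")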

  8. MODFLOW–USG version 1: An unstructured grid version of MODFLOW for simulating groundwater flow and tightly coupled processes using a control volume finite-difference formulation

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.; Niswonger, Richard G.; Ibaraki, Motomu; Hughes, Joseph D.

    2013-01-01

    A new version of MODFLOW, called MODFLOW–USG (for UnStructured Grid), was developed to support a wide variety of structured and unstructured grid types, including nested grids and grids based on prismatic triangles, rectangles, hexagons, and other cell shapes. Flexibility in grid design can be used to focus resolution along rivers and around wells, for example, or to subdiscretize individual layers to better represent hydrostratigraphic units. MODFLOW–USG is based on an underlying control volume finite difference (CVFD) formulation in which a cell can be connected to an arbitrary number of adjacent cells. To improve accuracy of the CVFD formulation for irregular grid-cell geometries or nested grids, a generalized Ghost Node Correction (GNC) Package was developed, which uses interpolated heads in the flow calculation between adjacent connected cells. MODFLOW–USG includes a Groundwater Flow (GWF) Process, based on the GWF Process in MODFLOW–2005, as well as a new Connected Linear Network (CLN) Process to simulate the effects of multi-node wells, karst conduits, and tile drains, for example. The CLN Process is tightly coupled with the GWF Process in that the equations from both processes are formulated into one matrix equation and solved simultaneously. This robustness results from using an unstructured grid with unstructured matrix storage and solution schemes. MODFLOW–USG also contains an optional Newton-Raphson formulation, based on the formulation in MODFLOW–NWT, for improving solution convergence and avoiding problems with the drying and rewetting of cells. Because the existing MODFLOW solvers were developed for structured and symmetric matrices, they were replaced with a new Sparse Matrix Solver (SMS) Package developed specifically for MODFLOW–USG. The SMS Package provides several methods for resolving nonlinearities and multiple symmetric and asymmetric linear solution schemes to solve the matrix arising from the flow equations and the Newton-Raphson formulation, respectively.
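
    The CVFD idea underlying the unstructured formulation can be sketched on a tiny network: each cell exchanges flow with an arbitrary list of neighbors through a conductance, and the balance equations are assembled from a connection list rather than a structured stencil. The four-cell example below is illustrative and omits ghost-node corrections, nonlinearities, and the specialized solvers.

      # Sketch of a control-volume finite-difference (CVFD) balance on an unstructured
      # connection list: Q_ij = C_ij * (h_j - h_i). The 4-cell network is illustrative.
      import numpy as np

      connections = [(0, 1, 50.0), (1, 2, 40.0), (1, 3, 25.0)]   # (cell_i, cell_j, conductance m^2/d)
      n_cells = 4
      q_source = np.array([100.0, 0.0, 0.0, 0.0])   # recharge into cell 0 (m^3/d)
      fixed_head = {3: 10.0}                        # cell 3 is a constant-head boundary (m)

      A = np.zeros((n_cells, n_cells))
      b = -q_source.copy()                          # move sources to the right-hand side
      for i, j, c in connections:                   # assemble the cell balance equations
          A[i, i] -= c; A[i, j] += c
          A[j, j] -= c; A[j, i] += c
      for cell, h in fixed_head.items():            # impose the boundary condition
          A[cell, :] = 0.0
          A[cell, cell] = 1.0
          b[cell] = h

      heads = np.linalg.solve(A, b)
      print("heads (m):", np.round(heads, 2))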

  9. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition the code for each application is likely to have been developed on different architectures and tend to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exists operational issues such as platform stability and resource management.

  10. Electronic-type vacuum gauges with replaceable elements

    DOEpatents

    Edwards, Jr., David

    1984-01-01

    In electronic devices for measuring pressures in vacuum systems, the metal elements which undergo thermal deterioration are made readily replaceable by making them parts of a simple plug-in unit. Thus, in ionization gauges, the filament and grid or electron collector are mounted on the novel plug-in unit. In thermocouple pressure gauges, the heater and attached thermocouple are mounted on the plug-in unit. Plug-in units have been designed to function, alternatively, as ionization gauge and as thermocouple gauge, thus providing new gauges capable of measuring broader pressure ranges than is possible with either an ionization gauge or a thermocouple gauge.

  11. An integrated Bayesian model for estimating the long-term health effects of air pollution by fusing modelled and measured pollution data: A case study of nitrogen dioxide concentrations in Scotland.

    PubMed

    Huang, Guowen; Lee, Duncan; Scott, Marian

    2015-01-01

    The long-term health effects of air pollution can be estimated using a spatio-temporal ecological study, where the disease data are counts of hospital admissions from populations in small areal units at yearly intervals. Spatially representative pollution concentrations for each areal unit are typically estimated by applying Kriging to data from a sparse monitoring network, or by computing averages over grid level concentrations from an atmospheric dispersion model. We propose a novel fusion model for estimating spatially aggregated pollution concentrations using both the modelled and monitored data, and relate these concentrations to respiratory disease in a new study in Scotland between 2007 and 2011. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Interim Report by Asia International Grid Connection Study Group

    NASA Astrophysics Data System (ADS)

    Omatsu, Ryo

    2018-01-01

    The Asia International Grid Connection Study Group Interim Report examines the feasibility of developing an international grid connection in Japan. The Group has investigated different cases of grid connections in Europe and conducted research on electricity markets in Northeast Asia, and identifies the barriers and challenges for developing an international grid network including Japan. This presentation introduces basic contents of the interim report by the Study Group.

  13. Grid Integration and the Carrying Capacity of the U.S. Grid to Incorporate Variable Renewable Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin; Denholm, Paul; Speer, Bethany

    2015-04-23

    In the United States and elsewhere, renewable energy (RE) generation supplies an increasingly large percentage of annual demand, including nine U.S. states where wind comprised over 10% of in-state generation in 2013. This white paper summarizes the challenges to integrating increasing amounts of variable RE, identifies emerging practices in power system planning and operation that can facilitate grid integration, and proposes a unifying concept—economic carrying capacity—that can provide a framework for evaluating actions to accommodate higher penetrations of RE. There is growing recognition that while technical challenges to variable RE integration are real, they can generally be addressed via a variety of solutions that vary in implementation cost. As a result, limits to RE penetration are primarily economic, driven by factors that include transmission and the flexibility of the power grid to balance supply and demand. This limit can be expressed as economic carrying capacity, or the point at which variable RE is no longer economically competitive or desirable to the system or society.

  14. Biomass energy inventory and mapping system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasile, J.D.

    1993-12-31

    A four-stage biomass energy inventory and mapping study was conducted for the entire State of Ohio. The product is a set of maps and an inventory of the State's energy biomass resource referenced to a one-kilometer grid square basis on the Universal Transverse Mercator (UTM) system. Each square kilometer is identified and mapped showing total British Thermal Unit (BTU) energy availability. Land cover percentages and BTU values are provided for each of nine biomass strata types for each one-kilometer grid square. LANDSAT satellite data was used as the primary stratifier. The second-stage sampling was the photointerpretation of randomly selected one-kilometer grid squares that exactly corresponded to the LANDSAT one-kilometer grid square classification orientation. Field sampling comprised the third stage of the energy biomass inventory system and was combined with the fourth-stage sample of laboratory biomass energy analysis using a bomb calorimeter; the results were then used to assign BTU values to the photointerpretation and to adjust the LANDSAT classification. The sampling error for the whole system was 3.91%.
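
    The per-grid-square tally can be sketched as a weighted sum of per-stratum BTU yields by land-cover percentage. The stratum names and energy values below are hypothetical placeholders, not figures from the Ohio inventory.

      # Sketch of the per-grid-square energy tally: combine each square kilometer's
      # land-cover percentages with per-stratum BTU yields. Values are hypothetical.
      cover_pct = {"hardwood": 40.0, "cropland residue": 25.0, "brush": 10.0}         # % of the 1 km^2 cell
      btu_per_km2 = {"hardwood": 9.0e9, "cropland residue": 2.5e9, "brush": 1.2e9}    # BTU at 100% cover

      total_btu = sum(cover_pct[s] / 100.0 * btu_per_km2[s] for s in cover_pct)
      print(f"total available energy in this grid square: {total_btu:.2e} BTU")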

  15. BNL ATLAS Grid Computing

    ScienceCinema

    Michael Ernst

    2017-12-09

    As the sole Tier-1 computing facility for ATLAS in the United States and the largest ATLAS computing center worldwide, Brookhaven provides a large portion of the overall computing resources for U.S. collaborators and serves as the central hub for storing,

  16. Corrosion evaluation of mechanically stabilized earth walls.

    DOT National Transportation Integrated Search

    2005-09-01

    Numerous reinforced walls and slopes have been built over the past four decades in Kentucky, the United States, as well as worldwide. Tensile elements used in constructing low-cost reinforcing walls and slopes consist of metal polymer strips or grids...

  17. US Greenhouse Gas (GHG) Emissions and Avoided Emissions and Generation Tool Training (AVERT) (2015 EIC)

    EPA Pesticide Factsheets

    AVERT captures the actual historical hourly operation of electricity generating units (EGUs) to predict how EGUs will operate with additional EE/RE delivered to the electricity grid.

  18. Advanced Photovoltaic Inverter Control Development and Validation in a Controller-Hardware-in-the-Loop Test Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabakar, Kumaraguru; Shirazi, Mariko; Singh, Akanksha

    Penetration levels of solar photovoltaic (PV) generation on the electric grid have increased in recent years. In the past, most PV installations have not included grid-support functionalities. But today, standards such as the upcoming revisions to IEEE 1547 recommend grid support and anti-islanding functions, including volt-var, frequency-watt, volt-watt, frequency/voltage ride-through, and other inverter functions. These functions allow for the standardized interconnection of distributed energy resources into the grid. This paper develops and tests low-level inverter current control and high-level grid support functions. The controller was developed to integrate advanced inverter functions in a systematic approach, thus avoiding conflict among the different control objectives. The algorithms were then programmed on an off-the-shelf, embedded controller with a dual-core computer processing unit and field-programmable gate array (FPGA). This programmed controller was tested using a controller-hardware-in-the-loop (CHIL) test bed setup using an FPGA-based real-time simulator. The CHIL was run at a time step of 500 ns to accommodate the 20-kHz switching frequency of the developed controller. The details of the advanced control function and CHIL test bed provided here will aid future researchers when designing, implementing, and testing advanced functions of PV inverters.
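
    One of the grid-support functions named above, volt-var control, can be sketched as a piecewise-linear curve mapping measured voltage to a reactive power command. The breakpoints below are illustrative and are not the values specified by IEEE 1547 or used in the developed controller.

      # Sketch of a piecewise-linear volt-var characteristic; breakpoints illustrative only.
      import numpy as np

      # (voltage in per unit, reactive power command as a fraction of rated var capability)
      v_points = np.array([0.90, 0.96, 1.04, 1.10])
      q_points = np.array([+1.0, 0.0, 0.0, -1.0])   # inject vars when voltage is low, absorb when high

      def volt_var(v_pu):
          """Reactive power setpoint (fraction of Qmax) for a measured voltage."""
          return float(np.interp(v_pu, v_points, q_points))

      for v in (0.92, 1.00, 1.07):
          print(f"V = {v:.2f} pu -> Q command = {volt_var(v):+.2f} * Qmax")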

  19. Use of Emerging Grid Computing Technologies for the Analysis of LIGO Data

    NASA Astrophysics Data System (ADS)

    Koranda, Scott

    2004-03-01

    The LIGO Scientific Collaboration (LSC) today faces the challenge of enabling analysis of terabytes of LIGO data by hundreds of scientists from institutions all around the world. To meet this challenge the LSC is developing tools, infrastructure, applications, and expertise leveraging Grid Computing technologies available today, and making available to LSC scientists compute resources at sites across the United States and Europe. We use digital credentials for strong and secure authentication and authorization to compute resources and data. Building on top of products from the Globus project for high-speed data transfer and information discovery we have created the Lightweight Data Replicator (LDR) to securely and robustly replicate data to resource sites. We have deployed at our computing sites the Virtual Data Toolkit (VDT) Server and Client packages, developed in collaboration with our partners in the GriPhyN and iVDGL projects, providing uniform access to distributed resources for users and their applications. Taken together these Grid Computing technologies and infrastructure have formed the LSC DataGrid--a coherent and uniform environment across two continents for the analysis of gravitational-wave detector data. Much work, however, remains in order to scale current analyses and recent lessons learned need to be integrated into the next generation of Grid middleware.

  20. Three-dimensional hydrogeologic framework model for use with a steady-state numerical ground-water flow model of the Death Valley regional flow system, Nevada and California

    USGS Publications Warehouse

    Belcher, Wayne R.; Faunt, Claudia C.; D'Agnese, Frank A.

    2002-01-01

    The U.S. Geological Survey, in cooperation with the Department of Energy and other Federal, State, and local agencies, is evaluating the hydrogeologic characteristics of the Death Valley regional ground-water flow system. The ground-water flow system covers an area of about 100,000 square kilometers from latitude 35° to 38°15' North and longitude 115° to 118° West, with the flow system proper comprising about 45,000 square kilometers. The Death Valley regional ground-water flow system is one of the larger flow systems within the Southwestern United States and includes in its boundaries the Nevada Test Site, Yucca Mountain, and much of Death Valley. Part of this study includes the construction of a three-dimensional hydrogeologic framework model to serve as the foundation for the development of a steady-state regional ground-water flow model. The digital framework model provides a computer-based description of the geometry and composition of the hydrogeologic units that control regional flow. The framework model of the region was constructed by merging two previous framework models constructed for the Yucca Mountain Project and the Environmental Restoration Program Underground Test Area studies at the Nevada Test Site. The hydrologic characteristics of the region result from a currently arid climate and complex geology. Interbasinal regional ground-water flow occurs through a thick carbonate-rock sequence of Paleozoic age, a locally thick volcanic-rock sequence of Tertiary age, and basin-fill alluvium of Tertiary and Quaternary age. Throughout the system, deep and shallow ground-water flow may be controlled by extensive and pervasive regional and local faults and fractures. The framework model was constructed using data from several sources to define the geometry of the regional hydrogeologic units. These data sources include (1) a 1:250,000-scale hydrogeologic-map compilation of the region; (2) regional-scale geologic cross sections; (3) borehole information; and (4) gridded surfaces from a previous three-dimensional geologic model. In addition, digital elevation model data were used in conjunction with these data to define ground-surface altitudes. These data, properly oriented in three dimensions by using geographic information systems, were combined and gridded to produce the upper surfaces of the hydrogeologic units used in the flow model. The final geometry of the framework model is constructed as a volumetric model by incorporating the intersections of these gridded surfaces and by applying fault truncation rules to structural features from the geologic map and cross sections. The cells defining the geometry of the hydrogeologic framework model can be assigned several attributes such as lithology, hydrogeologic unit, thickness, and top and bottom altitudes.

  1. Synchronized Phasor Data for Analyzing Wind Power Plant Dynamic Behavior and Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Y. H.

    2013-01-01

    The U.S. power industry is undertaking several initiatives that will improve the operations of the power grid. One of those is the implementation of 'wide area measurements' using phasor measurement units (PMUs) to dynamically monitor the operations and the status of the network and provide advanced situational awareness and stability assessment. This project seeks to obtain PMU data from wind power plants and grid reference points and develop software tools to analyze and visualize synchrophasor data for the purpose of better understanding wind power plant dynamic behaviors under normal and contingency conditions.

  2. Taguchi Experimental Design for Cleaning PWAs with Ball Grid Arrays

    NASA Technical Reports Server (NTRS)

    Bonner, J. K.; Mehta, A.; Walton, S.

    1997-01-01

    Ball grid arrays (BGAs), and other area array packages, are becoming more prominent as a way to increase component pin count while avoiding the manufacturing difficulties inherent in processing quad flat packs (QFPs)...Cleaning printed wiring assemblies (PWAs) with BGA components mounted on the surface is problematic...Currently, a low flash point semi-aqueous material, in conjunction with a batch cleaning unit, is being used to clean PWAs. The approach taken at JPL was to investigate the use of (1) semi-aqueous materials having a high flash point and (2) aqueous cleaning involving a saponifier.

  3. Crystallization of SHARPIN using an automated two-dimensional grid screen for optimization.

    PubMed

    Stieglitz, Benjamin; Rittinger, Katrin; Haire, Lesley F

    2012-07-01

    An N-terminal fragment of human SHARPIN was recombinantly expressed in Escherichia coli, purified and crystallized. Crystals suitable for X-ray diffraction were obtained by a one-step optimization of seed dilution and protein concentration using a two-dimensional grid screen. The crystals belonged to the primitive tetragonal space group P4(3)2(1)2, with unit-cell parameters a = b = 61.55, c = 222.81 Å. Complete data sets were collected from native and selenomethionine-substituted protein crystals at 100 K to 2.6 and 2.0 Å resolution, respectively.

  4. Understanding Cognitive and Collaborative Work: Observations in an Electric Transmission Operations Control Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Obradovich, Jodi H.

    2011-09-30

    This paper describes research that is part of an ongoing project to design tools to assist in the integration of renewable energy into the electric grid. These tools will support control room dispatchers in real-time system operations of the electric power transmission system which serves much of the Western United States. Field observations comprise the first phase of this research in which 15 operators have been observed over various shifts and times of day for approximately 90 hours. Findings describing some of the cognitive and environmental challenges of managing the dynamically changing electric grid are presented.

  5. Evaluation of a binary optimization approach to find the optimum locations of energy storage devices in a power grid with stochastically varying loads and wind generation

    NASA Astrophysics Data System (ADS)

    Dar, Zamiyad

    The prices in the electricity market change every five minutes. The prices in peak demand hours can be four or five times more than the prices in normal off peak hours. Renewable energy such as wind power has zero marginal cost and a large percentage of wind energy in a power grid can reduce the price significantly. The variability of wind power prevents it from being constantly available in peak hours. The price differentials between off-peak and on-peak hours due to wind power variations provide an opportunity for a storage device owner to buy energy at a low price and sell it in high price hours. In a large and complex power grid, there are many locations for installation of a storage device. Storage device owners prefer to install their device at locations that allow them to maximize profit. Market participants do not possess much information about the system operator's dispatch, power grid, competing generators and transmission system. The publicly available data from the system operator usually consists of Locational Marginal Prices (LMP), load, reserve prices and regulation prices. In this thesis, we develop a method to find the optimum location of a storage device without using the grid, transmission or generator data. We formulate and solve an optimization problem to find the most profitable location for a storage device using only the publicly available market pricing data such as LMPs, and reserve prices. We consider constraints arising due to storage device operation limitations in our objective function. We use binary optimization and branch and bound method to optimize the operation of a storage device at a given location to earn maximum profit. We use two different versions of our method and optimize the profitability of a storage unit at each location in a 36 bus model of north eastern United States and south eastern Canada for four representative days representing four seasons in a year. Finally, we compare our results from the two versions of our method with a multi period stochastically optimized economic dispatch of the same power system with storage device at locations proposed by our method. We observe a small gap in profit values arising due to the effect of storage device on market prices. However, we observe that the ranking of different locations in terms of profitability remains almost unchanged. This leads us to conclude that our method can successfully predict the optimum locations for installation of storage units in a complex grid using only the publicly available electricity market data.
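
    As a rough illustration of the kind of optimization described above, the sketch below enumerates hourly charge/idle/discharge decisions for a small storage unit against a short series of invented LMPs and keeps the most profitable feasible schedule. It is a brute-force stand-in for the binary branch-and-bound formulation used in the thesis; the prices, capacity, rate, and efficiency are assumptions.

      # Toy arbitrage scheduler for a storage unit at one bus, driven only by a
      # published LMP series (all numbers below are invented). The thesis uses
      # binary optimization with branch and bound; this sketch simply enumerates
      # charge/idle/discharge decisions over a short horizon.
      from itertools import product

      lmp = [22.0, 18.0, 25.0, 60.0, 95.0, 80.0, 35.0, 20.0]  # hypothetical hourly LMPs, $/MWh
      e_max, p_rate, eff = 4.0, 1.0, 0.9                      # MWh capacity, MW rate, round-trip efficiency

      best_profit, best_plan = float("-inf"), None
      for plan in product((-1, 0, 1), repeat=len(lmp)):       # -1 charge, 0 idle, +1 discharge
          soc, profit, feasible = 0.0, 0.0, True
          for action, price in zip(plan, lmp):
              if action == -1:                                # buy energy from the grid
                  if soc + p_rate > e_max:
                      feasible = False
                      break
                  soc += p_rate
                  profit -= price * p_rate
              elif action == 1:                               # sell stored energy back to the grid
                  if soc - p_rate < 0.0:
                      feasible = False
                      break
                  soc -= p_rate
                  profit += price * p_rate * eff
          if feasible and profit > best_profit:
              best_profit, best_plan = profit, plan

      print(f"best profit: ${best_profit:.2f}, schedule: {best_plan}")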

  6. mizuRoute version 1: A river network routing tool for a continental domain water resources applications

    USGS Publications Warehouse

    Mizukami, Naoki; Clark, Martyn P.; Sampson, Kevin; Nijssen, Bart; Mao, Yixin; McMillan, Hilary; Viger, Roland; Markstrom, Steven; Hay, Lauren E.; Woods, Ross; Arnold, Jeffrey R.; Brekke, Levi D.

    2016-01-01

    This paper describes the first version of a stand-alone runoff routing tool, mizuRoute. The mizuRoute tool post-processes runoff outputs from any distributed hydrologic model or land surface model to produce spatially distributed streamflow at various spatial scales from headwater basins to continental-wide river systems. The tool can utilize both traditional grid-based river network and vector-based river network data. Both types of river network include river segment lines and the associated drainage basin polygons, but the vector-based river network can represent finer-scale river lines than the grid-based network. Streamflow estimates at any desired location in the river network can be easily extracted from the output of mizuRoute. The routing process is simulated as two separate steps. First, hillslope routing is performed with a gamma-distribution-based unit-hydrograph to transport runoff from a hillslope to a catchment outlet. The second step is river channel routing, which is performed with one of two routing scheme options: (1) a kinematic wave tracking (KWT) routing procedure; and (2) an impulse response function – unit-hydrograph (IRF-UH) routing procedure. The mizuRoute tool also includes scripts (python, NetCDF operators) to pre-process spatial river network data. This paper demonstrates mizuRoute's capabilities to produce spatially distributed streamflow simulations based on river networks from the United States Geological Survey (USGS) Geospatial Fabric (GF) data set in which over 54 000 river segments and their contributing areas are mapped across the contiguous United States (CONUS). A brief analysis of model parameter sensitivity is also provided. The mizuRoute tool can assist model-based water resources assessments including studies of the impacts of climate change on streamflow.
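
    The hillslope routing step described above convolves runoff with a gamma-distribution-based unit hydrograph. The sketch below shows that idea with invented runoff values and gamma parameters; it is not mizuRoute code and does not reproduce its numerics.

      # Sketch of gamma-distribution unit-hydrograph hillslope routing, in the
      # spirit of mizuRoute's first routing step (shape/scale values are made up).
      import numpy as np
      from math import gamma

      def gamma_uh(shape, scale, n_steps, dt=1.0):
          """Discrete unit hydrograph from a gamma impulse response function."""
          t = (np.arange(n_steps) + 0.5) * dt
          pdf = t ** (shape - 1.0) * np.exp(-t / scale) / (gamma(shape) * scale ** shape)
          return pdf * dt / np.sum(pdf * dt)   # normalise so the UH sums to one

      runoff = np.array([0.0, 5.0, 12.0, 3.0, 0.0, 0.0, 0.0, 0.0])  # mm per time step (hypothetical)
      uh = gamma_uh(shape=2.5, scale=1.0, n_steps=8)

      # Convolve hillslope runoff with the unit hydrograph to get flow at the catchment outlet.
      outlet_flow = np.convolve(runoff, uh)[: len(runoff)]
      print(np.round(outlet_flow, 3))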

  7. Options for pricing ancillary services in a deregulated power system

    NASA Astrophysics Data System (ADS)

    Yamin, Hatim Yahya

    2001-07-01

    GENCOs in restructured systems are compensated for selling energy in the market. In a restructured market, a mechanism is required to entice participants in the market to provide ancillary services and to ensure adequate compensation that would guarantee its economic viability. The ISO controls the dispatch of generation, manages the reliability of the transmission grid, provides open access to the transmission, buys and provides ancillary services as required, coordinates day-ahead and hour-ahead schedules, performs real-time balancing of load and generation, and settles real-time imbalances and ancillary services sales and purchases. The ISO also administers congestion management protocols for the transmission grid. Since the ISO does not own any generating units, it must ensure that there are enough reserves for maintaining reliability according to FERC regulations, and sufficient unloaded generating capacity for balancing services in a real-time market. The ISO could meet these requirements by creating a competitive market for ancillary services, which are metered and remain unbundled to provide accurate compensation for each supplier and cost to each consumer. In this study, we give an overview of restructuring and ancillary services in a restructured power marketplace. Also, we discuss the effect of GENCOs' actions in the competitive energy and ancillary service markets. In addition, we propose an auction market design for hedging ancillary service costs in the California market. Furthermore, we show how to include the n-1 and voltage contingencies in security constrained unit commitment. Finally, we present two approaches for GENCOs' unit commitment in a restructured power market; one is based on game theory and the other is based on market price forecasting. In each of the two GENCOs' unit commitment approaches, we discuss the GENCOs' optimal bidding strategies in energy and ancillary service markets to maximize the GENCOs' profit.

  8. Very Large-Scale Multiuser Detection (VLSMUD)

    DTIC Science & Technology

    2006-09-01

    [Extraction fragment from the DTIC report documentation page (reference list and SF 298 form fields); no abstract is available. Recoverable metadata: author H. Vincent Poor; performing organization Princeton; Work Unit Manager David Hench; Technical Advisor Warren H. Debany, Jr., Information Grid Division.]

  9. SLGRID: spectral synthesis software in the grid

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez, S.; Verdes-Montenegro, L.

    2011-11-01

    SLGRID (http://www.e-ciencia.es/wiki/index.php/Slgrid) is a pilot project proposed by the e-Science Initiative of Andalusia (eCA) and supported by the Spanish e-Science Network in the frame of the European Grid Initiative (EGI). The aim of the project was to adapt the spectral synthesis software Starlight (Cid-Fernandes et al. 2005) to the Grid infrastructure. Starlight is used to estimate the underlying stellar populations (their ages and metallicities) using an optical spectrum; hence, it is possible to obtain a clean nebular spectrum that can be used to diagnose the presence of an Active Galactic Nucleus (Sabater et al. 2008, 2009). The typically serial execution of the code for big samples of galaxies made it ideal to be integrated into the Grid. We obtain an improvement in the computational time of order N, where N is the number of nodes available in the Grid. In a real case we obtained our results in 3 hours with SLGRID instead of the 60 days spent using Starlight on a PC. The code has already been ported to the Grid. The first tests were made within the e-CA infrastructure and, later, it was tested and improved with the collaboration of the CETA-CIEMAT. The SLGRID project has recently been renewed. In the future, it is planned to adapt the code for the reduction of data from Integral Field Units, where each dataset is composed of hundreds of spectra. Electronic version of the poster at http://www.iaa.es/~jsm/SEA2010

  10. Uncertainty in coal property valuation in West Virginia: A case study

    USGS Publications Warehouse

    Hohn, M.E.; McDowell, R.R.

    2001-01-01

    Interpolated grids of coal bed thickness are being considered for use in a proposed method for taxation of coal in the state of West Virginia (United States). To assess the origin and magnitude of possible inaccuracies in calculated coal tonnage, we used conditional simulation to generate equiprobable realizations of net coal thickness for two coals on a 7 1/2 min topographic quadrangle, and a third coal in a second quadrangle. Coals differed in average thickness and proportion of original coal that had been removed by erosion; all three coals crop out in the study area. Coal tonnage was calculated for each realization and for each interpolated grid for actual and artificial property parcels, and differences were summarized as graphs of percent difference between tonnage calculated from the grid and average tonnage from simulations. Coal in individual parcels was considered minable for valuation purposes if average thickness in each parcel exceeded 30 inches. Results of this study show that over 75% of the parcels are classified correctly as minable or unminable based on interpolation grids of coal bed thickness. Although between 80 and 90% of the tonnages differ by less than 20% between interpolated values and simulated values, a nonlinear conditional bias might exist in estimation of coal tonnage from interpolated thickness, such that tonnage is underestimated where coal is thin, and overestimated where coal is thick. The largest percent differences occur for parcels that are small in area, although because of the small quantities of coal in question, bias is small on an absolute scale for these parcels. For a given parcel size, maximum apparent overestimation of coal tonnage occurs in parcels with an average coal bed thickness near the minable cutoff of 30 in. Conditional bias in tonnage for parcels having a coal thickness exceeding the cutoff by 10 in. or more is constant for two of the three coals studied, and increases slightly with average thickness for the third coal. © 2001 International Association for Mathematical Geology.

  11. On the uncertainties associated with using gridded rainfall data as a proxy for observed

    NASA Astrophysics Data System (ADS)

    Tozer, C. R.; Kiem, A. S.; Verdon-Kidd, D. C.

    2011-09-01

    Gridded rainfall datasets are used in many hydrological and climatological studies, in Australia and elsewhere, including for hydroclimatic forecasting, climate attribution studies and climate model performance assessments. The attraction of the spatial coverage provided by gridded data is clear, particularly in Australia where the spatial and temporal resolution of the rainfall gauge network is sparse. However, the question that must be asked is whether it is suitable to use gridded data as a proxy for observed point data, given that gridded data is inherently "smoothed" and may not necessarily capture the temporal and spatial variability of Australian rainfall which leads to hydroclimatic extremes (i.e. droughts, floods)? This study investigates this question through a statistical analysis of three monthly gridded Australian rainfall datasets - the Bureau of Meteorology (BOM) dataset, the Australian Water Availability Project (AWAP) and the SILO dataset. To demonstrate the hydrological implications of using gridded data as a proxy for gauged data, a rainfall-runoff model is applied to one catchment in South Australia (SA) initially using gridded data as the source of rainfall input and then gauged rainfall data. The results indicate a markedly different runoff response associated with each of the different sources of rainfall data. It should be noted that this study does not seek to identify which gridded dataset is the "best" for Australia, as each gridded data source has its pros and cons, as does gauged or point data. Rather the intention is to quantify differences between various gridded data sources and how they compare with gauged data so that these differences can be considered and accounted for in studies that utilise these gridded datasets. Ultimately, if key decisions are going to be based on the outputs of models that use gridded data, an estimate (or at least an understanding) of the uncertainties relating to the assumptions made in the development of gridded data and how that gridded data compares with reality should be made.

  12. Intelligent Operation and Maintenance of Micro-grid Technology and System Development

    NASA Astrophysics Data System (ADS)

    Fu, Ming; Song, Jinyan; Zhao, Jingtao; Du, Jian

    2018-01-01

    To achieve intelligent operation and management of micro-grids, a micro-grid operation and maintenance knowledge base is studied. Based on advanced Petri net theory, a fault diagnosis model of the micro-grid is established, and an intelligent diagnosis and analysis method for micro-grid faults is put forward. On this basis, the functional composition and architecture of the intelligent operation and maintenance system for the micro-grid are studied, and the micro-grid fault diagnosis function is introduced in detail. Finally, the system is deployed on the micro-grid of a park, and micro-grid fault diagnosis and analysis are carried out based on the micro-grid's operation. The system's operation and maintenance interface is displayed, which verifies the correctness and reliability of the system.

  13. Stationary flywheel energy storage systems

    NASA Astrophysics Data System (ADS)

    Gilhaus, A.; Hau, E.; Gassner, G.; Huss, G.; Schauberger, H.

    1982-07-01

    A study was conducted to identify industrial applications of stationary flywheel energy accumulators. The economic value for the consumer and the effects on the power supply grid were investigated. A possibility for energy storage by flywheels exists wherever energy that would otherwise be lost can be used effectively, as in brake energy storage in vehicles. The future use of flywheels in wind power plants also seems promising. Attractive energy savings can be obtained by introducing modern flywheel technology into emergency power supply units, which are employed, for instance, in telecommunication systems.

  14. Imputing historical statistics, soils information, and other land-use data to crop area

    NASA Technical Reports Server (NTRS)

    Perry, C. R., Jr.; Willis, R. W.; Lautenschlager, L.

    1982-01-01

    In foreign crop condition monitoring, satellite-acquired imagery is routinely used. To facilitate interpretation of this imagery, it is advantageous to have estimates of the crop types and their extent for small area units, i.e., grid cells on a map that represent, at 60 degrees latitude, an area of nominally 25 by 25 nautical miles. The feasibility of imputing historical crop statistics, soils information, and other ancillary data to crop area for a province in Argentina is studied.

  15. Compact CFB: The next generation CFB boiler

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utt, J.

    1996-12-31

    The next generation of compact circulating fluidized bed (CFB) boilers is described in outline form. The following topics are discussed: compact CFB = Pyroflow + compact separator; compact CFB; compact separator is a breakthrough design; advantages of CFB; new design with substantial development history; KUHMO: successful demo unit; KUHMO: good performance over load range with low emissions; KOKKOLA: first commercial unit and emissions; compact CFB installations; next generation CFB boiler; grid nozzle upgrades; cast segmented vortex finders; vortex finder installation; ceramic anchors; pre-cast vertical bullnose; refractory upgrades; and wet gunning.

  16. Importance of Grid Center Arrangement

    NASA Astrophysics Data System (ADS)

    Pasaogullari, O.; Usul, N.

    2012-12-01

    In Digital Elevation Modeling, grid size is accepted to be the most important parameter. Regardless of the point density and/or scale of the source data, it is freely decided by the user. Most of the time, the arrangement of the grid centers is ignored, and most GIS packages even omit the choice of grid center coordinates. In our study, the importance of the arrangement of grid centers is investigated. Using the analogy between a raster grid DEM and a bitmap image, the importance of the placement of grid centers in DEMs is measured. The study has been conducted on four different grid DEMs obtained from a half ellipsoid. These grid DEMs are obtained in such a way that they are half a grid size apart from each other. The resulting grid DEMs are investigated through similarity measures. Image processing scientists use different measures to investigate the dis/similarity between images and the amount of different information they carry. The grid DEMs are projected to a finer grid in order to share common centers. Similarity measures are then applied to each pair of grid DEMs. These similarity measures are adapted to DEMs with band reduction and real-number operations. One of the measures yields a function graph and the others yield measure matrices. Application of the similarity measures to six grid DEM pairs shows interesting results. Although these four grid DEMs are created with the same method for the same area, 13 out of 14 measures state that the grid DEMs constructed half a grid size apart are different from each other. The results indicate that although the grid DEMs carry mutual information, they also have additional individual information. In other words, grid DEMs constructed half a grid size apart contain non-redundant information. (Poster figure: joint probability distribution function graphs.)
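
    One similarity measure commonly used for comparing co-registered images, and hence applicable to the grid DEM pairs discussed above, is mutual information. The sketch below computes it for two synthetic surfaces; the half-ellipsoid stand-in and the one-cell offset are assumptions, not the study's actual grids or its full set of 14 measures.

      # Mutual information between two co-registered grid DEMs, one of the kinds
      # of image-similarity measures the poster applies (synthetic surfaces, not
      # the authors' half-ellipsoid test data).
      import numpy as np

      def mutual_information(dem_a, dem_b, bins=32):
          joint, _, _ = np.histogram2d(dem_a.ravel(), dem_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

      x, y = np.meshgrid(np.linspace(-1, 1, 200), np.linspace(-1, 1, 200))
      dem1 = np.sqrt(np.clip(1 - x**2 - y**2, 0, None))   # half-ellipsoid-like surface
      dem2 = np.roll(dem1, 1, axis=0)                     # same surface shifted by one cell row
                                                          # (a crude stand-in for the offset grids)
      print(f"MI(dem1, dem1) = {mutual_information(dem1, dem1):.3f}")
      print(f"MI(dem1, dem2) = {mutual_information(dem1, dem2):.3f}")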

  17. NREL Partners With General Electric, Duke Energy on Grid Voltage Regulation

    Science.gov Websites

    When a large solar photovoltaic (PV) system is connected to the electric grid, a utility's...

  18. Impedance-Based Stability Analysis in Grid Interconnection Impact Study Owing to the Increased Adoption of Converter-Interfaced Generators

    DOE PAGES

    Cho, Youngho; Hur, Kyeon; Kang, Yong; ...

    2017-09-08

    This study investigates the emerging harmonic stability concerns to be addressed by grid planners in generation interconnection studies, owing to the increased adoption of renewable energy resources connected to the grid via power electronic converters. The wideband and high-frequency electromagnetic transient (EMT) characteristics of these converter-interfaced generators (CIGs) and their interaction with the grid impedance are not accurately captured in the typical dynamic studies conducted by grid planners. This paper thus identifies the desired components to be studied and subsequently develops a practical process for integrating a new CIG into a grid with the existing CIGs. The steps of this process are as follows: the impedance equation of a CIG using its control dynamics and an interface filter to the grid, for example, an LCL filter (inductor-capacitor-inductor type), is developed; an equivalent impedance model including the existing CIGs nearby and the grid observed from the point of common coupling is derived; the system stability for credible operating scenarios is assessed. Detailed EMT simulations validate the accuracy of the impedance models and stability assessment for various connection scenarios. Here, by complementing the conventional EMT simulation studies, the proposed analytical approach enables grid planners to identify critical design parameters for seamlessly integrating a new CIG and ensuring the reliability of the grid.

  19. On the uncertainties associated with using gridded rainfall data as a proxy for observed

    NASA Astrophysics Data System (ADS)

    Tozer, C. R.; Kiem, A. S.; Verdon-Kidd, D. C.

    2012-05-01

    Gridded rainfall datasets are used in many hydrological and climatological studies, in Australia and elsewhere, including for hydroclimatic forecasting, climate attribution studies and climate model performance assessments. The attraction of the spatial coverage provided by gridded data is clear, particularly in Australia where the spatial and temporal resolution of the rainfall gauge network is sparse. However, the question that must be asked is whether it is suitable to use gridded data as a proxy for observed point data, given that gridded data is inherently "smoothed" and may not necessarily capture the temporal and spatial variability of Australian rainfall which leads to hydroclimatic extremes (i.e. droughts, floods). This study investigates this question through a statistical analysis of three monthly gridded Australian rainfall datasets - the Bureau of Meteorology (BOM) dataset, the Australian Water Availability Project (AWAP) and the SILO dataset. The results of the monthly, seasonal and annual comparisons show that not only are the three gridded datasets different relative to each other, there are also marked differences between the gridded rainfall data and the rainfall observed at gauges within the corresponding grids - particularly for extremely wet or extremely dry conditions. Also important is that the differences observed appear to be non-systematic. To demonstrate the hydrological implications of using gridded data as a proxy for gauged data, a rainfall-runoff model is applied to one catchment in South Australia initially using gauged data as the source of rainfall input and then gridded rainfall data. The results indicate a markedly different runoff response associated with each of the different sources of rainfall data. It should be noted that this study does not seek to identify which gridded dataset is the "best" for Australia, as each gridded data source has its pros and cons, as does gauged data. Rather, the intention is to quantify differences between various gridded data sources and how they compare with gauged data so that these differences can be considered and accounted for in studies that utilise these gridded datasets. Ultimately, if key decisions are going to be based on the outputs of models that use gridded data, an estimate (or at least an understanding) of the uncertainties relating to the assumptions made in the development of gridded data and how that gridded data compares with reality should be made.

  20. How well do terrestrial biosphere models simulate coarse-scale runoff in the contiguous United States?

    DOE PAGES

    Schwalm, C.; Huntzinger, Deborah N.; Cook, Robert B.; ...

    2015-03-11

    Significant changes in the water cycle are expected under current global environmental change. Robust assessment of present-day water cycle dynamics at continental to global scales is confounded by shortcomings in the observed record. Modeled assessments also yield conflicting results, which are linked to differences in model structure and simulation protocol. Here we compare simulated gridded (1° spatial resolution) runoff from six terrestrial biosphere models (TBMs), seven reanalysis products, and one gridded surface station product in the contiguous United States (CONUS) from 2001 to 2005. We evaluate the consistency of these 14 estimates with stream gauge data, both as depleted flow and corrected for net withdrawals (2005 only), at the CONUS and water resource region scale, as well as examining similarity across TBMs and reanalysis products at the grid cell scale. Mean runoff across all simulated products and regions varies widely (range: 71 to 356 mm yr-1) relative to observed continental-scale runoff (209 or 280 mm yr-1 when corrected for net withdrawals). Across all 14 products, 8 exhibit Nash-Sutcliffe efficiency values in excess of 0.8 and three are within 10% of the observed value. Region-level mismatch exhibits a weak pattern of overestimation in western and underestimation in eastern regions, although two products are systematically biased across all regions, and largely scales with water use. Although gridded composite TBM and reanalysis runoff show some regional similarities, individual product values are highly variable. At the coarse scales used here we find that progress in better constraining simulated runoff requires standardized forcing data and the explicit incorporation of human effects (e.g., water withdrawals by source, fire, and land use change). © 2015 Elsevier B.V. All rights reserved.
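
    The comparison above scores products with the Nash-Sutcliffe efficiency. The sketch below shows the standard NSE formula applied to invented regional runoff values; it is only a reminder of how the metric is computed, not the study's evaluation code.

      # Standard Nash-Sutcliffe efficiency, the skill score used to rate the
      # runoff products against gauge-based observations (values are invented).
      import numpy as np

      def nse(simulated, observed):
          simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
          return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

      obs = [310.0, 280.0, 150.0, 95.0, 220.0]   # regional runoff, mm/yr (hypothetical)
      sim = [295.0, 300.0, 170.0, 80.0, 200.0]
      print(f"NSE = {nse(sim, obs):.3f}")        # 1.0 is a perfect match; < 0 is worse than the mean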

  1. Why is China’s wind power generation not living up to its potential?

    NASA Astrophysics Data System (ADS)

    Huenteler, Joern; Tang, Tian; Chan, Gabriel; Diaz Anadon, Laura

    2018-04-01

    Following a decade of unprecedented investment, China now has the world’s largest installed base of wind power capacity. Yet, despite siting most wind farms in the wind-rich Northern and Western provinces, electricity generation from Chinese wind farms has not reached the performance benchmarks of the United States and many other advanced economies. This has resulted in lower environmental, economic, and health benefits than anticipated. We develop a framework to explain the performance of the Chinese and US wind sectors, accounting for a comprehensive set of driving factors. We apply this framework to a novel dataset of virtually all wind farms installed in China and the United States through the end of 2013. We first estimate the wind sector’s technical potential using a methodology that produces consistent estimates for both countries. We compare this potential to actual performance and find that Chinese wind farms generated electricity at 37%–45% of their annual technical potential during 2006–2013 compared to 54%–61% in the United States. Our findings underscore that the larger gap between actual performance and technical potential in China compared to the United States is significantly driven by delays in grid connection (14% of the gap) and curtailment due to constraints in grid management (10% of the gap), two challenges of China’s wind power expansion covered extensively in the literature. However, our findings show that China’s underperformance is also driven by suboptimal turbine model selection (31% of the gap), wind farm siting (23% of the gap), and turbine hub heights (6% of the gap)—factors that have received less attention in the literature and, crucially, are locked-in for the lifetime of wind farms. This suggests that besides addressing grid connection delays and curtailment, China will also need policy measures to address turbine siting and technology choices to achieve its national goals and increase utilization up to US levels.

  2. Computational investigations and grid refinement study of 3D transient flow in a cylindrical tank using OpenFOAM

    NASA Astrophysics Data System (ADS)

    Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.

    2016-10-01

    The fluid physics of a liquid draining inside a tank is easily studied using numerical simulation. However, numerical simulation becomes expensive when the draining involves multi-phase flow. Since an accurate numerical simulation can be obtained only if a proper method of error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox that is well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained in this study, showing that the grid convergence error is progressively reduced. The fine grid has a GCI value below 1%. The value extrapolated by Richardson extrapolation lies within the range indicated by the GCI.
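
    The Grid Convergence Index procedure referenced above follows the standard Roache formulation: estimate the observed order of convergence from three systematically refined grids, Richardson-extrapolate the solution, and report a safety-factored relative error. The sketch below implements those formulas on invented drain-time values; the refinement ratio and safety factor are the usual defaults, assumed here.

      # Standard (Roache) Grid Convergence Index for three systematically refined
      # grids; the three solution values are invented, not the paper's results.
      import math

      def gci(f_fine, f_medium, f_coarse, r=2.0, safety=1.25):
          p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)  # observed order
          f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)                  # Richardson extrapolation
          e21 = abs((f_medium - f_fine) / f_fine)                                  # relative error, fine vs medium
          gci_fine = safety * e21 / (r ** p - 1.0)
          return p, f_exact, gci_fine

      p, f_exact, gci_fine = gci(f_fine=10.02, f_medium=10.10, f_coarse=10.42)
      print(f"observed order p = {p:.2f}")
      print(f"Richardson-extrapolated value = {f_exact:.3f}")
      print(f"GCI (fine grid) = {100 * gci_fine:.2f} %")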

  3. Solar activity and economic fundamentals: Evidence from 12 geographically disparate power grids

    NASA Astrophysics Data System (ADS)

    Forbes, Kevin F.; St. Cyr, O. C.

    2008-10-01

    This study uses local (ground-based) magnetometer data as a proxy for geomagnetically induced currents (GICs) to address whether there is a space weather/electricity market relationship in 12 geographically disparate power grids: Eirgrid, the power grid that serves the Republic of Ireland; Scottish and Southern Electricity, the power grid that served northern Scotland until April 2005; Scottish Power, the power grid that served southern Scotland until April 2005; the power grid that serves the Czech Republic; E.ON Netz, the transmission system operator in central Germany; the power grid in England and Wales; the power grid in New Zealand; the power grid that serves the vast proportion of the population in Australia; ISO New England, the power grid that serves New England; PJM, a power grid that over the sample period served all or parts of Delaware, Maryland, New Jersey, Ohio, Pennsylvania, Virginia, West Virginia, and the District of Columbia; NYISO, the power grid that serves New York State; and the power grid in the Netherlands. This study tests the hypothesis that GIC levels (proxied by the time variation of local magnetic field measurements (dH/dt)) and electricity grid conditions are related using Pearson's chi-squared statistic. The metrics of power grid conditions include measures of electricity market imbalances, energy losses, congestion costs, and actions by system operators to restore grid stability. The results of the analysis indicate that real-time market conditions in these power grids are statistically related with the GIC proxy.
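
    The hypothesis test described above can be reproduced in miniature with a contingency table of hours classified by the GIC proxy (high vs. low dH/dt) and by whether a grid stress event occurred. The counts below are invented; only the test procedure mirrors the study.

      # Pearson chi-squared test of association between a GIC proxy (high vs. low
      # dH/dt) and grid stress events, on an invented 2x2 table of hourly counts.
      from scipy.stats import chi2_contingency

      table = [[90, 410],      # high dH/dt hours: [grid event, no event]
               [310, 7190]]    # low dH/dt hours:  [grid event, no event]

      chi2, p_value, dof, expected = chi2_contingency(table)
      print(f"chi-squared = {chi2:.1f}, dof = {dof}, p-value = {p_value:.2e}")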

  4. Algebraic grid generation for coolant passages of turbine blades with serpentine channels and pin fins

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Roelke, R. J.; Steinthorsson, E.

    1991-01-01

    In order to study numerically details of the flow and heat transfer within coolant passages of turbine blades, a method must first be developed to generate grid systems within the very complicated geometries involved. In this study, a grid generation package was developed that is capable of generating the required grid systems. The package developed is based on an algebraic grid generation technique that permits the user considerable control over how grid points are to be distributed in a very explicit way. These controls include orthogonality of grid lines next to boundary surfaces and ability to cluster about arbitrary points, lines, and surfaces. This paper describes that grid generation package and shows how it can be used to generate grid systems within complicated-shaped coolant passages via an example.

  5. Worldwide Consortium for the Grid (W2COG) Research Initiative Phase 1

    DTIC Science & Technology

    2006-03-31

    [Extraction fragment of tables and lists from the report; the only recoverable statement is that the effort produced a number of successful and well documented netcentric pilots whose aggregate value far exceeds the investment, proving the hypothesis.]

  6. Development of Low Cost, High Energy-Per-Unit-Area Solar Cell Modules

    NASA Technical Reports Server (NTRS)

    Jones, G. T.; Chitre, S.

    1977-01-01

    Work on the development of low cost, high energy per unit area solar cell modules was conducted. Hexagonal solar cell and module efficiencies, module packing ratio, and solar cell design calculations were made. The cell grid structure and interconnection pattern was designed and the module substrates were fabricated for the three modules to be used. It was demonstrated that surface macrostructures significantly improve cell power output and photovoltaic energy conversion efficiency.

  7. Study of Swept Angle Effects on Grid Fins Aerodynamics Performance

    NASA Astrophysics Data System (ADS)

    Faza, G. A.; Fadillah, H.; Silitonga, F. Y.; Agoes Moelyadi, Mochamad

    2018-04-01

    A grid fin is an aerodynamic control surface usually used on missiles and rockets. In recent years, much research has been conducted to develop more efficient grid fins. Many geometric combinations could be explored to improve the aerodynamic characteristics of a grid fin. This paper discusses only the aerodynamic characteristics of grid fins compared with grid fins of different swept angles. The methodology used to compare the aerodynamics is computational fluid dynamics (CFD). The results of this paper may be used in future studies to answer our former question or as a reference for related studies.

  8. An islanding detection methodology combining decision trees and Sandia frequency shift for inverter-based distributed generations

    DOE PAGES

    Azim, Riyasat; Li, Fangxing; Xue, Yaosuo; ...

    2017-07-14

    Distributed generations (DGs) for grid-connected applications require an accurate and reliable islanding detection methodology (IDM) for secure system operation. This paper presents an IDM for grid-connected inverter-based DGs. The proposed method is a combination of passive and active islanding detection techniques for aggregation of their advantages and elimination/minimisation of the drawbacks. In the proposed IDM, the passive method utilises critical system attributes extracted from local voltage measurements at target DG locations as well as employs decision tree-based classifiers for characterisation and detection of islanding events. The active method is based on Sandia frequency shift technique and is initiated only when the passive method is unable to differentiate islanding events from other system events. Thus, the power quality degradation introduced into the system by active islanding detection techniques can be minimised. Furthermore, a combination of active and passive techniques allows detection of islanding events under low power mismatch scenarios eliminating the disadvantage associated with the use of passive techniques alone. Finally, detailed case study results demonstrate the effectiveness of the proposed method in detection of islanding events under various power mismatch scenarios, load quality factors and in the presence of single or multiple grid-connected inverter-based DG units.
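
    A minimal sketch of the passive stage described above: a decision-tree classifier trained on features extracted from local voltage measurements. The feature set, thresholds, and training data below are synthetic placeholders, not the attributes or datasets used in the paper.

      # Decision-tree classification of islanding vs. other disturbances from
      # voltage-derived features. All features and samples are synthetic stand-ins.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)

      # Hypothetical features: [voltage deviation (pu), frequency deviation (Hz), THD (%)]
      islanding = rng.normal([0.08, 0.45, 4.0], [0.03, 0.15, 1.0], size=(200, 3))
      other_events = rng.normal([0.03, 0.05, 1.5], [0.02, 0.05, 0.8], size=(200, 3))

      X = np.vstack([islanding, other_events])
      y = np.array([1] * 200 + [0] * 200)     # 1 = islanding, 0 = other disturbance

      clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
      print("training accuracy:", clf.score(X, y))
      print("predicted class for a new event:", clf.predict([[0.07, 0.4, 3.5]])[0])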

  9. An islanding detection methodology combining decision trees and Sandia frequency shift for inverter-based distributed generations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azim, Riyasat; Li, Fangxing; Xue, Yaosuo

    Distributed generations (DGs) for grid-connected applications require an accurate and reliable islanding detection methodology (IDM) for secure system operation. This paper presents an IDM for grid-connected inverter-based DGs. The proposed method is a combination of passive and active islanding detection techniques for aggregation of their advantages and elimination/minimisation of the drawbacks. In the proposed IDM, the passive method utilises critical system attributes extracted from local voltage measurements at target DG locations as well as employs decision tree-based classifiers for characterisation and detection of islanding events. The active method is based on Sandia frequency shift technique and is initiated only when the passive method is unable to differentiate islanding events from other system events. Thus, the power quality degradation introduced into the system by active islanding detection techniques can be minimised. Furthermore, a combination of active and passive techniques allows detection of islanding events under low power mismatch scenarios eliminating the disadvantage associated with the use of passive techniques alone. Finally, detailed case study results demonstrate the effectiveness of the proposed method in detection of islanding events under various power mismatch scenarios, load quality factors and in the presence of single or multiple grid-connected inverter-based DG units.

  10. Security of Electronic Voting in the United States

    DOE PAGES

    King, Charity; Thompson, Michael

    2016-10-20

    In the midst of numerous high-profile cyber-attacks, the US is considering whether to categorize the US electronic voting system as “critical infrastructure”, to be protected and invested in much the same way as the US power grid or waterways.

  11. NREL’s Advanced Analytics Research for Buildings – Social Media Version

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Forty percent of the total energy consumption in the United States comes from buildings. Working together, we can dramatically shrink that number. NREL’s advanced analytics research has already proven to reduce energy use, save money, and stabilize the grid.

  12. 76 FR 53434 - Free Flow Power Corporation, Northland Power Mississippi River LLC; Notice of Competing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ...) Up to 160 TREK generating units installed in a matrix on the bottom of the river; (2) the total... each matrix power to a substation; and (4) a transmission line would interconnect with the power grid...

  13. Comparative analysis of zonal systems for macro-level crash modeling.

    PubMed

    Cai, Qing; Abdel-Aty, Mohamed; Lee, Jaeyoung; Eluru, Naveen

    2017-06-01

    Macro-level traffic safety analysis has been undertaken at different spatial configurations. However, clear guidelines for the appropriate zonal system selection for safety analysis are unavailable. In this study, a comparative analysis was conducted to determine the optimal zonal system for macroscopic crash modeling considering census tracts (CTs), state-wide traffic analysis zones (STAZs), and a newly developed traffic-related zone system labeled traffic analysis districts (TADs). Poisson lognormal models for three crash types (i.e., total, severe, and non-motorized mode crashes) are developed based on the three zonal systems without and with consideration of spatial autocorrelation. The study proposes a method to compare the modeling performance of the three types of geographic units at different spatial configurations through a grid-based framework. Specifically, the study region is partitioned into grids of various sizes, and the prediction accuracy of the various macro models is assessed within these grids. The model comparison results for all crash types indicate that the models based on TADs consistently offer better performance than the others. In addition, the models considering spatial autocorrelation outperform those that do not. Based on the modeling results and the motivation for developing the different zonal systems, it is recommended to use CTs for socio-demographic data collection, TAZs for transportation demand forecasting, and TADs for transportation safety planning. The findings from this study can help practitioners select appropriate zonal systems for traffic crash modeling, which can lead to more efficient policies to enhance transportation safety. Copyright © 2017 Elsevier Ltd and National Safety Council. All rights reserved.

  14. Electronic-type vacuum gauges with replaceable elements

    DOEpatents

    Edwards, D. Jr.

    1984-09-18

    In electronic devices for measuring pressures in vacuum systems, the metal elements which undergo thermal deterioration are made readily replaceable by making them parts of a simple plug-in unit. Thus, in ionization gauges, the filament and grid or electron collector are mounted on the novel plug-in unit. In thermocouple pressure gauges, the heater and attached thermocouple are mounted on the plug-in unit. Plug-in units have been designed to function, alternatively, as ionization gauge and as thermocouple gauge, thus providing new gauges capable of measuring broader pressure ranges than is possible with either an ionization gauge or a thermocouple gauge. 5 figs.

  15. Spatiotemporal patterns of precipitation inferred from streamflow observations across the Sierra Nevada mountain range

    NASA Astrophysics Data System (ADS)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Newman, Andrew J.; Hughes, Mimi; McGurk, Bruce; Lundquist, Jessica D.

    2018-01-01

    Given uncertainty in precipitation gauge-based gridded datasets over complex terrain, we use multiple streamflow observations as an additional source of information about precipitation, in order to identify spatial and temporal differences between a gridded precipitation dataset and precipitation inferred from streamflow. We test whether gridded datasets capture across-crest and regional spatial patterns of variability, as well as year-to-year variability and trends in precipitation, in comparison to precipitation inferred from streamflow. We use a Bayesian model calibration routine with multiple lumped hydrologic model structures to infer the most likely basin-mean, water-year total precipitation for 56 basins with long-term (>30 year) streamflow records in the Sierra Nevada mountain range of California. We compare basin-mean precipitation derived from this approach with basin-mean precipitation from a precipitation gauge-based, 1/16° gridded dataset that has been used to simulate and evaluate trends in Western United States streamflow and snowpack over the 20th century. We find that the long-term average spatial patterns differ: in particular, there is less precipitation in the gridded dataset in higher-elevation basins whose aspect faces prevailing cool-season winds, as compared to precipitation inferred from streamflow. In a few years and basins, there is less gridded precipitation than there is observed streamflow. Lower-elevation, southern, and east-of-crest basins show better agreement between gridded and inferred precipitation. Implied actual evapotranspiration (calculated as precipitation minus streamflow) then also varies between the streamflow-based estimates and the gridded dataset. Absolute uncertainty in precipitation inferred from streamflow is substantial, but the signal of basin-to-basin and year-to-year differences are likely more robust. The findings suggest that considering streamflow when spatially distributing precipitation in complex terrain may improve its representation, particularly for basins whose orientations (e.g., windward-facing) are favored for orographic precipitation enhancement.

  16. An assessment of differences in gridded precipitation datasets in complex terrain

    NASA Astrophysics Data System (ADS)

    Henn, Brian; Newman, Andrew J.; Livneh, Ben; Daly, Christopher; Lundquist, Jessica D.

    2018-01-01

    Hydrologic modeling and other geophysical applications are sensitive to precipitation forcing data quality, and there are known challenges in spatially distributing gauge-based precipitation over complex terrain. We conduct a comparison of six high-resolution, daily and monthly gridded precipitation datasets over the Western United States. We compare the long-term average spatial patterns, and interannual variability of water-year total precipitation, as well as multi-year trends in precipitation across the datasets. We find that the greatest absolute differences among datasets occur in high-elevation areas and in the maritime mountain ranges of the Western United States, while the greatest percent differences among datasets relative to annual total precipitation occur in arid and rain-shadowed areas. Differences between datasets in some high-elevation areas exceed 200 mm yr-1 on average, and relative differences range from 5 to 60% across the Western United States. In areas of high topographic relief, true uncertainties and biases are likely higher than the differences among the datasets; we present evidence of this based on streamflow observations. Precipitation trends in the datasets differ in magnitude and sign at smaller scales, and are sensitive to how temporal inhomogeneities in the underlying precipitation gauge data are handled.

  17. Grid-connected photovoltaic (PV) systems with batteries storage as solution to electrical grid outages in Burkina Faso

    NASA Astrophysics Data System (ADS)

    Abdoulaye, D.; Koalaga, Z.; Zougmore, F.

    2012-02-01

    This paper deals with a key solution to the power outage problem experienced by many African countries: grid-connected photovoltaic (PV) systems with battery storage. African grids are characterized by an insufficient power supply and frequent interruptions. Because of this, users of classical grid-connected photovoltaic systems are unable to benefit from their installations even when the sun is available. In this study, we propose a grid-connected photovoltaic system with battery storage as a solution to these problems. This photovoltaic system injects surplus electricity production into the grid and can also deliver electricity as a stand-alone system with all the protection needed. To achieve the study objectives, we first surveyed the actual situation of one African electrical grid, that of Burkina Faso (SONABEL: National Electricity Company of Burkina). Second, as a case study, we size, model, and simulate a grid-connected PV system with battery storage for the LAME laboratory at the University of Ouagadougou. The simulation shows that the proposed grid-connected system allows users to benefit from their photovoltaic installation at any time, even when the public electrical grid fails, either during the day or at night.

  18. A strategy for assessing potential future changes in climate, hydrology, and vegetation in the Western United States

    USGS Publications Warehouse

    Thompson, Robert Stephen; Hostetler, Steven W.; Bartlein, Patrick J.; Anderson, Katherine H.

    1998-01-01

    Historical and geological data indicate that significant changes can occur in the Earth's climate on time scales ranging from years to millennia. In addition to natural climatic change, climatic changes may occur in the near future due to increased concentrations of carbon dioxide and other trace gases in the atmosphere that are the result of human activities. International research efforts using atmospheric general circulation models (AGCM's) to assess potential climatic conditions under atmospheric carbon dioxide concentrations of twice the pre-industrial level (a '2 X CO2' atmosphere) conclude that climate would warm on a global basis. However, it is difficult to assess how the projected warmer climatic conditions would be distributed on a regional scale and what the effects of such warming would be on the landscape, especially for temperate mountainous regions such as the Western United States. In this report, we present a strategy to assess the regional sensitivity to global climatic change. The strategy makes use of a hierarchy of models ranging from an AGCM, to a regional climate model, to landscape-scale process models of hydrology and vegetation. A 2 X CO2 global climate simulation conducted with the National Center for Atmospheric Research (NCAR) GENESIS AGCM on a grid of approximately 4.5° of latitude by 7.5° of longitude was used to drive the NCAR regional climate model (RegCM) over the Western United States on a grid of 60 km by 60 km. The output from the RegCM is used directly (for hydrologic models) or interpolated onto a 15-km grid (for vegetation models) to quantify possible future environmental conditions on a spatial scale relevant to policy makers and land managers.

  19. Renewable Electricity Futures for the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, Trieu; Hand, Maureen; Baldwin, Sam F.

    2014-04-14

    This paper highlights the key results from the Renewable Electricity (RE) Futures Study, a detailed consideration of renewable electricity in the United States. The paper focuses on technical issues related to the operability of the U.S. electricity grid and provides initial answers to important questions about the integration of high penetrations of renewable electricity technologies from a national perspective. The results indicate that a future U.S. electricity system that is largely powered by renewable sources is possible and that further work is warranted to investigate this clean generation pathway. The central conclusion of the analysis is that renewable electricity generation from technologies that are commercially available today, in combination with a more flexible electric system, is more than adequate to supply 80% of the total U.S. electricity generation in 2050 while meeting electricity demand on an hourly basis in every region of the United States.

  20. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load and for ensuring the quality of power supply. Power grids require each power generation unit to have a satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method for calculating these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.
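
    As a rough illustration of the regression step mentioned above, the sketch below fits an assumed quadratic relation between load command and a performance index; the functional form and data are placeholders, not the model identified by the authors.

```python
# Hedged sketch: fit a nonlinear regression between a load command and an AGC
# performance index, as suggested in the abstract. The functional form and the
# synthetic data below are illustrative assumptions only.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
load_cmd = np.linspace(100.0, 600.0, 50)              # MW load commands (synthetic)
true_index = 0.8 + 0.002 * load_cmd - 1.5e-6 * load_cmd**2
index_obs = true_index + rng.normal(0.0, 0.01, load_cmd.size)

def model(p, a, b, c):
    """Assumed quadratic relation between load command p and the performance index."""
    return a + b * p + c * p**2

params, _ = curve_fit(model, load_cmd, index_obs)
print("fitted coefficients:", params)
print("predicted index at 450 MW:", model(450.0, *params))
```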

  1. Can developing countries leapfrog the centralized electrification paradigm?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levin, Todd; Thomas, Valerie M.

    Due to the rapidly decreasing costs of small renewable electricity generation systems, centralized power systems are no longer a necessary condition of universal access to modern energy services. Developing countries, where centralized electricity infrastructures are less developed, may be able to adopt these new technologies more quickly. We first review the costs of grid extension and distributed solar home systems (SHSs) as reported by a number of different studies. We then present a general analytic framework for analyzing the choice between extending the grid and implementing distributed solar home systems. Drawing upon reported grid expansion cost data for three specific regions, we demonstrate this framework by determining the electricity consumption levels at which the costs of provision through centralized and decentralized approaches are equivalent in these regions. We then calculate SHS capital costs that are necessary for these technologies to provide each of five tiers of energy access, as defined by the United Nations Sustainable Energy for All initiative. Our results suggest that solar home systems can play an important role in achieving universal access to basic energy services. The extent of this role depends on three primary factors: SHS costs, grid expansion costs, and centralized generation costs. Given current technology costs, centralized systems will still be required to enable higher levels of consumption; however, cost reduction trends have the potential to disrupt this paradigm. By looking ahead rather than replicating older infrastructure styles, developing countries can leapfrog to a more distributed electricity service model. (C) 2016 International Energy Initiative. Published by Elsevier Inc. All rights reserved.
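
    A minimal sketch of the breakeven idea in this framework: scan household consumption levels to find where the levelized cost of grid extension falls to that of a solar home system. All cost figures and financial parameters are illustrative placeholders, not the regional data used in the paper.

```python
# Hedged sketch of the grid-extension vs. solar-home-system (SHS) breakeven logic
# described in the abstract. All cost numbers are illustrative placeholders.

def annualize(capital, rate=0.08, years=20):
    """Annual payment on a capital cost via the capital recovery factor."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return capital * crf

def grid_cost_per_kwh(annual_kwh, line_km, cost_per_km=10_000.0,
                      households=100, gen_cost_per_kwh=0.10):
    """Levelized cost of grid supply: shared line extension plus central generation."""
    line_annual = annualize(line_km * cost_per_km) / households
    return line_annual / annual_kwh + gen_cost_per_kwh

def shs_cost_per_kwh(annual_kwh, capital_per_kw=3000.0, capacity_factor=0.18):
    """Levelized cost of a solar home system sized to meet the annual demand."""
    kw_needed = annual_kwh / (8760.0 * capacity_factor)
    return annualize(kw_needed * capital_per_kw) / annual_kwh

# Scan consumption levels to find roughly where the two costs cross (the breakeven level).
for kwh in range(50, 2001, 50):
    if grid_cost_per_kwh(kwh, line_km=5.0) <= shs_cost_per_kwh(kwh):
        print(f"grid extension becomes cheaper near {kwh} kWh/year per household")
        break
```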

  2. Regional climates in the GISS general circulation model: Surface air temperature

    NASA Technical Reports Server (NTRS)

    Hewitson, Bruce

    1994-01-01

    One of the more viable research techniques into global climate change for the purpose of understanding the consequent environmental impacts is based on the use of general circulation models (GCMs). However, GCMs are currently unable to reliably predict the regional climate change resulting from global warming, and it is at the regional scale that predictions are required for understanding human and environmental responses. Regional climates in the extratropics are in large part governed by the synoptic-scale circulation and the feasibility of using this interscale relationship is explored to provide a way of moving to grid cell and sub-grid cell scales in the model. The relationships between the daily circulation systems and surface air temperature for points across the continental United States are first developed in a quantitative form using a multivariate index based on principal components analysis (PCA) of the surface circulation. These relationships are then validated by predicting daily temperature using observed circulation and comparing the predicted values with the observed temperatures. The relationships predict surface temperature accurately over the major portion of the country in winter, and for half the country in summer. These relationships are then applied to the surface synoptic circulation of the Goddard Institute for Space Studies (GISS) GCM control run, and a set of surface grid cell temperatures are generated. These temperatures, based on the larger-scale validated circulation, may now be used with greater confidence at the regional scale. The generated temperatures are compared to those of the model and show that the model has regional errors of up to 10°C in individual grid cells.
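
    A hedged sketch of the statistical relationship described above: principal components of daily circulation fields are regressed against a point temperature and validated on held-out days. The pressure fields and temperatures are synthetic stand-ins for the observed and GCM data.

```python
# Hedged sketch of the downscaling approach described above: principal components
# of the synoptic-scale circulation are regressed against local surface temperature.
# The sea-level-pressure fields and temperatures below are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_days, n_gridpoints = 1000, 200
slp = rng.normal(0.0, 5.0, (n_days, n_gridpoints))          # daily circulation fields
temp = 0.3 * slp[:, 10] - 0.2 * slp[:, 50] + rng.normal(0.0, 1.0, n_days)

# Multivariate circulation index: leading principal components of the daily fields.
pca = PCA(n_components=10)
scores = pca.fit_transform(slp)

# Relate the circulation index to the point temperature, then validate on held-out days.
train, test = slice(0, 800), slice(800, None)
reg = LinearRegression().fit(scores[train], temp[train])
print("validation R^2:", reg.score(scores[test], temp[test]))
```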

  3. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Gujarat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Gujarat is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. By providing details on the higher spatial resolution model of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, this chapter provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration, compared to 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  4. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Tamil Nadu

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Tamil Nadu is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. By providing details on the higher spatial resolution model of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, this chapter provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration, compared to 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  5. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Rajasthan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Rajasthan is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. By providing details on the higher spatial resolution model of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, this chapter provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration, compared to 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  6. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Andhra Pradesh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Andhra Pradesh is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. By providing details on the higher spatial resolution model of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, this chapter provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration, compared to 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  7. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Karnataka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Karnataka is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. By providing details on the higher spatial resolution model of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, this chapter provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration, compared to 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  8. Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Regional Study: Maharashtra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cochran, Jaquelin M; Palchak, Joseph D; Ehlen, Annaliese K

    This chapter on Maharashtra is one of six state chapters included in Appendix C of 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. II - Regional Study' (the Regional Study). The objective of the state chapters is to provide modeling assumptions, results, and next steps to use and improve the model specific to each state. The model has inherent uncertainties, particularly in how the intrastate transmission network and RE generation projects will develop (e.g., locations, capacities). The model also does not include information on contracts or must-run status of particular plants for reliability purposes. By providing details on the higher spatial resolution model of the Regional Study, which better represents the impact of congestion on least-cost scheduling and dispatch, this chapter provides a deeper understanding of the relationship among renewable energy (RE) location, transmission, and system flexibility with regard to RE integration, compared to 'Greening the Grid: Pathways to Integrate 175 Gigawatts of Renewable Energy into India's Electric Grid, Vol. I - National Study.'

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdullayev, A. M.; Kulish, G. V.; Slyeptsov, O.

    The evaluation of WWER-1000 Westinghouse fuel performance was done using the results of post-irradiation examinations of six LTAs and the WFA reload batches that have operated normally in mixed cores at South-Ukraine NPP, Unit-3 and Unit-2. The data on WFA/LTA elongation, FR growth and bow, WFA bow and twist, RCCA drag force and drag work, RCCA drop time, and FR cladding integrity, as well as the visual observations of fuel assemblies obtained during the 2006-2012 outages, were utilized. The analysis of the measured data showed that assembly growth, FR bow, irradiation growth, and Zr-1%Nb grid and ZIRLO cladding corrosion lie within the design limits. The RCCA drop time measured for the LTA/WFA is about 1.9 s at BOC and practically does not change at EOC. The measured WFA bow and twist, and the data on drag work during RCCA insertion, showed that the WFA deformation in the mixed core is mostly controlled by the distortion of Russian FAs (TVSA), which have the higher lateral stiffness. The visual inspection of WFAs carried out during the 2012 outages revealed some damage to the Zr-1%Nb grid outer strap for some WFAs during the loading sequence. The fundamental investigations performed allowed the root cause of the grid outer strap deformation to be identified and WFA design modifications to be proposed for preventing damage to the SG at a 225 kg handling trip limit.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Youngho; Hur, Kyeon; Kang, Yong

    This study investigates the emerging harmonic stability concerns to be addressed by grid planners in generation interconnection studies, owing to the increased adoption of renewable energy resources connected to the grid via power electronic converters. The wideband and high-frequency electromagnetic transient (EMT) characteristics of these converter-interfaced generators (CIGs) and their interaction with the grid impedance are not accurately captured in the typical dynamic studies conducted by grid planners. This paper thus identifies the desired components to be studied and subsequently develops a practical process for integrating a new CIG into a grid with the existing CIGs. The steps of this process are as follows: the impedance equation of a CIG is developed using its control dynamics and an interface filter to the grid, for example, an LCL filter (inductor-capacitor-inductor type); an equivalent impedance model including the existing CIGs nearby and the grid observed from the point of common coupling is derived; and the system stability for credible operating scenarios is assessed. Detailed EMT simulations validate the accuracy of the impedance models and stability assessment for various connection scenarios. By complementing the conventional EMT simulation studies, the proposed analytical approach enables grid planners to identify critical design parameters for seamlessly integrating a new CIG and ensuring the reliability of the grid.
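
    A hedged sketch of the impedance-based screening idea: compute the impedance of a passive LCL interface filter over frequency, compare it with a simple inductive grid equivalent at the point of common coupling, and flag frequencies where the impedance ratio suggests a resonance risk. The component values are placeholders, and the converter control dynamics included in the paper are omitted.

```python
# Hedged sketch of an impedance-based harmonic stability screen. Only the passive
# LCL filter is modelled (the converter control dynamics discussed in the abstract
# are omitted), and all component values are illustrative placeholders.
import numpy as np

f = np.logspace(1, 4, 2000)              # 10 Hz .. 10 kHz
s = 1j * 2 * np.pi * f

# Assumed LCL filter values (converter-side L1, capacitor Cf, grid-side L2).
L1, Cf, L2 = 2e-3, 10e-6, 1e-3
z_l1, z_c, z_l2 = s * L1, 1.0 / (s * Cf), s * L2

# Impedance seen from the grid side: L2 in series with (L1 in parallel with Cf).
z_cig = z_l2 + (z_l1 * z_c) / (z_l1 + z_c)

# Simple inductive grid equivalent at the point of common coupling.
Lg = 0.5e-3
z_grid = s * Lg

# Frequencies where |Z_grid/Z_cig| is near 1 with a phase difference near 180 degrees
# flag potential resonance and stability problems.
ratio = z_grid / z_cig
risky = f[(np.abs(np.abs(ratio) - 1.0) < 0.05) &
          (np.abs(np.angle(ratio, deg=True)) > 160.0)]
print("candidate problem frequencies (Hz):", risky[:5])
```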

  11. Harmonic analysis and suppression in hybrid wind & PV solar system

    NASA Astrophysics Data System (ADS)

    Gupta, Tripti; Namekar, Swapnil

    2018-04-01

    The growing demand for electricity has led to power production through non-conventional sources of energy such as solar energy, wind energy, hydro power, and energy from biogas and biomass. A hybrid system is adopted to complement the shortcomings of either source of energy. The proposed system is a grid-connected hybrid wind and solar system. A 2.1 MW doubly fed induction generator (DFIG) is used for the analysis of the wind farm, whose rotor is connected to two back-to-back converters. A 250 kW photovoltaic (PV) array is used to analyze the solar farm, where an inverter is required to convert power from DC to AC since the electricity generated by solar PV is DC. Stability and reliability of the system are very important when the system is grid connected. Harmonics are the major power quality issue and degrade the quality of power at the load side. Harmonics in the hybrid system arise through the use of power conversion units; other causes are fluctuations in wind speed and solar irradiance. The power delivered to the grid must be free of harmonics and within the limits specified by the Indian grid codes. In the proposed work, harmonic analysis of the hybrid system is performed in the Electrical Transient Analysis Program (ETAP), and a single-tuned harmonic filter is designed to keep the utility grid harmonics within limits.
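
    A hedged sketch of the single-tuned filter sizing mentioned at the end of the abstract: the capacitor is chosen from a fundamental reactive-power target and the inductor from the tuned harmonic. The bus voltage, Mvar rating, and quality factor below are assumptions, not the values designed in the ETAP study.

```python
# Hedged sketch of single-tuned harmonic filter sizing. The bus voltage, reactive
# power rating, tuned harmonic, and quality factor are illustrative assumptions,
# not the values designed in the ETAP study described above.
import math

def single_tuned_filter(v_ll_kv, qvar_mvar, harmonic, f1_hz=50.0, q_factor=30.0):
    """Return capacitance (F), inductance (H), and series resistance (ohm)."""
    w1 = 2.0 * math.pi * f1_hz
    # Capacitive reactance chosen to approximately supply the fundamental reactive power.
    xc = (v_ll_kv * 1e3) ** 2 / (qvar_mvar * 1e6)
    c = 1.0 / (w1 * xc)
    # Inductance tunes the series branch to the target harmonic.
    l = 1.0 / ((harmonic * w1) ** 2 * c)
    # Series resistance from the chosen quality factor at the tuned frequency.
    r = (harmonic * w1 * l) / q_factor
    return c, l, r

# Example: a filter tuned to the 5th harmonic on an assumed 11 kV bus, 2 Mvar.
c, l, r = single_tuned_filter(v_ll_kv=11.0, qvar_mvar=2.0, harmonic=5)
print(f"C = {c*1e6:.1f} uF, L = {l*1e3:.2f} mH, R = {r:.2f} ohm")
```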

  12. Spatial services grid

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Li, Qi; Cheng, Jicheng

    2005-10-01

    This paper discusses the concept, key technologies, and main applications of the Spatial Services Grid. The technologies of grid computing and Web services are playing a revolutionary role in the study of spatial information services. The concept of the SSG (Spatial Services Grid) is put forward based on the SIG (Spatial Information Grid) and OGSA (Open Grid Services Architecture). Firstly, grid computing is reviewed, and the key technologies of the SIG and their main applications are surveyed. Secondly, grid computing and three kinds of SIG in the broad sense--the SDG (spatial data grid), the SIG (spatial information grid), and the SSG (spatial services grid)--and their relationships are described. Thirdly, the key technologies of the SSG are put forward. Finally, three representative applications of the SSG are discussed. The first application is an urban location-based services grid, which is a typical spatial services grid and can be constructed on OGSA and a digital city platform. The second application is a regional sustainable development grid, which is key to urban development. The third application is a regional disaster and emergency management services grid.

  13. Grid Convergence for Turbulent Flows (Invited)

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Rumsey, Christopher L.; Schwoppe, Axel

    2015-01-01

    A detailed grid convergence study has been conducted to establish accurate reference solutions corresponding to the one-equation linear eddy-viscosity Spalart-Allmaras turbulence model for two dimensional turbulent flows around the NACA 0012 airfoil and a flat plate. The study involved three widely used codes, CFL3D (NASA), FUN3D (NASA), and TAU (DLR), and families of uniformly refined structured grids that differ in the grid density patterns. Solutions computed by different codes on different grid families appear to converge to the same continuous limit, but exhibit different convergence characteristics. The grid resolution in the vicinity of geometric singularities, such as a sharp trailing edge, is found to be the major factor affecting accuracy and convergence of discrete solutions, more prominent than differences in discretization schemes and/or grid elements. The results reported for these relatively simple turbulent flows demonstrate that CFL3D, FUN3D, and TAU solutions are very accurate on the finest grids used in the study, but even those grids are not sufficient to conclusively establish an asymptotic convergence order.
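
    A hedged sketch of how an observed order of convergence and a Richardson-extrapolated continuous limit can be estimated from a quantity computed on three uniformly refined grids; the drag-coefficient values are made up and are not taken from the CFL3D/FUN3D/TAU results.

```python
# Hedged sketch: estimate the observed order of convergence from a quantity computed
# on three uniformly refined grids with a constant refinement ratio. The example
# values are synthetic and not taken from the CFL3D/FUN3D/TAU study.
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
    """Richardson-type estimate: p = log(|f_c - f_m| / |f_m - f_f|) / log(r)."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(refinement_ratio)

def extrapolated_value(f_medium, f_fine, p, refinement_ratio=2.0):
    """Richardson extrapolation toward the continuous (zero-spacing) limit."""
    return f_fine + (f_fine - f_medium) / (refinement_ratio ** p - 1.0)

# Synthetic drag coefficients on coarse, medium, and fine grids (made-up numbers).
cd = (0.008520, 0.008310, 0.008255)
p = observed_order(*cd)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {extrapolated_value(cd[1], cd[2], p):.6f}")
```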

  14. Groundwater Quality Data in the Mojave Study Unit, 2008: Results from the California GAMA Program

    USGS Publications Warehouse

    Mathany, Timothy M.; Belitz, Kenneth

    2009-01-01

    Groundwater quality in the approximately 1,500 square-mile Mojave (MOJO) study unit was investigated from February to April 2008, as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). MOJO was the 23rd of 37 study units to be sampled as part of the GAMA Priority Basin Project. The MOJO study was designed to provide a spatially unbiased assessment of the quality of untreated ground water used for public water supplies within MOJO, and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 59 wells in San Bernardino and Los Angeles Counties. Fifty-two of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and seven were selected to aid in evaluation of specific water-quality issues (understanding wells). The groundwater samples were analyzed for a large number of organic constituents [volatile organic compounds (VOCs), pesticides and pesticide degradates, and pharmaceutical compounds], constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]) naturally occurring inorganic constituents (nutrients, dissolved organic carbon [DOC], major and minor ions, silica, total dissolved solids [TDS], and trace elements), and radioactive constituents (gross alpha and gross beta radioactivity, radium isotopes, and radon-222). Naturally occurring isotopes (stable isotopes of hydrogen, oxygen, and carbon, stable isotopes of nitrogen and oxygen in nitrate, and activities of tritium and carbon-14), and dissolved noble gases also were measured to help identify the sources and ages of the sampled ground water. In total, over 230 constituents and water-quality indicators (field parameters) were investigated. Three types of quality-control samples (blanks, replicates, and matrix spikes) each were collected at approximately 5-8 percent of the wells, and the results for these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that contamination was not a significant source of bias in the data for the groundwater samples. Differences between replicate samples generally were within acceptable ranges, indicating acceptable analytical reproducibility. Matrix spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, untreated groundwater typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to untreated ground water. However, to provide some context for the results, concentrations of constituents measured in the untreated ground water were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic and technical concerns by CDPH. 
Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only, and are not indicative of compliance or non-compliance with those thresholds. Most constituents that were detected in groundwater samples from the 59 wells in MOJO were found at concentrations below drinking-water thresholds. In MOJO's 52 grid wells, volatile organic compounds (VOCs) were detected in 40 percent of the wells, and pesticides and pesticide degradates were detected in 23 percent of the grid wells.

  15. Groundwater Quality Data for the Tahoe-Martis Study Unit, 2007: Results from the California GAMA Program

    USGS Publications Warehouse

    Fram, Miranda S.; Munday, Cathy; Belitz, Kenneth

    2009-01-01

    Groundwater quality in the approximately 460-square-mile Tahoe-Martis study unit was investigated in June through September 2007 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001 and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of the quality of raw groundwater used for public water supplies within the Tahoe-Martis study unit (Tahoe-Martis) and to facilitate statistically consistent comparisons of groundwater quality throughout California. Samples were collected from 52 wells in El Dorado, Placer, and Nevada Counties. Forty-one of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study area (grid wells), and 11 were selected to aid in evaluation of specific water-quality issues (understanding wells). The groundwater samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, and pharmaceutical compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14, strontium isotope ratio, and stable isotopes of hydrogen and oxygen of water), and dissolved noble gases also were measured to help identify the sources and ages of the sampled groundwater. In total, 240 constituents and water-quality indicators were investigated. Three types of quality-control samples (blanks, replicates, and samples for matrix spikes) each were collected at 12 percent of the wells, and the results obtained from these samples were used to evaluate the quality of the data for the groundwater samples. Field blanks rarely contained detectable concentrations of any constituent, suggesting that data for the groundwater samples were not compromised by possible contamination during sample collection, handling or analysis. Differences between replicate samples were within acceptable ranges. Matrix spike recoveries were within acceptable ranges for most compounds. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, raw water typically is treated, disinfected, or blended with other waters to maintain water quality. Regulatory thresholds apply to water that is served to the consumer, not to raw groundwater. However, to provide some context for the results, concentrations of constituents measured in the raw groundwater were compared with regulatory and nonregulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH), and with aesthetic and technical thresholds established by CDPH. Comparisons between data collected for this study and drinking-water thresholds are for illustrative purposes only and do not indicate of compliance or noncompliance with regulatory thresholds. The concentrations of most constituents detected in groundwater samples from the Tahoe-Martis wells were below drinking-water thresholds. 
Organic compounds (VOCs and pesticides) were detected in about 40 percent of the samples from grid wells, and most concentrations were less than 1/100th of regulatory and nonregulatory health-based thresholds, although the concentration of perchloroethene in one sample was above the USEPA maximum contaminant level (MCL-US). Concentrations of all trace elements and nutrients in samples from grid wells were below regulatory and nonregulatory health-based thresholds, with five exceptions. Concentra

  16. Relationships between early literacy and nonlinguistic rhythmic processes in kindergarteners.

    PubMed

    Ozernov-Palchik, Ola; Wolf, Maryanne; Patel, Aniruddh D

    2018-03-01

    A growing number of studies report links between nonlinguistic rhythmic abilities and certain linguistic abilities, particularly phonological skills. The current study investigated the relationship between nonlinguistic rhythmic processing, phonological abilities, and early literacy abilities in kindergarteners. A distinctive aspect of the current work was the exploration of whether processing of different types of rhythmic patterns is differentially related to kindergarteners' phonological and reading-related abilities. Specifically, we examined the processing of metrical versus nonmetrical rhythmic patterns, that is, patterns capable of being subdivided into equal temporal intervals or not (Povel & Essens, 1985). This is an important comparison because most music involves metrical sequences, in which rhythm often has an underlying temporal grid of isochronous units. In contrast, nonmetrical sequences are arguably more typical to speech rhythm, which is temporally structured but does not involve an underlying grid of equal temporal units. A rhythm discrimination app with metrical and nonmetrical patterns was administered to 74 kindergarteners in conjunction with cognitive and preliteracy measures. Findings support a relationship among rhythm perception, phonological awareness, and letter-sound knowledge (an essential precursor of reading). A mediation analysis revealed that the association between rhythm perception and letter-sound knowledge is mediated through phonological awareness. Furthermore, metrical perception accounted for unique variance in letter-sound knowledge above all other language and cognitive measures. These results point to a unique role for temporal regularity processing in the association between musical rhythm and literacy in young children. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. 76 FR 53448 - Northland Power Mississippi River LLC; Notice of Preliminary Permit Application Accepted for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... grid. The proposed project would have an average annual generation of 788.0 gigawatt-hours (GWh), which... following: (1) Up to 360 TREK generating units installed in a matrix on the bottom of the river; (2) the...

  18. 76 FR 54765 - Northland Power Mississippi River LLC; Notice of Preliminary Permit Application Accepted for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ... power grid. The proposed project would have an average annual generation of 876.0 gigawatt-hours (GWh... following: (1) Up to 400 TREK generating units installed in a matrix on the bottom of the river; (2) the...

  19. Accuracy assessment of NOAA gridded daily reference evapotranspiration for the Texas High Plains

    USDA-ARS?s Scientific Manuscript database

    The National Oceanic and Atmospheric Administration (NOAA) provides daily reference evapotranspiration (ETref) maps for the contiguous United States using climatic data from North American Land Data Assimilation System (NLDAS). This data provides large-scale spatial representation of ETref, which i...

  20. 76 FR 53432 - Free Flow Power Corporation; Northland Power Mississippi River LLC; Notice of Competing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ...: (1) Up to 1,053 TREK generating units installed in a matrix on the bottom of the river; (2) the total... each matrix power to a substation; and (4) a transmission line would interconnect with the power grid...

  1. Carbon Fiber Reinforced Polymer Grids for Shear and End Zone Reinforcement in Bridge Beams

    DOT National Transportation Integrated Search

    2018-01-01

    Corrosion of reinforcing steel reduces life spans of bridges throughout the United States; therefore, using non-corroding carbon fiber reinforced polymer (CFRP) reinforcement is seen as a way to increase service life. The use of CFRP as the flexural ...

  2. Plug-in hybrid electric vehicles as a source of distributed frequency regulation

    NASA Astrophysics Data System (ADS)

    Mullen, Sara Kathryn

    The movement to transform the North American power grid into a smart grid may be accomplished by expanding integrated sensing, communications, and control technologies to include every part of the grid to the point of end-use. Plug-in hybrid electric vehicles (PHEV) provide an opportunity for small-scale distributed storage while they are plugged-in. With large numbers of PHEV and the communications and sensing associated with the smart grid, PHEV could provide ancillary services for the grid. Frequency regulation is an ideal service for PHEV because the duration of supply is short (order of minutes) and it is the highest priced ancillary service on the market offering greater financial returns for vehicle owners. Using Simulink a power system simulator modeling the IEEE 14 Bus System was combined with a model of PHEV charging and the controllers which facilitate vehicle-to-grid (V2G) regulation supply. The system includes a V2G controller for each vehicle which makes regulation supply decisions based on battery state, user preferences, and the recommended level of supply. A PHEV coordinator controller located higher in the system has access to reliable frequency measurements and can determine a suitable local automatic generation control (AGC) raise/lower signal for participating vehicles. A first step implementation of the V2G supply system where battery charging is modulated to provide regulation was developed. The system was simulated following a step change in loading using three scenarios: (1) Central generating units provide frequency regulation, (2) PHEV contribute to primary regulation analogous to generator speed governor control, and (3) PHEV contribute to primary and secondary regulation using an additional integral term in the PHEV control signal. In both cases the additional regulation provided by PHEV reduced the area control error (ACE) compared to the base case. Unique contributions resulting from this work include: (1) Studied PHEV energy systems and limitations on battery charging/discharging, (2) Reviewed standards for interconnection of distributed resources and electric vehicle charging [1], [2], (3) Explored strategies for distributed control of PHEV charging, (4) Developed controllers to accommodate PHEV regulation, and (5) Developed a simulator combining a power system model and PHEV/V2G components.
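
    A hedged sketch of the per-vehicle control idea in scenarios (2) and (3): a droop (proportional) response to the frequency deviation plus an optional integral term, clipped to an assumed charger rating and battery limits. The gains, limits, and class interface are illustrative, not those implemented in the thesis.

```python
# Hedged sketch of a per-vehicle V2G regulation signal: a droop (proportional)
# response to the frequency deviation plus an optional integral term, limited by the
# charger rating and battery state of charge. Gains and limits are illustrative.

class V2GController:
    def __init__(self, droop_kw_per_hz=20.0, ki=2.0, p_max_kw=6.6,
                 soc_min=0.2, soc_max=0.9):
        self.droop = droop_kw_per_hz
        self.ki = ki
        self.p_max = p_max_kw
        self.soc_min, self.soc_max = soc_min, soc_max
        self.integral = 0.0

    def regulation_power(self, freq_hz, soc, dt_s=1.0, f_nom=60.0, secondary=True):
        """Positive = discharge to the grid, negative = extra charging."""
        df = freq_hz - f_nom
        p = -self.droop * df                    # primary (speed-governor-like) response
        if secondary:
            self.integral += df * dt_s          # secondary (AGC-like) integral action
            p += -self.ki * self.integral
        # Respect charger rating and battery limits set by user preferences.
        p = max(-self.p_max, min(self.p_max, p))
        if soc <= self.soc_min:
            p = min(p, 0.0)                     # too empty: do not discharge further
        if soc >= self.soc_max:
            p = max(p, 0.0)                     # too full: do not charge further
        return p

ctrl = V2GController()
print(ctrl.regulation_power(freq_hz=59.96, soc=0.6))   # under-frequency -> discharge
```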

  3. SU-E-T-419: Fabricating Cerrobend Grids with 3D Printing for Spatially Modulated Radiation Therapy: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, X; Driewer, J; Lei, Y

    2015-06-15

    Purpose: Grid therapy has promising applications in the radiation treatment of bulky and large tumors. However, research and applications of grid therapy are limited by the accessibility of the specialized blocks that produce the grid of pencil-like radiation beams. In this study, a Cerrobend grid block was fabricated using a 3D printing technique. Methods: A grid block mold was designed with divergent tubes following beam central rays. The mold was printed using a resin with a working temperature below 230 °C. The melted Cerrobend liquid at 120 °C was cast into the resin mold to yield a block with a thickness of 7.4 cm. The grid had a hexagonal pattern, with each pencil beam having a diameter of 1.4 cm at the iso-center plane; the distance between the beam centers was 2 cm. The dosimetric properties of the grid block were studied using radiographic film and small field dosimeters. Results: The grid block was fabricated to be mounted at the third accessory mount of a Siemens Oncor linear accelerator. Fabricating a grid block using 3D printing is similar to making cutouts for traditional radiotherapy photon blocks, with the difference being that the mold was created by a 3D printer rather than foam. In this study, the valley-to-peak ratio for a 6 MV photon grid beam was 20% at dmax and 30% at 10 cm depth, respectively. Conclusion: We have demonstrated a novel process for implementing grid radiotherapy using 3D printing techniques. Compared to existing approaches, our technique combines reduced cost, accessibility, and flexibility in customization with efficient delivery. This lays the groundwork for future studies to improve our understanding of the efficacy of grid therapy and apply it to improve cancer treatment.

  4. Multiport power router and its impact on future smart grids

    NASA Astrophysics Data System (ADS)

    Kado, Yuichi; Shichijo, Daiki; Wada, Keiji; Iwatsuki, Katsumi

    2016-07-01

    We propose a Y configuration power router as a unit cell to easily construct a power delivery system that can meet many types of user requirements. The Y configuration power router controls the direction and magnitude of power flows between three ports regardless of DC or AC. We constructed a prototype three-way isolated DC/DC converter that is the core unit of the Y configuration power router. The electrical insulation between three ports assures safety and reliability for power network systems. We then tested the operation of power flow control. The experimental results revealed that our methodology based on a governing equation was appropriate to control the power flow of the three-way DC/DC converter. In addition, a distribution network composed of power routers had the ability to easily enable interchanges of electrical power between autonomous microgrid cells. We also explored the requirements for communication between energy routers to achieve dynamic adjustments of energy flows in a coordinated manner and their impact on resilient power grid systems.

  5. The Future of Centrally-Organized Wholesale Electricity Markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glazer, Craig; Morrison, Jay; Breakman, Paul

    The electricity grid in the United States is organized around a network of large, centralized power plants and high voltage transmission lines that transport electricity, sometimes over large distances, before it is delivered to the customer through a local distribution grid. This network of centralized generation and high voltage transmission lines is called the “bulk power system.” Costs relating to bulk power generation typically account for more than half of a customer’s electric bill. For this reason, the structure and functioning of wholesale electricity markets have major impacts on costs and economic value for consumers, as well as energy security and national security. Diverse arrangements for bulk power wholesale markets have evolved over the last several decades. The Southeast and Western United States outside of California have a “bilateral-based” bulk power system where market participants enter into long-term bilateral agreements — using competitive procurements through power marketers, direct arrangements among utilities or with other generation owners, and auctions and exchanges.

  6. Development of an Ion Thruster and Power Processor for New Millennium's Deep Space 1 Mission

    NASA Technical Reports Server (NTRS)

    Sovey, James S.; Hamley, John A.; Haag, Thomas W.; Patterson, Michael J.; Pencil, Eric J.; Peterson, Todd T.; Pinero, Luis R.; Power, John L.; Rawlin, Vincent K.; Sarmiento, Charles J.

    1997-01-01

    The NASA Solar Electric Propulsion Technology Applications Readiness Program (NSTAR) will provide a single-string primary propulsion system to NASA's New Millennium Deep Space 1 Mission which will perform comet and asteroid flybys in the years 1999 and 2000. The propulsion system includes a 30-cm diameter ion thruster, a xenon feed system, a power processing unit, and a digital control and interface unit. A total of four engineering model ion thrusters, three breadboard power processors, and a controller have been built, integrated, and tested. An extensive set of development tests has been completed along with thruster design verification tests of 2000 h and 1000 h. An 8000 h Life Demonstration Test is ongoing and has successfully demonstrated more than 6000 h of operation. In situ measurements of accelerator grid wear are consistent with grid lifetimes well in excess of the 12,000 h qualification test requirement. Flight hardware is now being assembled in preparation for integration, functional, and acceptance tests.

  7. A pilot study on conducting mobile learning activities for clinical nursing courses based on the repertory grid approach.

    PubMed

    Wu, Po-Han; Hwang, Gwo-Jen; Tsai, Chin-Chung; Chen, Ya-Chun; Huang, Yueh-Min

    2011-11-01

    In clinical nursing courses, students are trained to identify the status of the target patients. The mastery of such ability and skills is very important since patients frequently need to be cared for immediately. In this pilot study, a repertory grid-oriented clinical mobile learning system is developed for a nursing training program. With the assistance of the mobile learning system, the nursing school students are able to learn in an authentic learning scenario, in which they can physically face the target patients, with the personal guidance and supplementary materials from the learning system to support them. To show the effectiveness of this innovative approach, an experiment has been conducted on the "respiratory system" unit of a nursing course. The experimental results show that the innovative approach is helpful to students in improving their learning achievements. Moreover, from the questionnaire surveys, it was found that most students showed favorable attitudes toward the usage of the mobile learning system and their participation in the training program. Copyright © 2010 Elsevier Ltd. All rights reserved.

  8. Identification of linearised RMS-voltage dip patterns based on clustering in renewable plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Sánchez, Tania; Gómez-Lázaro, Emilio; Muljadi, Edward

    Generation units connected to the grid are currently required to meet low-voltage ride-through (LVRT) requirements. In most developed countries, these requirements also apply to renewable sources, mainly wind power plants and photovoltaic installations connected to the grid. This study proposes an alternative characterisation solution to classify and visualise a large number of collected events in light of current limits and requirements. The authors' approach is based on linearised root-mean-square (RMS) voltage trajectories, taking into account LVRT requirements, and a clustering process to identify the most likely pattern trajectories. The proposed solution gives extensive information on an event's severity by providing a simple but complete visualisation of the linearised RMS-voltage patterns. In addition, these patterns are compared to current LVRT requirements to determine similarities or discrepancies. A large number of collected events can then be automatically classified and visualised for comparative purposes. Real disturbances collected from renewable sources in Spain are used to assess the proposed solution. Extensive results and discussions are also included in this study.
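
    A hedged sketch of the clustering step: each event is represented by a fixed-length linearised RMS-voltage trajectory and grouped with k-means to obtain representative pattern trajectories. The synthetic dips and the choice of two clusters are assumptions, not the Spanish measurement data.

```python
# Hedged sketch of clustering linearised RMS-voltage dip trajectories to find
# representative patterns, in the spirit of the study above. The synthetic events
# and the choice of k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)                      # seconds, fixed-length resampling

def synthetic_dip(depth_pu, recovery_s):
    """Piecewise-linear RMS-voltage trajectory: dip then linear recovery to 1 pu."""
    v = np.ones_like(t)
    in_dip = t < recovery_s
    v[in_dip] = depth_pu + (1.0 - depth_pu) * t[in_dip] / recovery_s
    return v

# A mix of shallow/fast and deep/slow events with measurement-like noise.
events = np.array([synthetic_dip(rng.uniform(0.7, 0.85), rng.uniform(0.1, 0.2))
                   for _ in range(60)] +
                  [synthetic_dip(rng.uniform(0.2, 0.4), rng.uniform(0.5, 0.8))
                   for _ in range(40)])
events += rng.normal(0.0, 0.01, events.shape)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(events)
for k, centre in enumerate(km.cluster_centers_):
    print(f"pattern {k}: minimum voltage ~ {centre.min():.2f} pu, "
          f"members = {(km.labels_ == k).sum()}")
```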

  9. A Control Chart Approach for Representing and Mining Data Streams with Shape Based Similarity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omitaomu, Olufemi A

    The mining of data streams for online condition monitoring is a challenging task in several domains, including the (electric) power grid, intelligent manufacturing, and consumer science. Consider a power grid application in which thousands of sensors, called phasor measurement units, are deployed on the power grid network to continuously collect streams of digital data for real-time situational awareness and system management. Depending on design, each sensor could stream between ten and sixty data samples per second. The myriad of sensory data captured could convey deeper insights about the sequence of events in real time and before major damage is done. However, the timely processing and analysis of these high-velocity and high-volume data streams is a challenge. Hence, a new data processing and transformation approach, based on the concept of control charts, for representing sequences of data streams from sensors is proposed. In addition, an application of the proposed approach for enhancing data mining tasks such as clustering using real-world power grid data streams is presented. The results indicate that the proposed approach is very efficient for data stream storage and manipulation.
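
    A hedged sketch of the control-chart representation: centre line and control limits are estimated from an in-control baseline window, and stream samples outside the limits are flagged. The frequency-like stream below is simulated, not real phasor measurement unit data.

```python
# Hedged sketch of a Shewhart-style control chart applied to a sensor data stream,
# illustrating the kind of representation described above. The stream is simulated;
# real phasor measurement unit data would be used in practice.
import numpy as np

rng = np.random.default_rng(4)
baseline = rng.normal(60.0, 0.01, 600)          # in-control frequency samples (Hz)
stream = np.concatenate([rng.normal(60.0, 0.01, 300),
                         rng.normal(59.95, 0.01, 50)])   # an injected excursion

# Centre line and 3-sigma control limits estimated from the baseline window.
centre = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

# Each stream sample maps to a chart state; out-of-control points are flagged.
out_of_control = np.flatnonzero((stream > ucl) | (stream < lcl))
print(f"limits: [{lcl:.3f}, {ucl:.3f}] Hz, "
      f"first out-of-control sample index: {out_of_control[0] if out_of_control.size else None}")
```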

  10. The swiss army knife of job submission tools: grid-control

    NASA Astrophysics Data System (ADS)

    Stober, F.; Fischer, M.; Schleper, P.; Stadie, H.; Garbers, C.; Lange, J.; Kovalchuk, N.

    2017-10-01

    grid-control is a lightweight and highly portable open source submission tool that supports all common workflows in high energy physics (HEP). It has been used by a sizeable number of HEP analyses to process tasks that sometimes consist of up to 100k jobs. grid-control is built around a powerful plugin and configuration system, that allows users to easily specify all aspects of the desired workflow. Job submission to a wide range of local or remote batch systems or grid middleware is supported. Tasks can be conveniently specified through the parameter space that will be processed, which can consist of any number of variables and data sources with complex dependencies on each other. Dataset information is processed through a configurable pipeline of dataset filters, partition plugins and partition filters. The partition plugins can take the number of files, size of the work units, metadata or combinations thereof into account. All changes to the input datasets or variables are propagated through the processing pipeline and can transparently trigger adjustments to the parameter space and the job submission. While the core functionality is completely experiment independent, full integration with the CMS computing environment is provided by a small set of plugins.

  11. The Spectrum of Wind Power Fluctuations

    NASA Astrophysics Data System (ADS)

    Bandi, Mahesh

    2016-11-01

    Wind is a variable energy source whose fluctuations threaten electrical grid stability and complicate dynamical load balancing. The power generated by a wind turbine fluctuates due to the variable wind speed that blows past the turbine. Indeed, the spectrum of wind power fluctuations is widely believed to reflect the Kolmogorov spectrum; both vary with frequency f as f^(-5/3). This variability decreases when aggregate power fluctuations from geographically distributed wind farms are averaged at the grid via a mechanism known as geographic smoothing. Neither the f^(-5/3) wind power fluctuation spectrum nor the mechanism of geographic smoothing are understood. In this work, we explain the wind power fluctuation spectrum from the turbine through grid scales. The f^(-5/3) wind power fluctuation spectrum results from the largest length scales of atmospheric turbulence of order 200 km influencing the small scales where individual turbines operate. This long-range influence spatially couples geographically distributed wind farms and synchronizes farm outputs over a range of frequencies and decreases with increasing inter-farm distance. Consequently, aggregate grid-scale power fluctuations remain correlated, and are smoothed until they reach a limiting f^(-7/3) spectrum. This work was funded by the Collective Interactions Unit, OIST Graduate University, Japan.
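
    A hedged sketch of how such a spectral slope can be checked: estimate the power spectral density of a fluctuating signal with Welch's method and fit the log-log slope over an interior frequency band. The signal below is synthetic, constructed to have roughly an f^(-5/3) spectrum, and is not measured turbine output.

```python
# Hedged sketch: estimate the spectrum of a fluctuating power signal with Welch's
# method and fit its log-log slope, as one would when checking for an f^(-5/3)
# behaviour. The synthetic signal is constructed to have roughly that slope.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)
fs = 1.0                                   # one sample per second
n = 2 ** 16

# Build a random signal with a prescribed ~f^(-5/3) power spectrum.
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
amplitude = np.zeros_like(freqs)
amplitude[1:] = freqs[1:] ** (-5.0 / 6.0)  # PSD ~ amplitude^2 ~ f^(-5/3)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
signal = np.fft.irfft(amplitude * np.exp(1j * phases), n=n)

f, psd = welch(signal, fs=fs, nperseg=4096)
band = (f > 1e-3) & (f < 1e-1)             # fit over an interior band only
slope = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
print(f"fitted spectral slope ~ {slope:.2f} (expected about -1.67)")
```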

  12. Vehicle to grid: electric vehicles as an energy storage solution

    NASA Astrophysics Data System (ADS)

    McGee, Rodney; Waite, Nicholas; Wells, Nicole; Kiamilev, Fouad E.; Kempton, Willett M.

    2013-05-01

    With increased focus on intermittent renewable energy sources such as wind turbines and photovoltaics, there comes a rising need for large-scale energy storage. The vehicle to grid (V2G) project seeks to meet this need using electric vehicles, whose high power capacity and existing power electronics make them a promising energy storage solution. This paper will describe a charging system designed by the V2G team that facilitates selective charging and backfeeding by electric vehicles. The system consists of a custom circuit board attached to an embedded linux computer that is installed both in the EVSE (electric vehicle supply equipment) and in the power electronics unit of the vehicle. The boards establish an in-band communication link between the EVSE and the vehicle, giving the vehicle internet connectivity and the ability to make intelligent decisions about when to charge and discharge. This is done while maintaining compliance with existing charging protocols (SAEJ1772, IEC62196) and compatibility with standard "nonintelligent" cars and chargers. Through this system, the vehicles in a test fleet have been able to successfully serve as portable temporary grid storage, which has implications for regulating the electrical grid, providing emergency power, or supplying power to forward military bases.

  13. Institutional Support | Grid Modernization | NREL

    Science.gov Websites

    NREL provides institutional support, objective technical assistance, and information on the challenges posed by grid modernization and the increasing deployment of distributed energy and renewable resources.

  14. Status and understanding of groundwater quality in the Tahoe-Martis, Central Sierra, and Southern Sierra study units, 2006-2007--California GAMA Priority Basin Project

    USGS Publications Warehouse

    Fram, Miranda S.; Belitz, Kenneth

    2012-01-01

    Groundwater quality in the Tahoe-Martis, Central Sierra, and Southern Sierra study units was investigated as part of the Priority Basin Project of the California Groundwater Ambient Monitoring and Assessment (GAMA) Program. The three study units are located in the Sierra Nevada region of California in parts of Nevada, Placer, El Dorado, Madera, Tulare, and Kern Counties. The GAMA Priority Basin Project is being conducted by the California State Water Resources Control Board, in collaboration with the U.S. Geological Survey (USGS) and the Lawrence Livermore National Laboratory. The project was designed to provide statistically robust assessments of untreated groundwater quality within the primary aquifer systems used for drinking water. The primary aquifer systems (hereinafter, primary aquifers) for each study unit are defined by the depth of the screened or open intervals of the wells listed in the California Department of Public Health (CDPH) database of wells used for municipal and community drinking-water supply. The quality of groundwater in shallower or deeper water-bearing zones may differ from that in the primary aquifers; shallower groundwater may be more vulnerable to contamination from the surface. The assessments for the Tahoe-Martis, Central Sierra, and Southern Sierra study units were based on water-quality and ancillary data collected by the USGS from 132 wells in the three study units during 2006 and 2007 and water-quality data reported in the CDPH database. Two types of assessments were made: (1) status, assessment of the current quality of the groundwater resource, and (2) understanding, identification of the natural and human factors affecting groundwater quality. The assessments characterize untreated groundwater quality, not the quality of treated drinking water delivered to consumers by water purveyors. Relative-concentrations (sample concentrations divided by benchmark concentrations) were used for evaluating groundwater quality for those constituents that have Federal or California regulatory or non-regulatory benchmarks for drinking-water quality. A relative-concentration (RC) greater than (>) 1.0 indicates a concentration above a benchmark. RCs for organic constituents (volatile organic compounds and pesticides) and special-interest constituents were classified as "high" (RC > 1.0), "moderate" (1.0 ≥ RC > 0.1), or "low" (RC ≤ 0.1). For inorganic constituents (major ions, trace elements, nutrients, and radioactive constituents), the boundary between low and moderate RCs was set at 0.5. A new metric, aquifer-scale proportion, was used in the status assessment as the primary metric for evaluating regional-scale groundwater quality. High aquifer-scale proportion is defined as the percentage of the area of the primary aquifers with RC > 1.0 for a particular constituent or class of constituents; moderate and low aquifer-scale proportions are defined as the percentages of the area of the primary aquifer with moderate and low RCs, respectively. Percentages are based on an areal rather than a volumetric basis. Two statistical approaches—grid-based, which used one value per grid cell, and spatially weighted, which used multiple values per grid cell—were used to calculate aquifer-scale proportions for individual constituents and classes of constituents. The spatially weighted estimates of high aquifer-scale proportions were within the 90-percent (%) confidence intervals of the grid-based estimates in all cases. 
The status assessment showed that inorganic constituents had greater high and moderate aquifer-scale proportions than did organic constituents in all three study units. In the Tahoe-Martis study unit, RCs for inorganic constituents with health-based benchmarks (primarily arsenic) were high in 20% of the primary aquifer, moderate in 13%, and low in 67%. In the Central Sierra study unit, aquifer-scale proportions for inorganic constituents with health-based benchmarks (primarily arsenic, uranium, fluoride, and molybdenum) were 41% high, 36% moderate, and 23% low. In the Southern Sierra study unit, 32, 34, and 34% of the primary aquifer had high, moderate, and low RCs of inorganic constituents with health-based benchmarks (primarily arsenic, uranium, fluoride, boron, and nitrate). The high aquifer-scale proportions for inorganic constituents with non-health-based benchmarks were 14, 34, and 24% for the Tahoe-Martis, Central Sierra, and Southern Sierra study units, respectively, and the primary constituent was manganese for all three study units. Organic constituents with health-based benchmarks were not present at high RCs in the primary aquifers of the Central Sierra and Southern Sierra study units, and were present at high RCs in only 1% of the Tahoe-Martis study unit. Moderate aquifer-scale proportions for organic constituents were 10%: the trihalomethane chloroform in the Tahoe-Martis study unit; chloroform and the herbicide simazine in the Central Sierra study unit; and chloroform, simazine, the herbicide atrazine, and the solvent perchloroethene in the Southern Sierra study unit. The second component of this study, the understanding assessment, identified the natural and human factors that may have affected groundwater quality in the three study units by evaluating statistical correlations between water-quality constituents and potential explanatory factors. The potential explanatory factors evaluated were land use, septic tank density, climate, relative position in the regional flow system, aquifer lithology, geographic location, well depth and depth to the top of the screened or open interval in the well, groundwater age distribution, pH, and dissolved oxygen concentration. Results of the statistical evaluations were used to explain the occurrence and distribution of constituents in the study units. Aquifer lithology (granitic, metamorphic, sedimentary, or volcanic rocks), groundwater age distribution [modern (recharged since 1952), pre-modern (recharged before 1952), or mixed (containing both modern and pre-modern recharge)], geographic location, pH, and dissolved oxygen were the most significant factors explaining the occurrence patterns of most inorganic constituents. High and moderate RCs of arsenic were associated with pre-modern and mixed-age groundwater and two distinct sets of geochemical conditions: (1) oxic, high-pH conditions, particularly in volcanic rocks, and (2) low-oxygen to anoxic conditions and low- to neutral-pH conditions, particularly in granitic rocks. In granitic and metamorphic rocks, high and moderate RCs of uranium were associated with pre-modern and mixed-age groundwater, low-oxygen to anoxic conditions, and location within parts of the Central Sierra and Southern Sierra study units known to have rocks with anomalously high uranium content compared to other parts of the Sierra Nevada. 
High and moderate RCs of uranium in sedimentary rocks were associated with pre-modern-age groundwater, oxic and high-pH conditions, and location in the Tahoe Valley South subbasin within the Tahoe-Martis study unit. Land use within 500 meters of the well and groundwater age were the most significant factors explaining occurrence patterns of organic constituents. Herbicide detections were most strongly associated with modern- and mixed-age groundwater from wells with agricultural land use. Trihalomethane detections were most strongly associated with modern- and mixed-age groundwater from wells with > 10% urban land use and (or) septic tank density > 7 tanks per square kilometer. Solvent detections were not significantly related to groundwater age. Eighty-three percent of the wells with modern- or mixed-age groundwater, and 86% of wells with detections of herbicides and (or) trihalomethanes, had shallow depths to the top of the screened or open interval. Wells with greater than 5% agricultural land use and detection of a herbicide or solvent had the highest nitrate concentrations. Comparison between observed and predicted detection frequencies of perchlorate suggests that the perchlorate detected at concentrations < 1 microgram per liter likely reflects the distribution of perchlorate under natural conditions, and that the perchlorate detected at higher concentrations may reflect redistribution of originally natural perchlorate salts by irrigation in the agricultural areas of the Southern Sierra study unit.

  15. The Dynamic General Vegetation Model MC1 over the United States and Canada at a 5-arcminute resolution: model inputs and outputs

    Treesearch

    Ray Drapek; John B. Kim; Ronald P. Neilson

    2015-01-01

    Land managers need to include climate change in their decisionmaking, but the climate models that project future climates operate at spatial scales that are too coarse to be of direct use. To create a dataset more useful to managers, soil and historical climate were assembled for the United States and Canada at a 5-arcminute grid resolution. Nine CMIP3 future climate...

  16. An equivalent layer magnetization model for the United States derived from MAGSAT data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Galliher, S. C. (Principal Investigator)

    1982-01-01

    Long wavelength anomalies in the total magnetic field measured by MAGSAT over the United States and adjacent areas are inverted to an equivalent layer crustal magnetization distribution. The model is based on an equal area dipole grid at the Earth's surface. Model resolution having physical significance is about 220 km for MAGSAT data in the elevation range 300-500 km. The magnetization contours correlate well with large-scale tectonic provinces.

  17. Use of In-Situ and Remotely Sensed Snow Observations for the National Water Model in Both an Analysis and Calibration Framework.

    NASA Astrophysics Data System (ADS)

    Karsten, L. R.; Gochis, D.; Dugger, A. L.; McCreight, J. L.; Barlage, M. J.; Fall, G. M.; Olheiser, C.

    2017-12-01

    Since version 1.0 of the National Water Model (NWM) went operational in summer 2016, several upgrades to the model have been made to improve hydrologic prediction for the continental United States. Version 1.1 of the NWM (spring 2017) includes upgrades to parameter datasets impacting land surface hydrologic processes. These parameter datasets were upgraded using an automated calibration workflow that uses the Dynamically Dimensioned Search (DDS) algorithm to adjust parameter values against observed streamflow. These upgrades also took advantage of various observations collected for snow analysis, in particular in-situ SNOTEL observations in the western US, volunteer in-situ observations across the entire US, gamma-derived snow water equivalent (SWE) observations courtesy of the NWS NOAA Corps program, gridded snow depth and SWE products from the Jet Propulsion Laboratory (JPL) Airborne Snow Observatory (ASO), gridded remotely sensed satellite-based snow products (MODIS, AMSR2, VIIRS, ATMS), and gridded SWE from the NWS Snow Data Assimilation System (SNODAS). This study explores the use of these observations to quantify NWM error and improvements from version 1.0 to version 1.1, along with subsequent work since then. In addition, this study explores the use of snow observations within the automated calibration workflow. Gridded parameter fields impacting the accumulation and ablation of snow states in the NWM were adjusted and calibrated using gridded remotely sensed snow states, SNODAS products, and in-situ snow observations. This calibration adjustment took place over various ecological regions in snow-dominated parts of the US for a retrospective period chosen to capture a variety of climatological conditions. Specifically, the latest calibrated parameters impacting streamflow were held constant and only parameters impacting snow physics were tuned using snow observations and analyses. The adjusted parameter datasets were then used to force the model over an independent period for evaluation against both snow and streamflow observations to determine whether improvements took place. The goal of this work is to further improve snow physics in the NWM and to identify areas for future work, such as data assimilation or further forcing improvements.

  18. Ground-Water Quality Data in the San Francisco Bay Study Unit, 2007: Results from the California GAMA Program

    USGS Publications Warehouse

    Ray, Mary C.; Kulongoski, Justin T.; Belitz, Kenneth

    2009-01-01

    Ground-water quality in the approximately 620-square-mile San Francisco Bay study unit (SFBAY) was investigated from April through June 2007 as part of the Priority Basin project of the Ground-Water Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of raw ground-water quality, as well as a statistically consistent basis for comparing water quality throughout California. Samples in SFBAY were collected from 79 wells in San Francisco, San Mateo, Santa Clara, Alameda, and Contra Costa Counties. Forty-three of the wells sampled were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). Thirty-six wells were sampled to aid in evaluation of specific water-quality issues (understanding wells). The ground-water samples were analyzed for a large number of synthetic organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, pharmaceutical compounds, and potential wastewater-indicator compounds), constituents of special interest (perchlorate and N-nitrosodimethylamine [NDMA]), naturally occurring inorganic constituents (nutrients, major and minor ions, trace elements, chloride and bromide isotopes, and uranium and strontium isotopes), radioactive constituents, and microbial indicators. Naturally occurring isotopes (tritium, carbon-14 isotopes, and stable isotopes of hydrogen, oxygen, nitrogen, boron, and carbon), and dissolved noble gases (noble gases were analyzed in collaboration with Lawrence Livermore National Laboratory) also were measured to help identify the source and age of the sampled ground water. Quality-control samples (blank samples, replicate samples, matrix spike samples) were collected for approximately one-third of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Assessment of the quality-control information from the field blanks resulted in applying 'V' codes to approximately 0.1 percent of the data collected for ground-water samples (meaning a constituent was detected in blanks as well as the corresponding environmental data). See the Appendix section 'Quality-Control-Sample Results'. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, and (or) blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is delivered to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with regulatory and non-regulatory health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and California Department of Public Health (CDPH) and thresholds established for aesthetic concerns (secondary maximum contaminant levels, SMCL-CA) by CDPH. VOCs were detected in about one-half of the grid wells, while pesticides were detected in about one-fifth of the grid wells. Concentrations of all VOCs and pesticides detected in samples from all SFBAY wells were below health-based thresholds. 
No pharmaceutical compounds were detected in any SFBAY well. One potential wastewater-indicator compound, caffeine, was detected in one grid well in SFBAY. Concentrations of most trace elements and nutrients detected in samples from all SFBAY wells were below health-based thresholds. Exceptions include nitrate, detected above the USEPA maximum contaminant level (MCL-US) in 3 samples; arsenic, above the USEPA maximum contaminant level (MCL-US) in 3 samples; c

  19. Stability assessment of a multi-port power electronic interface for hybrid micro-grid applications

    NASA Astrophysics Data System (ADS)

    Shamsi, Pourya

    Migration to an industrial society increases the demand for electrical energy. Meanwhile, social pressure to preserve the environment and reduce pollution calls for cleaner energy sources. Therefore, there has been a growth in distributed generation from renewable sources in the past decade. Existing regulations and power system coordination do not allow for massive integration of distributed generation throughout the grid. Moreover, the current infrastructures are not designed for interfacing distributed and deregulated generation. In order to remedy this problem, a hybrid micro-grid based on nano-grids is introduced. This system consists of a reliable micro-grid structure that provides a smooth transition from the current distribution networks to smart micro-grid systems. Multi-port power electronic interfaces are introduced to manage the local generation, storage, and consumption. Afterwards, a model for this micro-grid is derived. Using this model, the stability of the system under a variety of source- and load-induced disturbances is studied. Moreover, a pole-zero study of the micro-grid is performed under various loading conditions. An experimental setup of this micro-grid is developed, and the validity of the model in emulating the dynamic behavior of the system is verified. This study provides a theory for a novel hybrid micro-grid as well as models for stability assessment of the proposed micro-grid.
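    The small-signal procedure summarized above (derive a model, then examine its poles under different loading) can be sketched generically in Python: linearize the dynamics into a state matrix A and check the signs of its eigenvalue real parts. The toy model below, an LC filter with series resistance feeding a load whose small-signal behaviour is a resistance R (negative R standing in for a tightly regulated constant-power load), and all of its parameter values are illustrative assumptions, not the multi-port interface studied in this work.

        # Hedged toy example: pole (eigenvalue) check of a linearized nano-grid stage.
        # States: x = [iL, vC] for an LC filter with series resistance Rs and load R.
        import numpy as np

        def state_matrix(L, C, Rs, R):
            return np.array([[-Rs / L, -1.0 / L],
                             [1.0 / C, -1.0 / (R * C)]])

        def is_stable(A):
            return bool(np.all(np.linalg.eigvals(A).real < 0))

        L, C, Rs = 1e-3, 470e-6, 0.1       # assumed filter values
        for R in (10.0, 2.0, -2.0):        # light load, heavy load, constant-power-like load
            A = state_matrix(L, C, Rs, R)
            poles = np.linalg.eigvals(A)
            print(f"R = {R:5.1f} ohm -> poles {np.round(poles, 1)}, stable = {is_stable(A)}")
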

  20. Status of groundwater quality in the Santa Barbara Study Unit, 2011: California GAMA Priority Basin Project

    USGS Publications Warehouse

    Davis, Tracy A.; Kulongoski, Justin T.

    2016-10-03

    Groundwater quality in the 48-square-mile Santa Barbara study unit was investigated in 2011 as part of the California State Water Resources Control Board’s Groundwater Ambient Monitoring and Assessment (GAMA) Program Priority Basin Project. The study unit is mostly in Santa Barbara County and is in the Transverse and Selected Peninsular Ranges hydrogeologic province. The GAMA Priority Basin Project is carried out by the U.S. Geological Survey in collaboration with the California State Water Resources Control Board and Lawrence Livermore National Laboratory. The GAMA Priority Basin Project was designed to provide a statistically unbiased, spatially distributed assessment of the quality of untreated groundwater in the primary aquifer system of California. The primary aquifer system is defined as that part of the aquifer corresponding to the perforation interval of wells listed in the California Department of Public Health database for the Santa Barbara study unit. This status assessment is intended to characterize the quality of groundwater resources in the primary aquifer system of the Santa Barbara study unit, not the treated drinking water delivered to consumers by water purveyors. The status assessment for the Santa Barbara study unit was based on water-quality and ancillary data collected in 2011 by the U.S. Geological Survey from 23 sites and on water-quality data from the California Department of Public Health database for January 24, 2008–January 23, 2011. The data used for the assessment included volatile organic compounds; pesticides; pharmaceutical compounds; two constituents of special interest, perchlorate and N-nitrosodimethylamine (NDMA); and naturally present inorganic constituents, such as major ions and trace elements. Relative-concentrations (sample concentration divided by the health- or aesthetic-based benchmark concentration) were used to evaluate groundwater quality for those constituents that have federal or California regulatory and non-regulatory benchmarks for drinking-water quality. For inorganic, organic, and special-interest constituents, a relative-concentration greater than 1.0 indicates a concentration greater than the benchmark and is classified as high. Inorganic constituents are classified as moderate if relative-concentrations are greater than 0.5 and less than or equal to 1.0 and are classified as low if relative-concentrations are less than or equal to 0.5. For organic and special-interest constituents, the boundary between moderate and low relative-concentrations was set at 0.1. Aquifer-scale proportion was used as the primary metric for evaluating regional-scale groundwater quality. High aquifer-scale proportion is defined as the areal percentage of the primary aquifer system with a relative-concentration greater than 1.0 for a particular constituent or class of constituents. Moderate and low aquifer-scale proportions were defined as the areal percentage of the primary aquifer system that had moderate and low relative-concentrations, respectively. Two statistical approaches—grid based and spatially weighted—were used to calculate aquifer-scale proportions for individual constituents and constituent classes. Grid-based and spatially weighted estimates were comparable in this study (within 90-percent confidence intervals). 
Grid-based results were selected for use in the status assessment unless, as was observed in a few cases, a grid-based result was zero and the spatially weighted result was not zero, in which case, the spatially weighted result was used. Inorganic constituents that have human-health benchmarks were present at high relative-concentrations in 5.3 percent of the primary aquifer system and at moderate concentrations in 32 percent. High aquifer-scale proportions of inorganic constituents primarily were a result of high aquifer-scale proportions of boron (5.3 percent) and fluoride (5.3 percent). Inorganic constituents that have aesthetic-based benchmarks, referred to as secondary maximum contaminant levels, were present at high relative-concentrations in 58 percent of the primary aquifer system and at moderate concentrations in 37 percent. Iron, manganese, sulfate, and total dissolved solids were the inorganic constituents with secondary maximum contaminant levels present at high relative-concentrations. In contrast, organic and special-interest constituents that have health-based benchmarks were not detected at high relative-concentrations in the primary aquifer system. Of the 218 organic constituents analyzed, 10 were detected—9 that had human-health benchmarks. Organic constituents were present at moderate relative-concentrations in 11 percent of the primary aquifer system. The moderate aquifer-scale proportions were a result of moderate relative-concentrations of the volatile organic compounds methyl tert-butyl ether (MTBE, 11 percent) and 1,2-dichloroethane (5.6 percent). The volatile organic compounds 1,1,1-trichloroethane, 1,1-dichloroethane, bromodichloromethane, chloroform, MTBE, and perchloroethene (PCE); the pesticide simazine; and the special-interest constituent perchlorate were detected at more than 10 percent of the sites in the Santa Barbara study unit. Perchlorate was present at moderate relative-concentrations in 50 percent of the primary aquifer system. Pharmaceutical compounds and NDMA were not detected in the Santa Barbara study unit.

  1. Implementing Extreme Value Analysis in a Geospatial Workflow for Storm Surge Hazard Assessment

    NASA Astrophysics Data System (ADS)

    Catelli, J.; Nong, S.

    2014-12-01

    Gridded data of 100-yr (1%) and 500-yr (0.2%) storm surge flood elevations for the United States Gulf of Mexico and East Coasts are critical to understanding this natural hazard. Storm surge heights were calculated across the study area utilizing SLOSH (Sea, Lake, and Overland Surges from Hurricanes) model data for thousands of synthetic US landfalling hurricanes. Based on the results derived from SLOSH, a series of interpolations were performed using spatial analysis in a geographic information system (GIS) at both the SLOSH basin and the synthetic event levels. The result was a single grid of maximum flood elevations for each synthetic event. This project addresses the need to utilize extreme value theory in a geospatial environment to analyze coincident cells across multiple synthetic events. The results are 100-yr (1%) and 500-yr (0.2%) values for each grid cell in the study area. This talk details a geospatial approach to move raster data into NumPy array structures using the Python programming language. The data are then connected through a Python library to an outside statistical package like R to fit cell values to extreme value theory distributions and return values for specified recurrence intervals. While this is not a new process, the value behind this work is the ability to keep the entire workflow in a single geospatial environment and to easily replicate it for other natural hazard applications and extreme event modeling.
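    The abstract describes handing the per-cell maxima from NumPy to an external statistics package such as R for the extreme value fits; as a hedged, Python-only alternative, the sketch below fits a GEV distribution per grid cell with scipy.stats.genextreme and reads off the 1% (100-yr) and 0.2% (500-yr) return levels. The synthetic surge array stands in for the SLOSH-derived event maxima, and treating the event set like a series of annual maxima is a simplifying assumption.

        # Hedged sketch: per-cell GEV fit and 100-yr / 500-yr return levels.
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        # surge[e, i, j]: maximum surge elevation (m) for event e at grid cell (i, j); synthetic.
        surge = rng.gumbel(loc=2.0, scale=0.5, size=(500, 4, 4))

        def return_levels(cell_maxima, periods=(100, 500)):
            """Fit a GEV and return the level exceeded with probability 1/T per year."""
            shape, loc, scale = genextreme.fit(cell_maxima)
            return {T: genextreme.isf(1.0 / T, shape, loc=loc, scale=scale) for T in periods}

        rl_100 = np.empty(surge.shape[1:])
        rl_500 = np.empty(surge.shape[1:])
        for i in range(surge.shape[1]):
            for j in range(surge.shape[2]):
                levels = return_levels(surge[:, i, j])
                rl_100[i, j], rl_500[i, j] = levels[100], levels[500]

        print(np.round(rl_100, 2))   # 1% annual-chance surge elevation per cell
        print(np.round(rl_500, 2))   # 0.2% annual-chance surge elevation per cell
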

  2. Comparative study of bowtie and patient scatter in diagnostic CT

    NASA Astrophysics Data System (ADS)

    Prakash, Prakhar; Boudry, John M.

    2017-03-01

    A fast, GPU accelerated Monte Carlo engine for simulating relevant photon interaction processes over the diagnostic energy range in third-generation CT systems was developed to study the relative contributions of bowtie and object scatter to the total scatter reaching an imaging detector. Primary and scattered projections for an elliptical water phantom (major axis set to 300 mm) with muscle and fat inserts were simulated for a typical diagnostic CT system as a function of anti-scatter grid (ASG) configurations. The ASG design space explored grid orientation, i.e., septa either (a) parallel or (b) parallel and perpendicular to the axis of rotation, as well as septa height. The septa material was tungsten. The resulting projections were reconstructed and the scatter-induced image degradation was quantified using common CT image metrics (such as Hounsfield Unit (HU) inaccuracy and loss in contrast), along with a qualitative review of image artifacts. Results indicate object scatter dominates total scatter in the detector channels under the shadow of the imaged object, with the bowtie scatter fraction progressively increasing towards the edges of the object projection. Object scatter was shown to be the driving factor behind HU inaccuracy and contrast reduction in the simulated images, while shading artifacts and elevated loss in HU accuracy at the object boundary were largely attributed to bowtie scatter. Because the impact of bowtie scatter could not be sufficiently mitigated with a large grid ratio ASG, algorithmic correction may be necessary to further mitigate these artifacts.
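    For readers unfamiliar with the HU-inaccuracy metric used above, the sketch below gives a first-order estimate of how a given scatter-to-primary ratio (SPR) biases the reconstructed CT number at the centre of a uniform water object; it ignores the reconstruction filter, bowtie shaping, and grid transmission, and every numerical value in it is an assumption rather than a result from this simulation study.

        # Hedged first-order estimate of scatter-induced HU bias.
        # With scatter S added to primary P, the measured line integral becomes
        #   p_meas = -ln((P + S) / I0) = p_true - ln(1 + SPR),
        # so the mean attenuation along a path of length d drops by ln(1 + SPR) / d.
        import math

        MU_WATER = 0.02   # 1/mm, rough water attenuation near 70 keV (assumed)

        def hu_bias(spr, path_length_mm):
            """Approximate HU error for a uniform water path of the given length."""
            delta_mu = -math.log(1.0 + spr) / path_length_mm   # 1/mm
            return 1000.0 * delta_mu / MU_WATER

        for spr in (0.05, 0.2, 0.5, 1.0):
            print(f"SPR = {spr:4.2f} -> ~{hu_bias(spr, 300.0):6.1f} HU over a 300 mm water path")
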

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magee, Thoman

    The Consolidated Edison, Inc., of New York (Con Edison) Secure Interoperable Open Smart Grid Demonstration Project (SGDP), sponsored by the United States (US) Department of Energy (DOE), demonstrated that the reliability, efficiency, and flexibility of the grid can be improved through a combination of enhanced monitoring and control capabilities using systems and resources that interoperate within a secure services framework. The project demonstrated the capability to shift, balance, and reduce load where and when needed in response to system contingencies or emergencies by leveraging controllable field assets. The range of field assets includes curtailable customer loads, distributed generation (DG), battery storage, electric vehicle (EV) charging stations, building management systems (BMS), home area networks (HANs), high-voltage monitoring, and advanced metering infrastructure (AMI). The SGDP enables the seamless integration and control of these field assets through a common, cyber-secure, interoperable control platform, which integrates a number of existing legacy control and data systems, as well as new smart grid (SG) systems and applications. By integrating advanced technologies for monitoring and control, the SGDP helps target and reduce peak load growth, improves the reliability and efficiency of Con Edison’s grid, and increases the ability to accommodate the growing use of distributed resources. Con Edison is dedicated to lowering costs, improving reliability and customer service, and reducing its impact on the environment for its customers. These objectives also align with the policy objectives of New York State as a whole. To help meet these objectives, Con Edison’s long-term vision for the distribution grid relies on the successful integration and control of a growing penetration of distributed resources, including demand response (DR) resources, battery storage units, and DG. For example, Con Edison is expecting significant long-term growth of DG. The SGDP enables the efficient, flexible integration of these disparate resources and lays the architectural foundations for future scalability. Con Edison assembled an SGDP team of more than 16 project partners, including technology vendors and participating organizations, with the Con Edison team providing overall guidance and project management. Project team members are listed in Table 1-1.

  4. Correlation of a scanning laser derived oedema index and visual function following grid laser treatment for diabetic macular oedema.

    PubMed

    Hudson, C; Flanagan, J G; Turner, G S; Chen, H C; Young, L B; McLeod, D

    2003-04-01

    To correlate change of an oedema index derived by scanning laser tomography with change of visual function in patients undergoing grid laser photocoagulation for clinically significant diabetic macular oedema (DMO). The sample comprised 24 diabetic patients with retinal thickening within 500 µm of the fovea. Inclusion criteria included a logMAR visual acuity of 0.25 or better. Patients were assessed twice before a single session of grid laser treatment and within 1 week of, and at 1, 2, 4, and 12 weeks after, treatment. At each visit, patients underwent logMAR visual acuity testing, conventional and short wavelength automated perimetry (SWAP), and scanning laser tomography. Each visual function parameter was correlated with the mean oedema index. The mean oedema index represented the z-profile signal width divided by the maximum reflectance intensity (arbitrary units). A Pearson correlation coefficient (Bonferroni corrected) was undertaken on the data set of each patient. Thirteen patients exhibited significant correlation of the mean oedema index and at least one measure of visual function for the 10 degrees x 10 degrees scan field, while 10 patients correlated for the 20 degrees x 20 degrees scan field. Seven patients demonstrated correlation for both scan fields. Laser photocoagulation typically resulted in an immediate loss of perimetric sensitivity whereas the oedema index changed over a period of weeks. Localised oedema did not impact upon visual acuity or letter contrast sensitivity when situated extrafoveally. Correlation of change of the oedema index and of visual function following grid laser photocoagulation was not found in all patients. An absence of correlation can be explained by the localised distribution of DMO in this sample of patients, as well as by differences in the time course of change of the oedema index and visual function. The study has objectively documented change in the magnitude and distribution of DMO following grid laser treatment and has established the relation of this change to the change in visual function.
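    The per-patient analysis can be illustrated with a short sketch: each visual-function series is correlated against the mean oedema index and the significance threshold is Bonferroni-adjusted for the number of measures tested. The series values and the set of measures below are placeholders, not data from this study.

        # Hedged sketch: Pearson correlation of visual-function measures with the
        # mean oedema index for one patient, Bonferroni corrected across measures.
        import numpy as np
        from scipy.stats import pearsonr

        oedema_index = np.array([1.8, 1.9, 1.7, 1.5, 1.3, 1.1, 1.0])   # arbitrary units, 7 visits
        measures = {                                                    # placeholder series
            "logMAR acuity":         np.array([0.10, 0.12, 0.14, 0.12, 0.10, 0.08, 0.06]),
            "perimetry (dB)":        np.array([28.0, 27.5, 25.0, 25.5, 26.5, 27.0, 27.5]),
            "SWAP sensitivity (dB)": np.array([22.0, 21.5, 20.0, 20.5, 21.0, 21.5, 22.0]),
        }

        alpha = 0.05 / len(measures)   # Bonferroni-corrected threshold
        for name, series in measures.items():
            r, p = pearsonr(oedema_index, series)
            verdict = "significant" if p < alpha else "not significant"
            print(f"{name:23s} r = {r:+.2f}, p = {p:.3f} ({verdict} at corrected alpha = {alpha:.3f})")
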

  5. Reference evapotranspiration from coarse-scale and dynamically downscaled data in complex terrain: Sensitivity to interpolation and resolution

    NASA Astrophysics Data System (ADS)

    Strong, Courtenay; Khatri, Krishna B.; Kochanski, Adam K.; Lewis, Clayton S.; Allen, L. Niel

    2017-05-01

    The main objective of this study was to investigate whether dynamically downscaled high resolution (4-km) climate data from the Weather Research and Forecasting (WRF) model provide physically meaningful additional information for reference evapotranspiration (E) calculation compared to the recently published GridET framework that uses interpolation from coarser-scale simulations run at 32-km resolution. The analysis focuses on the complex terrain of Utah in the western United States for years 1985-2010, and comparisons were made statewide with supplemental analyses specifically for regions with irrigated agriculture. E was calculated using the standardized equation and procedures proposed by the American Society of Civil Engineers from hourly data, and climate inputs from WRF and GridET were debiased relative to the same set of observations. For annual mean values, E from WRF (EW) and E from GridET (EG) both agreed well with E derived from observations (r2 = 0.95, bias < 2 mm). Domain-wide, EW and EG were well correlated spatially (r2 = 0.89); however, local differences ΔE = EW - EG were as large as +439 mm year-1 (+26%) in some locations, and ΔE averaged +36 mm year-1. After linearly removing the effects of contrasts in solar radiation and wind speed, which are characteristically less reliable under downscaling in complex terrain, approximately half the residual variance was accounted for by contrasts in temperature and humidity between GridET and WRF. These contrasts stemmed from GridET interpolating using an assumed lapse rate of Γ = 6.5 K km-1, whereas WRF produced a thermodynamically-driven lapse rate closer to 5 K km-1 as observed in mountainous terrain. The primary conclusions are that observed lapse rates in complex terrain differ markedly from the commonly assumed Γ = 6.5 K km-1, these lapse rates can be realistically resolved via dynamical downscaling, and use of a constant Γ produces differences in E on the order of 10^2 mm year-1.
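    The lapse-rate contrast identified above can be illustrated directly: the sketch below interpolates a station air temperature to a higher target elevation under the constant 6.5 K km-1 assumption and under a WRF-like rate of about 5 K km-1. The station temperature and elevations are assumed values used only to show the direction and rough size of the effect.

        # Hedged sketch: lapse-rate adjustment of air temperature to a target elevation.
        def interpolate_temperature(t_station_c, z_station_m, z_target_m, lapse_k_per_km):
            """Linear lapse-rate adjustment: T(z) = T(z_ref) - gamma * (z - z_ref)."""
            return t_station_c - lapse_k_per_km * (z_target_m - z_station_m) / 1000.0

        t_station, z_station, z_target = 20.0, 1300.0, 2300.0   # assumed values (deg C, m, m)
        t_constant = interpolate_temperature(t_station, z_station, z_target, 6.5)
        t_wrf_like = interpolate_temperature(t_station, z_station, z_target, 5.0)
        print(f"6.5 K/km: {t_constant:.1f} C, ~5 K/km: {t_wrf_like:.1f} C, "
              f"difference: {t_wrf_like - t_constant:+.1f} C")
        # The warmer interpolated temperature at high elevations under the smaller lapse
        # rate raises the vapour-pressure-deficit term and tends to increase computed E.
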

  6. Discerning spatial and temporal LAI and clear-sky FAPAR variability during summer at the Toolik Lake vegetation monitoring grid (North Slope, Alaska)

    NASA Astrophysics Data System (ADS)

    Heim, B.; Beamish, A. L.; Walker, D. A.; Epstein, H. E.; Sachs, T.; Chabrillat, S.; Buchhorn, M.; Prakash, A.

    2016-12-01

    Ground data for the validation of satellite-derived terrestrial Essential Climate Variables (ECVs) at high latitudes are sparse. Also for regional model evaluation (e.g. climate models, land surface models, permafrost models), we lack accurate ranges of terrestrial ground data and face the problem of a large mismatch in scale. Within the German research programs 'Regional Climate Change' (REKLIM) and the Environmental Mapping and Analysis Program (EnMAP), we conducted a study on ground data representativeness for vegetation-related variables within a monitoring grid at the Toolik Lake Long-Term Ecological Research station; the Toolik Lake station lies in the Kuparuk River watershed on the North Slope of the Brooks Range in Alaska. The Toolik Lake grid covers an area of 1 km2 containing eighty-five grid points spaced 100 meters apart. Moist acidic tussock tundra is the most dominant vegetation type within the grid. Eighty-five permanent 1 m2 plots were also established to be representative of the individual gridpoints. Researchers from the University of Alaska Fairbanks have undertaken assessments at these plots, including Leaf Area Index (LAI) measurements and field spectrometry to derive the Normalized Difference Vegetation Index (NDVI). During summer 2016, we conducted field spectrometry and LAI measurements at selected plots during early, peak and late summer. We experimentally measured LAI on more spatially extensive Elementary Sampling Units (ESUs) to investigate the spatial representativeness of the permanent 1 m2 plots and to map ESUs for various tundra types. LAI measurements are potentially influenced by landscape-inherent microtopography, sparse vascular plant cover, and dead woody matter. From field spectrometer measurements, we derived a clear-sky mid-day Fraction of Absorbed Photosynthetically Active Radiation (FAPAR). We will present the first data analyses comparing FAPAR and LAI, and maps of biophysically-focused ESUs for evaluation of the use of remote sensing data to estimate these ecosystem properties.
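    For reference, NDVI from field spectrometer data follows the standard band-ratio form; the reflectance values in this sketch are illustrative, not measurements from the Toolik grid.

        # Hedged sketch: NDVI from red and near-infrared surface reflectance.
        import numpy as np

        def ndvi(red, nir):
            """NDVI = (NIR - Red) / (NIR + Red), element-wise."""
            red, nir = np.asarray(red, float), np.asarray(nir, float)
            return (nir - red) / (nir + red)

        # Illustrative plot-level reflectances (fractions) for early, peak, and late summer.
        red = [0.08, 0.04, 0.06]
        nir = [0.30, 0.42, 0.33]
        print(np.round(ndvi(red, nir), 2))   # highest value at peak greenness
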

  7. Design to Improve Visibility: Impact of Corridor Width and Unit Shape.

    PubMed

    Hadi, Khatereh; Zimring, Craig

    2016-07-01

    This study analyzes 10 intensive care units (ICUs) to understand the associations between design features of space layout and nurse-to-patient visibility parameters. Previous studies have explored how different hospital units vary in their visibility relations and how such varied visibility relations result in different nurse behaviors toward patients. However, more limited research has examined the specific design attributes of the layouts that determine the varied visibility relations in the unit. Changes in size, geometry, or other attributes of design elements in nursing units, which might affect patient observation opportunities, require more research. This article reviews the literature to indicate evidence for the impact of hospital unit design on nurse/patient visibility relations and to identify design parameters shown to affect visibility. It further focuses on 10 ICUs to investigate how different layouts diverge regarding their visibility relations using a set of metrics developed by other researchers. Shape geometry and corridor width, as two selected design features, are compared. Corridor width and shape characteristics of ICUs are positively correlated with visibility. Results suggest that floor plans that are repeatedly broken down into smaller convex spaces (higher convex fragmentation values), or units that have longer distances between their rooms or between their two opposite ends (longer relative grid distances), might have lower visibility levels across the unit. The findings of this study also suggest that wider corridors positively affect visibility of patient rooms. Changes in overall shape configuration and corridor width of nursing units may have important effects on patient observation and monitoring opportunities. © The Author(s) 2016.

  8. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  9. The R package 'icosa' for coarse resolution global triangular and penta-hexagonal gridding

    NASA Astrophysics Data System (ADS)

    Kocsis, Adam T.

    2017-04-01

    With the development of the internet and the computational power of personal computers, open source programming environments have become indispensable for science in the past decade. This includes the growing GIS capacity of the free R environment, which was originally developed for statistical analyses. The flexibility of R made it a preferred programming tool in a multitude of disciplines across the biological and geological sciences. Many of these subdisciplines operate with incidence (occurrence) data that in many cases must be grained before further analyses can be conducted. This graining is executed mostly by gridding data to cells of a Gaussian grid of various resolutions to increase the density of data in a single unit of the analyses. This method has obvious shortcomings despite the ease of its application: well-known systematic biases are induced in cell sizes and shapes that can interfere with the results of statistical procedures, especially if the number of incidence points influences the metrics in question. The 'icosa' package employs a common method to overcome this obstacle by implementing grids with roughly equal cell sizes and shapes that are based on tessellated icosahedra. These grid objects are essentially polyhedra with xyz Cartesian vertex data that are linked to tables of faces and edges. At its current developmental stage, the package uses a single method of tessellation which balances grid cell size and shape distortions, but its structure allows the implementation of various other types of tessellation algorithms. The resolution of the grids can be set by the number of breakpoints inserted into a segment forming an edge of the original icosahedron. Both the triangular and their inverted penta-hexagonal grids are available for creation with the package. The package also incorporates functions to look up coordinates in the grid very effectively and data containers to link data to the grid structure. The classes defined in the package communicate with classes of the 'sp' and 'raster' packages, and functions are supplied that allow resolution change and type conversions. Three-dimensional rendering is made available with the 'rgl' package and two-dimensional projections can be calculated using 'sp' and 'rgdal'. The package was developed as part of a project funded by the Deutsche Forschungsgemeinschaft (KO - 5382/1-1).

  10. 76 FR 53449 - Northland Power Mississippi River LLC; Notice of Preliminary Permit Application Accepted for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ...) Up to 320 TREK generating units installed in a matrix on the bottom of the river; (2) the total... each matrix's power to a substation; and (4) a transmission line would interconnect with the power grid. The proposed...

  11. Wireless Sensor Network for Electric Transmission Line Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alphenaar, Bruce

    Generally, federal agencies tasked to oversee power grid reliability are dependent on data from grid infrastructure owners and operators in order to obtain a basic level of situational awareness. Since there are many owners and operators involved in the day-to-day functioning of the power grid, the task of accessing, aggregating and analyzing grid information from these sources is not a trivial one. Seemingly basic tasks such as synchronizing data timestamps between many different data providers and sources can be difficult as evidenced during the post-event analysis of the August 2003 blackout. In this project we investigate the efficacy and cost effectiveness of deploying a network of wireless power line monitoring devices as a method of independently monitoring key parts of the power grid as a complement to the data which is currently available to federal agencies from grid system operators. Such a network is modeled on proprietary power line monitoring technologies and networks invented, developed and deployed by Genscape, a Louisville, Kentucky based real-time energy information provider. Genscape measures transmission line power flow using measurements of electromagnetic fields under overhead high voltage transmission power lines in the United States and Europe. Opportunities for optimization of the commercial power line monitoring technology were investigated in this project to enable lower power consumption, lower cost and improvements to measurement methodologies. These optimizations were performed in order to better enable the use of wireless transmission line monitors in large network deployments (perhaps covering several thousand power lines) for federal situational awareness needs. Power consumption and cost reduction were addressed by developing a power line monitor using a low power, low cost wireless telemetry platform known as the 'Mote'. Motes were first developed as smart sensor nodes in wireless mesh networking applications. On such a platform, it has been demonstrated in this project that wireless monitoring units can effectively deliver real-time transmission line power flow information for less than $500 per monitor. The data delivered by such a monitor has during the course of the project been integrated with a national grid situational awareness visualization platform developed by Oak Ridge National Laboratory. Novel vibration energy scavenging methods based on piezoelectric cantilevers were also developed as a proposed method to power such monitors, with a goal of further cost reduction and large-scale deployment. Scavenging methods developed during the project resulted in 50% greater power output than conventional cantilever-based vibrational energy scavenging devices typically used to power smart sensor nodes. Lastly, enhanced and new methods for electromagnetic field sensing using multi-axis magnetometers and infrared reflectometry were investigated for potential monitoring applications in situations with a high density of power lines or high levels of background 60 Hz noise in order to isolate power lines of interest from other power lines in close proximity. The goal of this project was to investigate and demonstrate the feasibility of using small form factor, highly optimized, low cost, low power, non-contact, wireless electric transmission line monitors for delivery of real-time, independent power line monitoring for the US power grid. 
The project was divided into three main types of activity as follows: (1) Research into expanding the range of applications for non-contact power line monitoring to enable large scale low cost sensor network deployments (Tasks 1, 2); (2) Optimization of individual sensor hardware components to reduce size, cost and power consumption and testing in a pilot field study (Tasks 3, 5); and (3) Demonstration of the feasibility of using the data from the network of power line monitors via a range of custom developed alerting and data visualization applications to deliver real-time information to federal agencies and others tasked with grid reliability (Tasks 6, 8).
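    As background on the sensing principle described above (inferring line current, and hence power flow, from the magnetic field measured beneath a line), a minimal sketch using the long-straight-conductor approximation follows; the field value, sensor-to-conductor distance, and line voltage are invented, and a real monitor must account for multiple conductors, phase geometry, sag, and background fields, all of which this sketch ignores.

        # Hedged sketch: first-order current estimate from the magnetic flux density
        # measured beneath a single long straight conductor:
        #   B = mu0 * I / (2 * pi * r)   ->   I = 2 * pi * r * B / mu0
        import math

        MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

        def line_current_amps(b_tesla, distance_m):
            return 2.0 * math.pi * distance_m * b_tesla / MU0

        b_measured = 2.0e-6   # T (2 uT), assumed field magnitude at the sensor
        distance = 15.0       # m, assumed conductor-to-sensor distance
        current = line_current_amps(b_measured, distance)
        apparent_power_mva = math.sqrt(3) * 230e3 * current / 1e6   # assumed 230 kV, balanced three-phase
        print(f"estimated current ~ {current:.0f} A, apparent power ~ {apparent_power_mva:.0f} MVA")
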

  12. Ground-Water Quality Data in the Coachella Valley Study Unit, 2007: Results from the California GAMA Program

    USGS Publications Warehouse

    Goldrath, Dara A.; Wright, Michael T.; Belitz, Kenneth

    2009-01-01

    Ground-water quality in the approximately 820 square-mile Coachella Valley Study Unit (COA) was investigated during February and March 2007 as part of the Priority Basin Project of the Groundwater Ambient Monitoring and Assessment (GAMA) Program. The GAMA Priority Basin Project was developed in response to the Groundwater Quality Monitoring Act of 2001, and is being conducted by the U.S. Geological Survey (USGS) in cooperation with the California State Water Resources Control Board (SWRCB). The study was designed to provide a spatially unbiased assessment of raw ground water used for public-water supplies within the Coachella Valley, and to facilitate statistically consistent comparisons of ground-water quality throughout California. Samples were collected from 35 wells in Riverside County. Nineteen of the wells were selected using a spatially distributed, randomized grid-based method to provide statistical representation of the study unit (grid wells). Sixteen additional wells were sampled to evaluate changes in water chemistry along selected ground-water flow paths, examine land use effects on ground-water quality, and to collect water-quality data in areas where little exists. These wells were referred to as 'understanding wells'. The ground-water samples were analyzed for a large number of organic constituents (volatile organic compounds [VOC], pesticides and pesticide degradates, pharmaceutical compounds, and potential wastewater-indicator compounds), constituents of special interest (perchlorate and 1,2,3-trichloropropane [1,2,3-TCP]), naturally occurring inorganic constituents (nutrients, major and minor ions, and trace elements), radioactive constituents, and microbial indicators. Naturally occurring isotopes (uranium, tritium, carbon-14, and stable isotopes of hydrogen, oxygen, and boron), and dissolved noble gases (the last in collaboration with Lawrence Livermore National Laboratory) also were measured to help identify the source and age of the sampled ground water. A quality-control sample (blank, replicate, or matrix spike) was collected at approximately one quarter of the wells, and the results for these samples were used to evaluate the quality of the data for the ground-water samples. Assessment of the quality-control information resulted in V-coding less than 0.1 percent of the data collected. This study did not attempt to evaluate the quality of water delivered to consumers; after withdrawal from the ground, water typically is treated, disinfected, and (or) blended with other waters to maintain acceptable water quality. Regulatory thresholds apply to treated water that is supplied to the consumer, not to raw ground water. However, to provide some context for the results, concentrations of constituents measured in the raw ground water were compared with health-based thresholds established by the U.S. Environmental Protection Agency (USEPA) and the California Department of Public Health (CDPH) and thresholds established for aesthetic purposes (secondary maximum contaminant levels, SMCL-CA) by CDPH. Most constituents detected in ground-water samples were at concentrations below drinking-water thresholds. Volatile organic compounds, pesticides, and pesticide degradates were detected in less than one-third of the grid well samples collected. All VOC and pesticide concentrations measured were below health-based thresholds. Potential waste-water indicators were detected in less than half of the wells sampled, and no detections were above health-based thresholds. 
Perchlorate was detected in seven grid wells; concentrations from two wells were above the CDPH maximum contaminant level (MCL-CA). Most detections of trace elements in samples collected from COA Study Unit wells were below water-quality thresholds. Exceptions include five samples of arsenic that were above the USEPA maximum contaminant level (MCL-US), two detections of boron above the CDPH notification level (NL-CA), and two detections of mol

  13. caGrid 1.0: An Enterprise Grid Infrastructure for Biomedical Research

    PubMed Central

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    Objective To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. Design An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG™) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. Measurements The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. Results The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. Conclusions While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community. PMID:18096909

  14. caGrid 1.0: an enterprise Grid infrastructure for biomedical research.

    PubMed

    Oster, Scott; Langella, Stephen; Hastings, Shannon; Ervin, David; Madduri, Ravi; Phillips, Joshua; Kurc, Tahsin; Siebenlist, Frank; Covitz, Peter; Shanbhag, Krishnakant; Foster, Ian; Saltz, Joel

    2008-01-01

    To develop software infrastructure that will provide support for discovery, characterization, integrated access, and management of diverse and disparate collections of information sources, analysis methods, and applications in biomedical research. An enterprise Grid software infrastructure, called caGrid version 1.0 (caGrid 1.0), has been developed as the core Grid architecture of the NCI-sponsored cancer Biomedical Informatics Grid (caBIG) program. It is designed to support a wide range of use cases in basic, translational, and clinical research, including 1) discovery, 2) integrated and large-scale data analysis, and 3) coordinated study. The caGrid is built as a Grid software infrastructure and leverages Grid computing technologies and the Web Services Resource Framework standards. It provides a set of core services, toolkits for the development and deployment of new community provided services, and application programming interfaces for building client applications. The caGrid 1.0 was released to the caBIG community in December 2006. It is built on open source components and caGrid source code is publicly and freely available under a liberal open source license. The core software, associated tools, and documentation can be downloaded from the following URL: https://cabig.nci.nih.gov/workspaces/Architecture/caGrid. While caGrid 1.0 is designed to address use cases in cancer research, the requirements associated with discovery, analysis and integration of large scale data, and coordinated studies are common in other biomedical fields. In this respect, caGrid 1.0 is the realization of a framework that can benefit the entire biomedical community.

  15. The Potential Role of Grid-Like Software in Bedside Chest Radiography in Improving Image Quality and Dose Reduction: An Observer Preference Study.

    PubMed

    Ahn, Su Yeon; Chae, Kum Ju; Goo, Jin Mo

    2018-01-01

    To compare the observer preference of image quality and radiation dose between non-grid, grid-like, and grid images. Each of the 38 patients underwent bedside chest radiography with and without a grid. A grid-like image was generated from a non-grid image using SimGrid software (Samsung Electronics Co. Ltd.) employing deep-learning-based scatter correction technology. Two readers recorded the preference for 10 anatomic landmarks and the overall appearance on a five-point scale for a pair of non-grid and grid-like images, and a pair of grid-like and grid images, respectively, which were randomly presented. The dose area product (DAP) was also recorded. Wilcoxon's rank sum test was used to assess the significance of preference. Both readers significantly preferred grid-like images to non-grid images (p < 0.001); preference for grid images over grid-like images was not significant for one reader (p = 0.317) but was significant for the other (p = 0.034). In terms of anatomic landmarks, both readers preferred grid-like images to non-grid images (p < 0.05). No significant differences existed between grid-like and grid images except for the preference for grid images in proximal airways by both readers, and in retrocardiac lung and thoracic spine by one reader. The median DAP was 1.48 (range, 1.37-2.17) dGy*cm2 for grid images and 1.22 (range, 1.11-1.78) dGy*cm2 for grid-like images, a significant difference (p < 0.001). The SimGrid software significantly improved the image quality of non-grid images to a level comparable to that of grid images with a relatively lower level of radiation exposure.

  16. Carbon Dioxide Emissions Effects of Grid-Scale Electricity Storage in a Decarbonizing Power System

    DOE PAGES

    Craig, Michael T.; Jaramillo, Paulina; Hodge, Bri-Mathias

    2018-01-03

    While grid-scale electricity storage (hereafter 'storage') could be crucial for deeply decarbonizing the electric power system, it would increase carbon dioxide (CO2) emissions in current systems across the United States. To better understand how storage transitions from increasing to decreasing system CO2 emissions, we quantify the effect of storage on operational CO2 emissions as a power system decarbonizes under a moderate and strong CO2 emission reduction target through 2045. Under each target, we compare the effect of storage on CO2 emissions when storage participates in only energy, only reserve, and energy and reserve markets. We conduct our study in the Electricity Reliability Council of Texas (ERCOT) system and use a capacity expansion model to forecast generator fleet changes and a unit commitment and economic dispatch model to quantify system CO2 emissions with and without storage. We find that storage would increase CO2 emissions in the current ERCOT system, but would decrease CO2 emissions in 2025 through 2045 under both decarbonization targets. Storage reduces CO2 emissions primarily by enabling gas-fired generation to displace coal-fired generation, but also by reducing wind and solar curtailment. We further find that the market in which storage participates drives large differences in the magnitude, but not the direction, of the effect of storage on CO2 emissions.
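    The direction-of-effect argument above (storage charging from coal-heavy off-peak generation today, but from otherwise-curtailed renewables in a decarbonized fleet, while displacing gas at peak) can be made concrete with a toy two-period calculation; the emission rates, round-trip efficiency, and marginal-unit assignments below are invented for illustration and bear no relation to the ERCOT capacity-expansion and unit-commitment models used in the study.

        # Hedged toy calculation: CO2 effect of shifting 100 MWh from off-peak to peak.
        # Charging adds load served by the marginal off-peak unit; discharging displaces
        # the marginal peak unit. All numbers are illustrative assumptions.
        EMISSION_T_PER_MWH = {"wind": 0.0, "coal": 1.0, "gas": 0.4}
        ROUND_TRIP_EFF = 0.85

        def storage_co2_delta(offpeak_marginal, peak_marginal, discharge_mwh=100.0):
            """Tonnes of CO2 added (+) or avoided (-) by one charge/discharge cycle."""
            charge_mwh = discharge_mwh / ROUND_TRIP_EFF
            added = charge_mwh * EMISSION_T_PER_MWH[offpeak_marginal]
            avoided = discharge_mwh * EMISSION_T_PER_MWH[peak_marginal]
            return added - avoided

        # Today-like fleet: coal on the margin off-peak, gas on the margin at peak.
        print("coal -> gas:", round(storage_co2_delta("coal", "gas"), 1), "t CO2 (emissions rise)")
        # Decarbonizing fleet: otherwise-curtailed wind charges storage, gas is displaced.
        print("wind -> gas:", round(storage_co2_delta("wind", "gas"), 1), "t CO2 (emissions fall)")
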

  17. Carbon Dioxide Emissions Effects of Grid-Scale Electricity Storage in a Decarbonizing Power System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craig, Michael T.; Jaramillo, Paulina; Hodge, Bri-Mathias

    While grid-scale electricity storage (hereafter 'storage') could be crucial for deeply decarbonizing the electric power system, it would increase carbon dioxide (CO2) emissions in current systems across the United States. To better understand how storage transitions from increasing to decreasing system CO2 emissions, we quantify the effect of storage on operational CO2 emissions as a power system decarbonizes under a moderate and strong CO2 emission reduction target through 2045. Under each target, we compare the effect of storage on CO2 emissions when storage participates in only energy, only reserve, and energy and reserve markets. We conduct our study in the Electricity Reliability Council of Texas (ERCOT) system and use a capacity expansion model to forecast generator fleet changes and a unit commitment and economic dispatch model to quantify system CO2 emissions with and without storage. We find that storage would increase CO2 emissions in the current ERCOT system, but would decrease CO2 emissions in 2025 through 2045 under both decarbonization targets. Storage reduces CO2 emissions primarily by enabling gas-fired generation to displace coal-fired generation, but also by reducing wind and solar curtailment. We further find that the market in which storage participates drives large differences in the magnitude, but not the direction, of the effect of storage on CO2 emissions.

  18. Under the Weather: Space Weather. The Magnetic Field of the Heliosphere

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron; Goldstein, Melvyn

    2000-01-01

    Normally, only people in the far north can enjoy the dancing beauty of the aurora borealis; however, an intense collision of charged solar particles with the Earth's magnetic field can magnify the Northern Lights so much that they are visible in the southern United States. Behind the light show lies enough flux of energetic particles carried by solar wind to render our planet uninhabitable. The Earth's magnetic field, also known as the magnetosphere, is the only thing that shields us from the Sun. Even the magnetosphere cannot fully guard us from the wrath of the Sun. In March 1989, a powerful solar flare hit Earth with such energy that it burned out transformers in Quebec's electrical grid, plunging Quebec and the eastern United States into darkness for more than 9 hours. Northern lights and energy grid overloads are not the only ways that a solar wind can affect us. A solar storm in July 1999 interrupted radio broadcasts. Solar activity can disorient radars and satellite sensors, break up cell phone connections, and threaten the safety of astronauts. A large bombardment of solar particles can even reduce the amount of ozone in the upper atmosphere. Magnetohydrodynamics (MHD), the study of magnetic fields in magnetized plasmas, can help scientists predict, and therefore prepare for, the harmful side effects of solar weather in the magnetosphere.

  19. Carbon dioxide emissions effects of grid-scale electricity storage in a decarbonizing power system

    NASA Astrophysics Data System (ADS)

    Craig, Michael T.; Jaramillo, Paulina; Hodge, Bri-Mathias

    2018-01-01

    While grid-scale electricity storage (hereafter ‘storage’) could be crucial for deeply decarbonizing the electric power system, it would increase carbon dioxide (CO2) emissions in current systems across the United States. To better understand how storage transitions from increasing to decreasing system CO2 emissions, we quantify the effect of storage on operational CO2 emissions as a power system decarbonizes under a moderate and strong CO2 emission reduction target through 2045. Under each target, we compare the effect of storage on CO2 emissions when storage participates in only energy, only reserve, and energy and reserve markets. We conduct our study in the Electricity Reliability Council of Texas (ERCOT) system and use a capacity expansion model to forecast generator fleet changes and a unit commitment and economic dispatch model to quantify system CO2 emissions with and without storage. We find that storage would increase CO2 emissions in the current ERCOT system, but would decrease CO2 emissions in 2025 through 2045 under both decarbonization targets. Storage reduces CO2 emissions primarily by enabling gas-fired generation to displace coal-fired generation, but also by reducing wind and solar curtailment. We further find that the market in which storage participates drives large differences in the magnitude, but not the direction, of the effect of storage on CO2 emissions.

  20. School Finance and Technology: A Case Study Using Grid and Group Theory to Explore the Connections

    ERIC Educational Resources Information Center

    Case, Stephoni; Harris, Edward L.

    2014-01-01

    Using grid and group theory (Douglas 1982, 2011), the study described in this article examined the intersections of technology and school finance in four schools located in districts differing in size, wealth, and commitment to technology integration. In grid and group theory, grid refers to the degree to which policies and role prescriptions…

  1. Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio

    NASA Technical Reports Server (NTRS)

    Thomas, James

    2008-01-01

    Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter GAMMA characterizing accuracy on curved high-aspect-ratio grids is discussed, and an approximate-mapped-least-square method using a commonly available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
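
    As a concrete illustration of the kind of reconstruction studied above, the sketch below computes a least-squares gradient at a cell center from neighboring values, with optional inverse-distance weighting. It is a generic textbook formulation, not the paper's specific schemes; the stencil, weights, and test function are invented for the example.

      # Generic least-squares gradient reconstruction on a high-aspect-ratio stencil.
      # weight_power=0 is unweighted LSQ; weight_power=1 weights rows by 1/distance,
      # which is often more robust when spacings differ by orders of magnitude.
      import numpy as np

      def ls_gradient(xc, x_nbrs, u_c, u_nbrs, weight_power=0):
          dx = x_nbrs - xc                      # (n, 2) offsets to neighbors
          du = u_nbrs - u_c                     # (n,) value differences
          w = 1.0 / np.linalg.norm(dx, axis=1) ** weight_power
          grad, *_ = np.linalg.lstsq(dx * w[:, None], du * w, rcond=None)
          return grad

      # Hypothetical stencil with ~1 spacing in x and ~1e-3 in y, sampling u = 2x + 5y.
      xc = np.array([0.0, 0.0])
      x_nbrs = np.array([[1.0, 0.0], [-1.0, 0.0], [0.1, 1e-3], [-0.1, -1e-3]])
      u = lambda p: 2.0 * p[..., 0] + 5.0 * p[..., 1]
      print(ls_gradient(xc, x_nbrs, u(xc), u(x_nbrs), weight_power=1))   # ~[2. 5.]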

  2. Grid-Optimization Program for Photovoltaic Cells

    NASA Technical Reports Server (NTRS)

    Daniel, R. E.; Lee, T. S.

    1986-01-01

    The CELLOPT program was developed to assist in designing the grid pattern of current-conducting material on a photovoltaic cell. It analyzes the parasitic resistance losses and the shadow loss associated with a metallized grid pattern on both round and rectangular solar cells. Though it can perform sensitivity studies, it is used primarily to optimize the grid design in terms of bus bars and grid lines by minimizing power loss. CELLOPT is written in APL.
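
    CELLOPT itself is written in APL; as a rough illustration of the trade-off it optimizes, the sketch below balances a standard estimate of lateral sheet-resistance loss against grid-line shadowing for an assumed cell. All parameter values and the loss model are illustrative, not CELLOPT's.

      # More grid lines lower the lateral (sheet-resistance) loss but shade more of
      # the cell; the optimum minimizes the sum. Values below are placeholders.
      import numpy as np

      L = 10.0           # cell width, cm
      rho_sheet = 100.0  # emitter sheet resistance, ohm/square
      w_line = 0.01      # grid-line width, cm
      J = 0.035          # operating current density, A/cm^2
      V = 0.5            # operating voltage, V

      def fractional_power_loss(n_lines):
          s = L / n_lines                                   # grid-line spacing, cm
          resistive = rho_sheet * J * s**2 / (12.0 * V)     # classic sheet-loss estimate
          shadow = n_lines * w_line / L                     # shaded area fraction
          return resistive + shadow

      n = np.arange(2, 200)
      losses = np.array([fractional_power_loss(k) for k in n])
      best = n[np.argmin(losses)]
      print(f"optimum ~{best} lines, total fractional loss ~{losses.min():.3f}")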

  3. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    ERIC Educational Resources Information Center

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  4. Evaluation of truncation error and adaptive grid generation for the transonic full potential flow calculations

    NASA Technical Reports Server (NTRS)

    Nakamura, S.

    1983-01-01

    The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effects of adapting grid point distributions to various solution aspects, including shock waves, are also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution adaptive grid application, an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.

  5. Studies of dished accelerator grids for 30-cm ion thrusters

    NASA Technical Reports Server (NTRS)

    Rawlin, V. K.

    1973-01-01

    Eighteen geometrically different sets of dished accelerator grids were tested on five 30-cm thrusters. The geometric variation of the grids included the grid-to-grid spacing, the screen and accelerator hole diameters and thicknesses, the screen and accelerator open area fractions, ratio of dish depth to dish diameter, compensation, and aperture shape. In general, the data taken over a range of beam currents for each grid set included the minimum total accelerating voltage required to extract a given beam current and the minimum accelerator grid voltage required to prevent electron backstreaming.

  6. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far field sourcing, and far field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid and pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  7. 3D interactive forward and inversion gravity modelling at different scales: From subduction zone modelling to cavity detection.

    NASA Astrophysics Data System (ADS)

    Götze, Hans-Jürgen; Schmidt, Sabine

    2014-05-01

    Modern geophysical interpretation requires an interdisciplinary approach, particularly when considering the available amount of 'state of the art' information. A combination of different geophysical surveys employing seismic, gravity and EM, together with geological and petrological studies, can provide new insights into the structures and tectonic evolution of the lithosphere, natural deposits and underground cavities. Interdisciplinary interpretation is essential for any numerical modelling of these structures and the processes acting on them. Interactive gravity and magnetic modeling can play an important role in the depth imaging workflow of complex projects. The integration of the workflow and the tools is important to meet the needs of today's more interactive and interpretative depth imaging workflows. For the integration of gravity and magnetic models the software IGMAS+ can play an important role in this workflow. For simplicity the focus is on gravity modeling, but all methods can be applied to the modeling of magnetic data as well. Currently there are three common ways to define a 3D gravity model. Grid-based models: grids define the different geological units. The densities of the geological units are constant. Additional grids can be introduced to subdivide the geological units, making it possible to represent density-depth relations. Polyhedral models: the interfaces between different geological units are defined by polyhedra, typically triangles. Voxel models: each voxel in a regular cube has a density assigned. Spherical Earth modeling: geophysical investigations may cover huge areas of several thousand square kilometers. The depression of the earth's surface due to the curvature of the Earth is 3 km at a distance of 200 km and 20 km at a distance of 500 km. Interactive inversion: inversion is typically done in batch, where constraints are defined beforehand and then, after a few minutes or hours, a model fitting the data and constraints is generated. As examples I show results from the Central Andes and the North Sea. Both gravity and geoid of the two areas were investigated with regard to their isostatic state, the crustal density structure and rigidity of the lithosphere. Modern satellite measurements of the recent ESA campaigns are compared to ground observations in the region. Estimates of stress and GPE (gravitational potential energy) at the western South American margin have been derived from an existing 3D density model. Here, sensitivity studies of gravity and gravity gradients indicate that short wavelength lithospheric structures are more pronounced in the gravity gradient tensor than in the gravity field. A medium size example of the North Sea underground demonstrates how interdisciplinary data sets can support aerogravity investigations. At the micro scale an example from the detection of a crypt (Alversdorf, Northern Germany) is shown.
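
    The curvature figures quoted above follow from spherical geometry. Assuming a mean Earth radius of 6371 km, the drop below a local tangent plane at horizontal distance x is R - sqrt(R^2 - x^2), approximately x^2/(2R); a quick check reproduces the quoted values:

      # Quick check of the quoted curvature "depression", assuming R = 6371 km.
      import math

      R = 6371.0  # km
      for x in (200.0, 500.0):
          drop = R - math.sqrt(R**2 - x**2)
          print(f"distance {x:.0f} km -> depression {drop:.1f} km "
                f"(~x^2/2R = {x**2 / (2 * R):.1f} km)")
      # Gives ~3.1 km at 200 km and ~19.7 km at 500 km, matching the figures above.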

  8. Monte Carlo study of the effects of system geometry and antiscatter grids on cone-beam CT scatter distributions

    PubMed Central

    Sisniega, A.; Zbijewski, W.; Badal, A.; Kyprianou, I. S.; Stayman, J. W.; Vaquero, J. J.; Siewerdsen, J. H.

    2013-01-01

    Purpose: The proliferation of cone-beam CT (CBCT) has created interest in performance optimization, with x-ray scatter identified among the main limitations to image quality. CBCT often contends with elevated scatter, but the wide variety of imaging geometry in different CBCT configurations suggests that not all configurations are affected to the same extent. Graphics processing unit (GPU) accelerated Monte Carlo (MC) simulations are employed over a range of imaging geometries to elucidate the factors governing scatter characteristics, efficacy of antiscatter grids, guide system design, and augment development of scatter correction. Methods: A MC x-ray simulator implemented on GPU was accelerated by inclusion of variance reduction techniques (interaction splitting, forced scattering, and forced detection) and extended to include x-ray spectra and analytical models of antiscatter grids and flat-panel detectors. The simulator was applied to small animal (SA), musculoskeletal (MSK) extremity, otolaryngology (Head), breast, interventional C-arm, and on-board (kilovoltage) linear accelerator (Linac) imaging, with an axis-to-detector distance (ADD) of 5, 12, 22, 32, 60, and 50 cm, respectively. Each configuration was modeled with and without an antiscatter grid and with (i) an elliptical cylinder varying 70–280 mm in major axis; and (ii) digital murine and anthropomorphic models. The effects of scatter were evaluated in terms of the angular distribution of scatter incident upon the detector, scatter-to-primary ratio (SPR), artifact magnitude, contrast, contrast-to-noise ratio (CNR), and visual assessment. Results: Variance reduction yielded improvements in MC simulation efficiency ranging from ∼17-fold (for SA CBCT) to ∼35-fold (for Head and C-arm), with the most significant acceleration due to interaction splitting (∼6 to ∼10-fold increase in efficiency). The benefit of a more extended geometry was evident by virtue of a larger air gap—e.g., for a 16 cm diameter object, the SPR reduced from 1.5 for ADD = 12 cm (MSK geometry) to 1.1 for ADD = 22 cm (Head) and to 0.5 for ADD = 60 cm (C-arm). Grid efficiency was higher for configurations with shorter air gap due to a broader angular distribution of scattered photons—e.g., scatter rejection factor ∼0.8 for MSK geometry versus ∼0.65 for C-arm. Grids reduced cupping for all configurations but had limited improvement on scatter-induced streaks and resulted in a loss of CNR for the SA, Breast, and C-arm. Relative contribution of forward-directed scatter increased with a grid (e.g., Rayleigh scatter fraction increasing from ∼0.15 without a grid to ∼0.25 with a grid for the MSK configuration), resulting in scatter distributions with greater spatial variation (the form of which depended on grid orientation). Conclusions: A fast MC simulator combining GPU acceleration with variance reduction provided a systematic examination of a range of CBCT configurations in relation to scatter, highlighting the magnitude and spatial uniformity of individual scatter components, illustrating tradeoffs in CNR and artifacts and identifying the system geometries for which grids are more beneficial (e.g., MSK) from those in which an extended geometry is the better defense (e.g., C-arm head imaging). Compact geometries with an antiscatter grid challenge assumptions of slowly varying scatter distributions due to increased contribution of Rayleigh scatter. PMID:23635285

  9. Reliability analysis in interdependent smart grid systems

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong

    2018-06-01

    Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, the interactions and relationships among its parts, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the effect of cascading failures and present a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of our proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results also show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse. We also determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
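
    As a rough, hedged sketch of a percolation-style analysis of this kind (not the paper's exact model), the code below applies random failures to two interdependent random networks and cascades the requirement that a surviving node lie in the giant component of both; the network sizes and parameters are invented.

      # Toy interdependent-network cascade: a node survives only if it sits in the
      # giant component of its own network and its counterpart also survives.
      import random
      import networkx as nx

      def giant_fraction_after_attack(n=2000, k=4, p_fail=0.3, rounds=20, seed=1):
          random.seed(seed)
          A = nx.erdos_renyi_graph(n, k / n, seed=seed)
          B = nx.erdos_renyi_graph(n, k / n, seed=seed + 1)
          alive = set(random.sample(range(n), int((1 - p_fail) * n)))  # initial failures
          for _ in range(rounds):                      # iterate the cascade
              for G in (A, B):
                  sub = G.subgraph(alive)
                  if sub.number_of_nodes() == 0:
                      return 0.0
                  giant = max(nx.connected_components(sub), key=len)
                  alive = alive & set(giant)           # must be in both giants
          return len(alive) / n

      for p in (0.1, 0.3, 0.5, 0.7):
          print(p, giant_fraction_after_attack(p_fail=p))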

  10. U.S. Army War College Key Strategic Issues List (KSIL) 2012-2013

    DTIC Science & Technology

    2012-08-01

    ...modified and what DOTMLPF changes would be needed? What are the impacts of changes in the local economy on a local NG unit? Do changes in the economy have a major effect on the unit and the National Guard as a whole at the state and/or federal level? (U.S. Army Reserve, Office of the Chief) ... movement to a DoD-wide cloud architecture (the Joint Information Environment), which would allow the repurposing of the Global Information Grid (GIG)...

  11. Phase II Testing at a Prehistoric Site (32BA418) at Lake Ashtabula (Sheyenne River) Barnes County, North Dakota.

    DTIC Science & Technology

    1984-01-01

    Report documentation and figure-list excerpt: Phase II testing at site 32BA418, Barnes County, North Dakota (final report). Listed figures include a contour map of 32BA418 showing locations of auger test units, 1 m2 test units, the cutbank profile (A-A') and the grid system; physiographic subdivisions of North Dakota; vegetation zones of North Dakota; and the Great Plains...

  12. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good-quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise control over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.
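
    An illustrative two-boundary construction in the spirit of the algebraic methods above (not GRID3D-v2 itself) blends two boundary curves with a hyperbolic-tangent stretching function to cluster points near one wall; the geometry and parameters below are made up for the example.

      # Algebraic 2D grid between two boundary curves via bilinear (two-boundary)
      # blending, with tanh stretching toward the lower wall.
      import numpy as np

      def stretch(s, beta=2.5):
          # Map uniform s in [0,1] to spacing clustered near s = 0.
          return 1.0 - np.tanh(beta * (1.0 - s)) / np.tanh(beta)

      def two_boundary_grid(lower, upper, ni=41, nj=21, beta=2.5):
          xi = np.linspace(0.0, 1.0, ni)
          eta = stretch(np.linspace(0.0, 1.0, nj), beta)   # clustered toward lower wall
          lo = np.array([lower(x) for x in xi])            # (ni, 2) lower boundary
          up = np.array([upper(x) for x in xi])            # (ni, 2) upper boundary
          return ((1.0 - eta)[None, :, None] * lo[:, None, :]
                  + eta[None, :, None] * up[:, None, :])   # (ni, nj, 2) grid

      # Example: channel between a bumped lower wall and a flat upper wall.
      grid = two_boundary_grid(lambda x: (x, 0.1 * np.sin(np.pi * x)),
                               lambda x: (x, 1.0))
      print(grid.shape)       # (41, 21, 2)
      print(grid[0, :3, 1])   # first wall-normal coordinates show the clustering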

  13. U.S. Laws and Regulations for Renewable Energy Grid Interconnections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chernyakhovskiy, Ilya; Tian, Tian; McLaren, Joyce

    Rapidly declining costs of wind and solar energy technologies, increasing concerns about the environmental and climate change impacts of fossil fuels, and sustained investment in renewable energy projects all point to a not-so-distant future in which renewable energy plays a pivotal role in the electric power system of the 21st century. In light of public pressures and market factors that hasten the transition towards a low-carbon system, power system planners and regulators are preparing to integrate higher levels of variable renewable generation into the grid. Updating the regulations that govern generator interconnections and operations is crucial to ensure system reliability while creating an enabling environment for renewable energy development. This report presents a chronological review of energy laws and regulations concerning grid interconnection procedures in the United States, highlighting the consequences of policies for renewable energy interconnections. Where appropriate, this report places interconnection policies and their impacts on renewable energy within the broader context of power market reform.

  14. CRT--Cascade Routing Tool to define and visualize flow paths for grid-based watershed models

    USGS Publications Warehouse

    Henson, Wesley R.; Medina, Rose L.; Mayers, C. Justin; Niswonger, Richard G.; Regan, R.S.

    2013-01-01

    The U.S. Geological Survey Cascade Routing Tool (CRT) is a computer application for watershed models that include the coupled Groundwater and Surface-water FLOW model, GSFLOW, and the Precipitation-Runoff Modeling System (PRMS). CRT generates output to define cascading surface and shallow subsurface flow paths for grid-based model domains. CRT requires a land-surface elevation for each hydrologic response unit (HRU) of the model grid; these elevations can be derived from a Digital Elevation Model raster data set of the area containing the model domain. Additionally, a list is required of the HRUs containing streams, swales, lakes, and other cascade termination features along with indices that uniquely define these features. Cascade flow paths are determined from the altitudes of each HRU. Cascade paths can cross any of the four faces of an HRU to a stream or to a lake within or adjacent to an HRU. Cascades can terminate at a stream, lake, or HRU that has been designated as a watershed outflow location.
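
    A minimal sketch of the cascade idea described above (not the USGS CRT code): each HRU routes flow across one of its four faces to its lowest adjacent neighbor, and a cascade terminates at a cell flagged as a stream, lake, or outflow. The elevations and flags below are made up.

      # Toy four-face cascade routing from HRU land-surface elevations.
      import numpy as np

      def cascade_links(elev, terminal):
          # elev: 2D array of HRU elevations; terminal: True where cascades end.
          nr, nc = elev.shape
          links = {}
          for r in range(nr):
              for c in range(nc):
                  if terminal[r, c]:
                      continue
                  nbrs = [(r + dr, c + dc)
                          for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= r + dr < nr and 0 <= c + dc < nc]
                  lowest = min(nbrs, key=lambda rc: elev[rc])
                  if elev[lowest] < elev[r, c]:      # only cascade downhill
                      links[(r, c)] = lowest
          return links

      elev = np.array([[5.0, 4.0, 3.0],
                       [4.5, 3.5, 2.0],
                       [4.0, 3.0, 1.0]])
      terminal = np.zeros_like(elev, dtype=bool)
      terminal[2, 2] = True                          # e.g. a stream cell
      print(cascade_links(elev, terminal))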

  15. Technology advances needed for photovoltaics to achieve widespread grid price parity: Widespread grid price parity for photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones-Albertus, Rebecca; Feldman, David; Fu, Ran

    2016-04-20

    To quantify the potential value of technological advances to the photovoltaics (PV) sector, this paper examines the impact of changes to key PV module and system parameters on the levelized cost of energy (LCOE). The parameters selected include module manufacturing cost, efficiency, degradation rate, and service lifetime. NREL's System Advisor Model (SAM) is used to calculate the lifecycle cost per kilowatt-hour (kWh) for residential, commercial, and utility scale PV systems within the contiguous United States, with a focus on utility scale. Different technological pathways are illustrated that may achieve the Department of Energy's SunShot goal of PV electricity that is at grid price parity with conventional electricity sources. In addition, the impacts on the 2015 baseline LCOE due to changes to each parameter are shown. These results may be used to identify research directions with the greatest potential to impact the cost of PV electricity.
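
    The LCOE framing above can be illustrated with a simplified calculation: discounted lifetime cost divided by discounted, degradation-adjusted lifetime energy. SAM models far more detail (financing, taxes, incentives), and every number below is a placeholder rather than a SunShot or SAM value.

      # Simplified LCOE = discounted cost / discounted energy, in $/kWh.
      def lcoe(capex_per_kw, opex_per_kw_yr, cf, degradation, lifetime_yr, discount):
          kwh_per_kw_yr = 8760.0 * cf
          cost, energy = capex_per_kw, 0.0
          for t in range(1, lifetime_yr + 1):
              d = (1.0 + discount) ** t
              cost += opex_per_kw_yr / d
              energy += kwh_per_kw_yr * (1.0 - degradation) ** (t - 1) / d
          return cost / energy

      base = lcoe(capex_per_kw=1100.0, opex_per_kw_yr=15.0, cf=0.25,
                  degradation=0.0075, lifetime_yr=30, discount=0.07)
      longer_life = lcoe(1100.0, 15.0, 0.25, 0.0075, 40, 0.07)
      print(f"baseline: {base:.3f} $/kWh, 40-year lifetime: {longer_life:.3f} $/kWh")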

  16. Regional analysis of ground-water recharge: Chapter B in Ground-water recharge in the arid and semiarid southwestern United States (Professional Paper 1703)

    USGS Publications Warehouse

    Flint, Lorraine E.; Flint, Alan L.; Stonestrom, David A.; Constantz, Jim; Ferré, Ty P.A.; Leake, Stanley A.

    2007-01-01

    A modeling analysis of runoff and ground-water recharge for the arid and semiarid southwestern United States was performed to investigate the interactions of climate and other controlling factors and to place the eight study-site investigations into a regional context. A distributed-parameter water-balance model (the Basin Characterization Model, or BCM) was used in the analysis. Data requirements of the BCM included digital representations of topography, soils, geology, and vegetation, together with monthly time-series of precipitation and air-temperature data. Time-series of potential evapotranspiration were generated by using a submodel for solar radiation, taking into account topographic shading, cloudiness, and vegetation density. Snowpack accumulation and melting were modeled using precipitation and air-temperature data. Amounts of water available for runoff and ground-water recharge were calculated on the basis of water-budget considerations by using measured- and generated-meteorologic time series together with estimates of soil-water storage and saturated hydraulic conductivity of subsoil geologic units. Calculations were made on a computational grid with a horizontal resolution of about 270 meters for the entire 1,033,840 square-kilometer study area. The modeling analysis was composed of 194 basins, including the eight basins containing ground-water recharge-site investigations. For each grid cell, the BCM computed monthly values of potential evapotranspiration, soil-water storage, in-place ground-water recharge, and runoff (potential stream flow). A fixed percentage of runoff was assumed to become recharge beneath channels operating at a finer resolution than the computational grid of the BCM. Monthly precipitation and temperature data from 1941 to 2004 were used to explore climatic variability in runoff and ground-water recharge. The selected approach provided a framework for classifying study-site basins with respect to climate and dominant recharge processes. The average climate for all 194 basins ranged from hyperarid to humid, with arid and semiarid basins predominating (fig. 6, chapter A, this volume). Four of the 194 basins had an aridity index of dry subhumid; two of the basins were humid. Of the eight recharge-study sites, six were in semiarid basins, and two were in arid basins. Average-annual potential evapotranspiration showed a regional gradient from less than 1 m/yr in the northeastern part of the study area to more than 2 m/yr in the southwestern part of the study area. Average-annual precipitation was lowest in the two arid-site basins and highest in the two study-site basins in southern Arizona. The relative amount of runoff to in-place recharge varied throughout the study area, reflecting differences primarily in soil water-holding capacity, saturated hydraulic conductivity of subsoil materials, and snowpack dynamics. Climatic forcing expressed in El Niño and Pacific Decadal Oscillation indices strongly influenced the generation of precipitation throughout the study area. Positive values of both indices correlated with the highest amounts of runoff and ground-water recharge.
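
    The water-budget logic described above can be illustrated with a highly simplified monthly "bucket" model (not the actual BCM): water in excess of soil storage becomes in-place recharge, limited by the subsoil saturated hydraulic conductivity, and the remainder becomes runoff. All values below are illustrative.

      # Toy monthly water balance in the spirit of a basin characterization model.
      def monthly_balance(precip, pet, soil_max, ksat_month, soil0=0.0):
          soil, out = soil0, []
          for p, e in zip(precip, pet):
              soil += p
              aet = min(e, soil)                   # actual ET limited by available water
              soil -= aet
              excess = max(0.0, soil - soil_max)   # water the soil profile cannot hold
              soil = min(soil, soil_max)
              recharge = min(excess, ksat_month)   # in-place recharge limited by subsoil K
              runoff = excess - recharge           # remainder becomes potential streamflow
              out.append({"aet": aet, "recharge": recharge, "runoff": runoff, "soil": soil})
          return out

      # One wet winter month followed by a dry summer month (all values in mm):
      for row in monthly_balance(precip=[120.0, 5.0], pet=[40.0, 180.0],
                                 soil_max=75.0, ksat_month=25.0):
          print(row)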

  17. A Preliminary, Full Spectrum, Magnetic Anomaly Grid of the United States with Improved Long Wavelengths for Studying Continental Dynamics: A Website for Distribution of Data

    USGS Publications Warehouse

    Ravat, D.; Finn, C.; Hill, P.; Kucks, R.; Phillips, J.; Blakely, R.; Bouligand, C.; Sabaka, T.; Elshayat, A.; Aref, A.; Elawadi, E.

    2009-01-01

    Under an initiative started by Thomas G. Hildenbrand of the U.S. Geological Survey, we have improved the long-wavelength (50-2,500 km) content of the regional magnetic anomaly compilation for the conterminous United States by utilizing a nearly homogeneous set of National Uranium Resource Evaluation (NURE) magnetic surveys flown from 1975 to 1981. The surveys were flown in quadrangles of 2 deg of longitude by 1 deg of latitude with east-west flight lines spaced 4.8 to 9.6 km apart, north-south tie lines variably spaced, and a nominal terrain clearance of 122 m. Many of the surveys used base-station magnetometers to remove external field variations.

  18. Residential expansion as a continental threat to U.S. coastal ecosystems

    Treesearch

    J.G. Bartlett; D.M. Mageean; R.J. O'Connor

    2000-01-01

    Spatially extensive analysis of satellite, climate, and census data reveals human-environment interactions of regional or continental concern in the United States. A grid-based principal components analysis of Bureau of Census variables revealed two independent demographic phenomena, a-settlement reflecting traditional human settlement patterns and p-settlement...
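
    For readers unfamiliar with the grid-based principal components step mentioned above, the sketch below runs a PCA over synthetic census-like variables attached to grid cells; the variables, latent signals, and cell count are invented for illustration and are not the study's data.

      # PCA on synthetic grid-cell census variables driven by two latent signals.
      import numpy as np

      rng = np.random.default_rng(0)
      n_cells = 1000
      a = rng.normal(size=n_cells)                    # first latent settlement signal
      p = rng.normal(size=n_cells)                    # second latent settlement signal
      X = np.column_stack([a + 0.1 * rng.normal(size=n_cells) for _ in range(3)]
                          + [p + 0.1 * rng.normal(size=n_cells) for _ in range(2)])

      Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize variables
      evals, evecs = np.linalg.eigh(np.cov(Xs, rowvar=False))
      order = np.argsort(evals)[::-1]
      explained = evals[order] / evals.sum()
      print("variance explained by first two components:", explained[:2].round(2))
      scores = Xs @ evecs[:, order[:2]]               # per-grid-cell component scores
      print(scores.shape)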

  19. Islanding detection and over voltage mitigation using wireless sensor networks and electric vehicle charging stations.

    DOT National Transportation Integrated Search

    2016-06-01

    An islanding condition occurs when a distributed generation (DG) unit continues to energize a part of the grid while said part has been isolated from the main electrical utility. In this event, if the power of the DG exceeds the load, a transient...

  20. Preliminary development of the LBL/USGS three-dimensional site-scale model of Yucca Mountain, Nevada

    USGS Publications Warehouse

    1995-01-01

    A three-dimensional model of moisture flow within the unsaturated zone at Yucca Mountain is being developed at Lawrence Berkeley Laboratory (LBL) in cooperation with the U.S. Geological Survey (USGS). This site-scale model covers an area of about 34 km2 and is bounded by major faults to the north, east and west. The model geometry is defined (1) to represent the variations of hydrogeological units between the ground surface and the water table; (2) to be able to reproduce the effect of abrupt changes in hydrogeological parameters at the boundaries between hydrogeological units; and (3) to include the influence of major faults. A detailed numerical grid has been developed based on the locations of boreholes, different infiltration zones, hydrogeological units and their outcrops, major faults, and water level data. Contour maps and isopach maps are presented defining different types of infiltration zones, and the spatial distribution of Tiva Canyon, Paintbrush, and Topopah Spring hydrogeological units. The grid geometry consists of seventeen non-uniform layers which represent the lithological variations within the four main welded and non-welded hydrogeological units. Matrix flow is approximated using the van Genuchten model, and the equivalent continuum approximation is used to account for fracture flow in the welded units. The fault zones are explicitly modeled as porous medium using various assumptions regarding their permeabilities and characteristic curves. One-, two-, and three-dimensional simulations are conducted using the TOUGH2 computer program. Steady-state simulations are performed with various uniform and non-uniform infiltration rates. The results are interpreted in terms of the effect of fault characteristics on the moisture flow distribution, and on location and formation of preferential pathways.
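
    The van Genuchten model referenced above can be written compactly; the sketch below evaluates effective saturation, water content, and Mualem relative permeability for illustrative parameters (not the Yucca Mountain calibration).

      # van Genuchten retention curve with Mualem relative permeability.
      import numpy as np

      def van_genuchten(h, alpha=0.05, n=1.8, theta_r=0.05, theta_s=0.35):
          # h: capillary pressure head (positive); parameters are illustrative.
          m = 1.0 - 1.0 / n
          se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)                  # effective saturation
          theta = theta_r + (theta_s - theta_r) * se                     # water content
          kr = np.sqrt(se) * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2   # Mualem kr
          return theta, kr

      for h in (0.1, 1.0, 10.0, 100.0):
          theta, kr = van_genuchten(h)
          print(f"h = {h:6.1f}: theta = {theta:.3f}, kr = {kr:.2e}")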

  1. Progress in preliminary studies at Ottana Solar Facility

    NASA Astrophysics Data System (ADS)

    Demontis, V.; Camerada, M.; Cau, G.; Cocco, D.; Damiano, A.; Melis, T.; Musio, M.

    2016-05-01

    The fast-increasing share of distributed generation from non-programmable renewable energy sources, such as the strong penetration of photovoltaic technology in distribution networks, has generated several problems for the management and security of the whole power grid. To meet the challenge of a significant share of solar energy in the electricity mix, several actions aimed at increasing grid flexibility and hosting capacity, as well as at improving generation programmability, need to be investigated. This paper focuses on the ongoing preliminary studies at the Ottana Solar Facility, a new experimental power plant located in Sardinia (Italy) and currently under construction, which will offer the possibility to progress in the study of the integration of solar plants in the power grid. The facility integrates a concentrating solar power (CSP) plant, including a thermal energy storage system and an organic Rankine cycle (ORC) unit, with a concentrating photovoltaic (CPV) plant and an electrical energy storage system. The main goal of the facility is to assess small-scale concentrating solar power technology in real operating conditions and to study the integration of the two technologies and the storage systems to produce programmable and controllable power profiles. A model of the CSP plant yield was developed to assess different operational strategies that significantly influence the plant's yearly yield and its overall economic effectiveness. In particular, the assumptions for the ORC module start-up behavior, based on discussions with the manufacturers and on technical datasheets, are described. Finally, the results of the analysis of the "solar driven", "weather forecasts", and "combined storage state of charge (SOC)/weather forecasts" operational strategies are presented.

  2. Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.

    2017-01-01

    A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the farfield noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.

  3. Analysis and Research on the effect of the Operation of Small Hydropower in the Regional Power Grid

    NASA Astrophysics Data System (ADS)

    Ang, Fu; Guangde, Dong; Xiaojun, Zhu; Ruimiao, Wang; Shengyi, Zhu

    2018-03-01

    The reactive power balance and the voltage profile of a power network affect not only the system voltage quality but also the economic operation of the power grid. In past reactive power balance and voltage analyses, the main concern has been the problem of insufficient reactive power and low system voltage. When small hydropower stations generate heavily during the wet season while the load is low, the system can instead face a reactive power surplus and high voltages; if the capability of small hydropower units to operate in phase-modifying (condenser-like) mode is taken into account, voltage problems at key points on the high-voltage side of the system can be effectively mitigated.

  4. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    PubMed

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

    Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved approximately 4.2-fold reduction in reconstruction time compared with GRASP (∼333 min versus ∼78 min) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. The GRASP reconstruction can be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  5. Business Case Analysis of the Marine Corps Base Pendleton Virtual Smart Grid

    DTIC Science & Technology

    2017-06-01

    ...Metering Infrastructure on DOD installations. An examination of five case studies highlights the costs and benefits of the Virtual Smart Grid (VSG) developed by Space and Naval Warfare Systems Command for use at Marine Corps Base Pendleton. The report summarizes smart grid benefits and the estimated costs and benefits of the VSG...

  6. Foundational Report Series. Advanced Distribution management Systems for Grid Modernization (Importance of DMS for Distribution Grid Modernization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui

    2015-09-01

    Grid modernization is transforming the operation and management of electric distribution systems from manual, paper-driven business processes to electronic, computer-assisted decisionmaking. At the center of this business transformation is the distribution management system (DMS), which provides a foundation from which optimal levels of performance can be achieved in an increasingly complex business and operating environment. Electric distribution utilities are facing many new challenges that are dramatically increasing the complexity of operating and managing the electric distribution system: growing customer expectations for service reliability and power quality, pressure to achieve better efficiency and utilization of existing distribution system assets, and reduction of greenhouse gas emissions by accommodating high penetration levels of distributed generating resources powered by renewable energy sources (wind, solar, etc.). Recent “storm of the century” events in the northeastern United States and the lengthy power outages and customer hardships that followed have greatly elevated the need to make power delivery systems more resilient to major storm events and to provide a more effective electric utility response during such regional power grid emergencies. Despite these newly emerging challenges for electric distribution system operators, only a small percentage of electric utilities have actually implemented a DMS. This paper discusses reasons why a DMS is needed and why the DMS may emerge as a mission-critical system that will soon be considered essential as electric utilities roll out their grid modernization strategies.

  7. Buffering PV output during cloud transients with energy storage

    NASA Astrophysics Data System (ADS)

    Moumouni, Yacouba

    This thesis considers the use of the major types of energy storage to mitigate the power output transients of grid-tied CPV systems caused by fast-moving cloud cover. The approach presented here is to buffer the intermittency of CPV output power with an energy storage device (used batteries) purchased cheaply from EV owners or battery leasers. When the CPV system is connected to the grid with the proper energy storage, the main goal is to smooth out the intermittent solar power and the fluctuating load of the grid with a convenient control strategy. The thesis provides a detailed analysis, with accompanying Matlab code, of how to put a constant amount of power onto the grid during the daytime and, in addition, shift less valuable off-peak electricity to the on-peak period, i.e., between 1 p.m. and 7 p.m., when the electricity price is higher. In this study, a range of constant base power levels was assumed, including 15 kW, 20 kW, 21 kW, 22 kW, 23 kW, 24 kW, and 25 kW. The iterative approach increased the battery capacity in steps of 5 while decreasing the base supply by the same step size until satisfactory results were achieved. With the chosen battery capacity of 54 kWh coupled to data from the Amonix CPV 7700 unit in Las Vegas over a 3-month period, it was found that 20 kW was the largest constant load the system could supply uninterruptedly to the utility company. Simulated results are presented to show the feasibility of the proposed scheme.
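
    The sizing logic described above can be sketched as follows: for a candidate constant grid feed and battery capacity, simulate a day of CPV output and check whether the battery can hold the feed constant over a daytime window, stepping the capacity up and the feed down until a feasible pair is found. The CPV profile, window, efficiency, and step sizes below are invented for illustration and are not the Amonix 7700 data or the thesis settings.

      # Toy battery-plus-CPV sizing loop for a constant daytime grid feed.
      import numpy as np

      hours = np.arange(24)
      cpv_kw = np.clip(60.0 * np.sin(np.pi * (hours - 6) / 12.0), 0.0, None)  # toy CPV
      window = (hours >= 8) & (hours < 19)        # hours with a constant grid feed

      def feasible(base_kw, batt_kwh, eff=0.9, soc0=0.0):
          soc = soc0
          for p, on in zip(cpv_kw, window):
              load = base_kw if on else 0.0
              surplus = p - load                  # kWh over the hour
              if surplus >= 0:
                  soc = min(batt_kwh, soc + eff * surplus)
              else:
                  need = -surplus / eff
                  if need > soc:
                      return False                # battery empty: feed cannot be held
                  soc -= need
          return True

      base_kw, batt_kwh = 40.0, 5.0
      while not feasible(base_kw, batt_kwh) and base_kw > 0:
          batt_kwh += 5.0                         # step capacity up ...
          base_kw -= 1.0                          # ... and the base feed down
      print(f"first feasible pair for this toy profile: ~{base_kw:.0f} kW, {batt_kwh:.0f} kWh")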

  8. Developing high-resolution urban scale heavy-duty truck emission inventory using the data-driven truck activity model output

    NASA Astrophysics Data System (ADS)

    Perugu, Harikishan; Wei, Heng; Yao, Zhuo

    2017-04-01

    Air quality modelers often rely on regional travel demand models to estimate the vehicle activity data for emission models; however, most current travel demand models can only output reliable person travel activity rather than goods/service-specific travel activity. This paper presents the successful application of the data-driven Spatial Regression and output optimization Truck model (SPARE-Truck) to develop truck-related activity inputs for the mobile emission model, and eventually to produce truck-specific gridded emissions. To validate the proposed methodology, the Cincinnati metropolitan area in the United States was selected as a case study site. From the results, it is found that truck miles traveled predicted using traditional methods tend to be underestimated (overall 32% less than the proposed model). The coefficient of determination values for different truck types range between 0.82 and 0.97, except for motor homes, which showed the poorest model fit at 0.51. Consequently, the emission inventories calculated from the traditional methods were also underestimated: -37% for NOx, -35% for SO2, -43% for VOC, -43% for BC, -47% for OC, and -49% for PM2.5. Further, the proposed method also predicted within ∼7% of the national emission inventory for all pollutants. The bottom-up gridding methodology used in this paper allocates emissions to the grid cells where more truck activity is expected, and it is verified against regional land-use data. Most importantly, using the proposed method it is easy to segregate the gridded emission inventory by truck type, which is of particular interest for decision makers, since currently there is no reliable method to test different truck-category-specific travel-demand management strategies for air pollution control.

  9. A robust adaptive load frequency control for micro-grids.

    PubMed

    Khooban, Mohammad-Hassan; Niknam, Taher; Blaabjerg, Frede; Davari, Pooya; Dragicevic, Tomislav

    2016-11-01

    The goal of this study is to introduce a novel robust load frequency control (LFC) strategy for micro-grids (MGs) in islanded mode operation. Power generators in MGs cannot supply steady electric power output and sometimes cause an imbalance between supply and demand. A battery energy storage system (BESS) is one effective solution to these problems. Because of the high cost of a BESS, the Vehicle-to-Grid (V2G) idea is that the battery of an electric vehicle (EV) can be applied as the equivalent of a large-scale BESS in MGs. As a result, a new robust control strategy for an islanded micro-grid (MG) is introduced that can account for the effect of electric vehicles (EVs). Moreover, in this paper, a new combination of General Type II Fuzzy Logic Sets (GT2FLS) and the Modified Harmony Search Algorithm (MHSA) is applied for adaptive tuning of a proportional-integral (PI) controller. Implementing General Type II Fuzzy Systems is computationally expensive; however, using a recently introduced α-plane representation, a GT2FLS can be seen as a composition of several Interval Type II Fuzzy Logic Systems (IT2FLS), each with a corresponding level of α. Real data from an offshore wind farm in Sweden and solar radiation data from Aberdeen (United Kingdom) were used to examine the performance of the proposed controller. A comparison is made between the results of the Optimal Fuzzy-PI (OFPI) controller and those of the Optimal Interval Type II Fuzzy-PI (IT2FPI) controller, which represent recent advances in the area. The simulation results demonstrate the effectiveness of the proposed controller. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Ford Plug-In Project: Bringing PHEVs to Market Demonstration and Validation Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Annunzio, Julie; Slezak, Lee; Conley, John Jason

    2014-03-26

    This project is in support of our national goal to reduce our dependence on fossil fuels. By supporting efforts that contribute toward the successful mass production of plug-in hybrid electric vehicles, our nation's transportation-related fuel consumption can be offset with energy from the grid. Over four and a half years ago, when this project was originally initiated, plug-in electric vehicles were not readily available in the mass marketplace. Through the creation of a 21 unit plug-in hybrid vehicle fleet, this program was designed to demonstrate the feasibility of the technology and to help build cross-industry familiarity with the technology and the interface of this technology with the grid. Since then, however, plug-in vehicles have become increasingly more commonplace in the market. Ford, itself, now offers an all-electric vehicle and two plug-in hybrid vehicles in North America and has announced a third plug-in vehicle offering for Europe. Lessons learned from this project have helped in these production vehicle launches and are mentioned throughout this report. While the technology of plugging in a vehicle to charge a high voltage battery with energy from the grid is now in production, the ability for vehicle-to-grid or bi-directional energy flow was farther away than originally expected. Several technical, regulatory and potential safety issues prevented progressing the vehicle-to-grid energy flow (V2G) demonstration and, after a review with the DOE, V2G was removed from this demonstration project. Also proving challenging were communications between a plug-in vehicle and the grid or smart meter. While this project successfully demonstrated the vehicle to smart meter interface, cross-industry and regulatory work is still needed to define the vehicle-to-grid communication interface.

  11. Linking field-based ecological data with remotely sensed data using a geographic information system in two malaria endemic urban areas of Kenya.

    PubMed

    Eisele, Thomas P; Keating, Joseph; Swalm, Chris; Mbogo, Charles M; Githeko, Andrew K; Regens, James L; Githure, John I; Andrews, Linda; Beier, John C

    2003-12-10

    BACKGROUND: Remote sensing technology provides detailed spectral and thermal images of the earth's surface from which surrogate ecological indicators of complex processes can be measured. METHODS: Remote sensing data were overlaid onto georeferenced entomological and human ecological data randomly sampled during April and May 2001 in the cities of Kisumu (population ≈ 320,000) and Malindi (population ≈ 81,000), Kenya. Grid cells of 270 meters x 270 meters were used to generate spatial sampling units for each city for the collection of entomological and human ecological field-based data. Multispectral Thermal Imager (MTI) satellite data in the visible spectrum at five meter resolution were acquired for Kisumu and Malindi during February and March 2001, respectively. The MTI data were fit and aggregated to the 270 meter x 270 meter grid cells used in field-based sampling using a geographic information system. The normalized difference vegetation index (NDVI) was calculated and scaled from MTI data for selected grid cells. Regression analysis was used to assess associations between NDVI values and entomological and human ecological variables at the grid cell level. RESULTS: Multivariate linear regression showed that as household density increased, mean grid cell NDVI decreased (global F-test = 9.81, df 3,72, P-value = <0.01; adjusted R2 = 0.26). Given household density, the number of potential anopheline larval habitats per grid cell also increased with increasing values of mean grid cell NDVI (global F-test = 14.29, df 3,36, P-value = <0.01; adjusted R2 = 0.51). CONCLUSIONS: NDVI values obtained from MTI data were successfully overlaid onto georeferenced entomological and human ecological data spatially sampled at a scale of 270 meters x 270 meters. Results demonstrate that NDVI at such a scale was sufficient to describe variations in entomological and human ecological parameters across both cities.
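
    The NDVI step referenced above is simple to reproduce in outline: NDVI = (NIR - Red) / (NIR + Red) per pixel, followed by aggregation to coarser grid cells. The sketch below uses synthetic reflectance bands and assumes 5 m pixels aggregated to 270 m cells (54 x 54 pixel blocks); it is not the study's MTI data.

      # NDVI from synthetic red/NIR bands, block-averaged to 270 m grid cells.
      import numpy as np

      rng = np.random.default_rng(0)
      nir = rng.uniform(0.2, 0.6, size=(540, 540))     # toy reflectance bands
      red = rng.uniform(0.05, 0.3, size=(540, 540))

      ndvi = (nir - red) / (nir + red)

      block = 54                                        # 54 pixels * 5 m = 270 m
      cells = ndvi.reshape(ndvi.shape[0] // block, block,
                           ndvi.shape[1] // block, block).mean(axis=(1, 3))
      print(cells.shape)                                # (10, 10) grid-cell mean NDVI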

  12. Spatial and temporal patterns of plantation forests in the United States since the 1930s: an annual and gridded data set for regional Earth system modeling

    NASA Astrophysics Data System (ADS)

    Chen, Guangsheng; Pan, Shufen; Hayes, Daniel J.; Tian, Hanqin

    2017-08-01

    Plantation forest area in the conterminous United States (CONUS) ranked second among the world's nations in the land area apportioned to forest plantations. As compared to naturally regenerated forests, plantation forests demonstrate significant differences in biophysical characteristics and in biogeochemical and hydrological cycles as a result of more intensive management practices. Inventory data have been reported for multiple time periods on plot, state, and regional scales across the CONUS, but the requisite annual and spatially explicit plantation data set over a long-term period for analysis of the role of plantation management on regional or national scales is lacking. Through synthesis of multiple inventory data sources, this study developed methods to spatialize the time series plantation forest and tree species distribution data for the CONUS over the 1928-2012 time period. According to this new data set, plantation forest area increased from near zero in the 1930s to 268.27 thousand km2 in 2012, accounting for 8.65 % of the total forestland area in the CONUS. Regionally, the South contained the highest proportion of plantation forests, accounting for about 19.34 % of total forestland area in 2012. The time series and gridded data set developed here can be readily applied in regional Earth system modeling frameworks for assessing the impacts of plantation management practices on forest productivity, carbon and nitrogen stocks, and greenhouse gas (e.g., CO2, CH4, and N2O) and water fluxes on regional or national scales. The gridded plantation distribution and tree species maps, and the interpolated state-level annual tree planting area and plantation area during 1928-2012, are available from https://doi.org/10.1594/PANGAEA.873558.

  13. Digital spatial data for predicted nitrate and arsenic concentrations in basin-fill aquifers of the Southwest Principal Aquifers study area

    USGS Publications Warehouse

    McKinney, Tim S.; Anning, David W.

    2012-01-01

    This product "Digital spatial data for predicted nitrate and arsenic concentrations in basin-fill aquifers of the Southwest Principal Aquifers study area" is a 1:250,000-scale vector spatial dataset developed as part of a regional Southwest Principal Aquifers (SWPA) study (Anning and others, 2012). The study examined the vulnerability of basin-fill aquifers in the southwestern United States to nitrate contamination and arsenic enrichment. Statistical models were developed by using the random forest classifier algorithm to predict concentrations of nitrate and arsenic across a model grid that represents local- and basin-scale measures of source, aquifer susceptibility, and geochemical conditions.

  14. Knowledge Transfer Project: Cultivating Smart Energy Solutions through Dynamic Peer-to-Peer Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    As energy policy makers and professionals convene in the Oresund region for the 9th Annual Clean Energy Ministerial (CEM9), the global community is as united as ever around the common goal of accelerating the transition to global clean energy. Through sustained collective effort and thought leadership, CEM partners and stakeholders are systematically addressing the barriers to the widescale deployment of clean energy technologies. Pivotal to their progress is the efficient sharing and dissemination of knowledge. To address that need, the CEM-initiative International SmartGrid Action Network (ISGAN) launched the Knowledge Transfer Project (KTP) in March 2016 to capture, collect, and share knowledge about smart grid technologies among countries and key stakeholders. Building on ISGAN's experience with delivering deep-dive workshops, the KTP fosters meaningful international dialogue on smart grids with a focus on developing competence and building capacity. After a successful 2016 pilot project and two consecutive projects, each with a different focus and structure, the KTP has become an established practice that can support existing ISGAN or CEM initiatives. To accommodate different purposes, needs, and practical circumstances, ISGAN has adopted three basic models for delivering KTP workshops: Country-Centric, Multilateral, and Hybrid. This fact sheet describes each approach through case studies of workshops in Mexico, India, and Belgium, and invites new ideas and partners for future KTPs.

  15. CFD Simulation On The Pressure Distribution For An Isolated Single-Story House With Extension: Grid Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yahya, W. N. W.; Zaini, S. S.; Ismail, M. A.; Majid, T. A.; Deraman, S. N. C.; Abdullah, J.

    2018-04-01

    Damage from wind-related disasters is increasing with global climate change. Many studies have examined the wind effects surrounding low-rise buildings using wind tunnel tests or numerical simulations. The use of numerical simulation is relatively cheap but requires very good command of the software, the correct input parameters, and an optimum grid or mesh. However, before a study can be conducted, a grid sensitivity test must be carried out to determine a suitable cell count for the final mesh, ensuring an accurate result with less computing time. This study demonstrates the numerical procedures for conducting a grid sensitivity analysis using five models with different grid schemes. The pressure coefficients (CP) were observed along the wall and roof profile and compared between the models. The results showed that the medium grid scheme can be used in place of the finer grid schemes and is able to produce high-accuracy results, as the differences in CP values were found to be insignificant.

  16. Summary of Data from the Fifth AIAA CFD Drag Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Levy, David W.; Laflin, Kelly R.; Tinoco, Edward N.; Vassberg, John C.; Mani, Mori; Rider, Ben; Rumsey, Chris; Wahls, Richard A.; Morrison, Joseph H.; Brodersen, Olaf P.; hide

    2013-01-01

    Results from the Fifth AIAA CFD Drag Prediction Workshop (DPW-V) are presented. As with past workshops, numerical calculations are performed using industry-relevant geometry, methodology, and test cases. This workshop focused on force/moment predictions for the NASA Common Research Model wing-body configuration, including a grid refinement study and an optional buffet study. The grid refinement study used a common grid sequence derived from a multiblock topology structured grid. Six levels of refinement were created resulting in grids ranging from 0.64x10(exp 6) to 138x10(exp 6) hexahedra - a much larger range than is typically seen. The grids were then transformed into structured overset and hexahedral, prismatic, tetrahedral, and hybrid unstructured formats all using the same basic cloud of points. This unique collection of grids was designed to isolate the effects of grid type and solution algorithm by using identical point distributions. This study showed reduced scatter and standard deviation from previous workshops. The second test case studied buffet onset at M=0.85 using the Medium grid (5.1x10(exp 6) nodes) from the above described sequence. The prescribed alpha sweep used finely spaced intervals through the zone where wing separation was expected to begin. Some solutions exhibited a large side-of-body separation bubble that was not observed in the wind tunnel results. An optional third case used three sets of geometry, grids, and conditions from the Turbulence Model Resource website prepared by the Turbulence Model Benchmarking Working Group. These simple cases were intended to help identify potential differences in turbulence model implementation. Although a few outliers and issues affecting consistency were identified, the majority of participants produced consistent results.
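
    A common use of a uniformly refined grid family like the one described above is to estimate an observed order of accuracy and a Richardson-extrapolated value from a quantity computed on three grids; the drag values and refinement ratio below are made up for illustration.

      # Observed order of accuracy and Richardson extrapolation from three grids.
      import math

      r = 1.5                      # grid refinement ratio (h_coarse / h_fine)
      cd_coarse, cd_medium, cd_fine = 0.02732, 0.02694, 0.02678   # illustrative values

      p = math.log((cd_coarse - cd_medium) / (cd_medium - cd_fine)) / math.log(r)
      cd_extrapolated = cd_fine + (cd_fine - cd_medium) / (r**p - 1.0)
      print(f"observed order p ~ {p:.2f}, extrapolated CD ~ {cd_extrapolated:.5f}")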

  17. A century of sprawl in the United States

    PubMed Central

    Barrington-Leigh, Christopher; Millard-Ball, Adam

    2015-01-01

    The urban street network is one of the most permanent features of cities. Once laid down, the pattern of streets determines urban form and the level of sprawl for decades to come. We present a high-resolution time series of urban sprawl, as measured through street network connectivity, in the United States from 1920 to 2012. Sprawl started well before private car ownership was dominant and grew steadily until the mid-1990s. Over the last two decades, however, new streets have become significantly more connected and grid-like; the peak in street-network sprawl in the United States occurred in ∼1994. By one measure of connectivity, the mean nodal degree of intersections, sprawl fell by ∼9% between 1994 and 2012. We analyze spatial variation in these changes and demonstrate the persistence of sprawl. Places that were built with a low-connectivity street network tend to stay that way, even as the network expands. We also find suggestive evidence that local government policies impact sprawl, as the largest increases in connectivity have occurred in places with policies to promote gridded streets and similar New Urbanist design principles. We provide for public use a county-level version of our street-network sprawl dataset comprising a time series of nearly 100 y. PMID:26080422
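
    The connectivity metric quoted above, the mean nodal degree of intersections, is easy to illustrate on toy street networks; the sketch below compares a gridded layout with a tree-like (cul-de-sac-heavy) layout using networkx, with sizes chosen arbitrarily.

      # Mean nodal degree of intersections for two toy street-network patterns.
      import networkx as nx

      gridded = nx.grid_2d_graph(6, 6)          # 6x6 gridded street network
      dendritic = nx.balanced_tree(r=3, h=3)    # tree-like suburban pattern

      def mean_nodal_degree(G):
          degs = [d for _, d in G.degree()]
          return sum(degs) / len(degs)

      print(f"gridded:   {mean_nodal_degree(gridded):.2f}")   # approaches 4 for large grids
      print(f"dendritic: {mean_nodal_degree(dendritic):.2f}") # well below 3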

  19. A high throughput geocomputing system for remote sensing quantitative retrieval and a case study

    NASA Astrophysics Data System (ADS)

    Xue, Yong; Chen, Ziqiang; Xu, Hui; Ai, Jianwen; Jiang, Shuzheng; Li, Yingjie; Wang, Ying; Guang, Jie; Mei, Linlu; Jiao, Xijuan; He, Xingwei; Hou, Tingting

    2011-12-01

    The quality and accuracy of remote sensing instruments have improved significantly; however, rapid processing of large-scale remote sensing data has become the bottleneck for remote sensing quantitative retrieval applications. Remote sensing quantitative retrieval is a data-intensive computation and a core issue in high-throughput computing. The remote sensing quantitative retrieval Grid workflow is a high-level core component of the remote sensing Grid, used to support the modeling, reconstruction, and implementation of large-scale, complex remote sensing applications. In this paper, we study a middleware component of the remote sensing Grid: a dynamic Grid workflow for quantitative retrieval applications on a Grid platform. We designed a novel architecture for the remote sensing Grid workflow and, following this architecture, constructed the Remote Sensing Information Service Grid Node (RSSN) with Condor. We developed a graphical user interface (GUI) tool for composing remote sensing processing Grid workflows and took aerosol optical depth (AOD) retrieval as an example. The case study showed that significant improvement in system performance could be achieved with this implementation. The results also give a perspective on the potential of applying Grid workflow practices to remote sensing quantitative retrieval problems using commodity-class PCs.
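
    The core idea of a Grid workflow, composing retrieval steps into a dependency graph that a scheduler can execute, can be illustrated without Condor or the RSSN node. The sketch below is a hypothetical, single-machine stand-in: the stage names (calibrate, cloud_mask, geolocate, retrieve_aod) are illustrative only, and Python's graphlib merely plays the role of the workflow engine.

      from graphlib import TopologicalSorter

      # Hypothetical AOD-retrieval workflow as a task dependency graph; a Grid
      # scheduler would dispatch ready tasks to worker nodes instead of running
      # them sequentially in one process.
      def calibrate(scene):            return f"calibrated({scene})"
      def cloud_mask(cal):             return f"masked({cal})"
      def geolocate(cal):              return f"geolocated({cal})"
      def retrieve_aod(masked, geo):   return f"aod({masked}, {geo})"

      TASKS = {"calibrate": calibrate, "cloud_mask": cloud_mask,
               "geolocate": geolocate, "retrieve_aod": retrieve_aod}
      DEPENDS = {"cloud_mask": ("calibrate",), "geolocate": ("calibrate",),
                 "retrieve_aod": ("cloud_mask", "geolocate")}

      def run_workflow(scene):
          """Execute tasks in dependency order, feeding each task its inputs."""
          results = {}
          for task in TopologicalSorter(DEPENDS).static_order():
              inputs = [results[d] for d in DEPENDS.get(task, ())] or [scene]
              results[task] = TASKS[task](*inputs)
          return results["retrieve_aod"]

      print(run_workflow("MODIS_scene_001"))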

  20. Multigrid method for stability problems

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1988-01-01

    The problem of calculating the stability of steady state solutions of differential equations is treated. Leading eigenvalues (i.e., those with maximal real part) of the large matrices that arise from discretization are to be calculated. An efficient multigrid method for solving these problems is presented. The method begins by obtaining an initial approximation for the dominant subspace on a coarse level using damped Jacobi relaxation, which proceeds until enough accuracy for the dominant subspace has been obtained. The resulting grid functions are then used as an initial approximation for appropriate eigenvalue problems. These problems are solved first on coarse levels and then refined until the desired accuracy for the eigenvalues has been achieved. The method employs local relaxation on all levels together with a global change on the coarsest level only, which is designed to separate the different eigenfunctions as well as to update their corresponding eigenvalues. Coarsening is done using the FAS formulation in a non-standard way in which the right-hand side of the coarse grid equations involves unknown parameters to be solved for on the coarse grid. In particular, this leads to a new multigrid method for calculating the eigenvalues of symmetric problems. Numerical experiments with a model problem demonstrate the effectiveness of the proposed method. Using an FMG algorithm, a solution accurate to the level of the discretization error is obtained in just a few work units (fewer than 10), where a work unit is the work involved in one Jacobi relaxation on the finest level.
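
    The first step of the method, damped Jacobi relaxation to build an initial approximation of the dominant subspace followed by extraction of eigenvalue estimates from that subspace, can be sketched in a few lines of numpy. The example below covers only that first step on a 1D model problem (a discrete Laplacian, whose leading eigenvalues are approximately -k²π²); it omits the FAS coarse-grid correction and the FMG cycle, and the parameter choices are illustrative.

      import numpy as np

      def damped_jacobi(A, X, omega=2.0/3.0, sweeps=500):
          """Damped Jacobi applied to A x = 0: high-frequency components decay
          quickly, so the iterates align with the smooth, dominant subspace of A."""
          d_inv = 1.0 / np.diag(A)
          for _ in range(sweeps):
              X = X - omega * (d_inv[:, None] * (A @ X))
          return X

      def leading_eigs(A, nev=3, block=10, sweeps=500, seed=0):
          """Rough Rayleigh-Ritz estimates of the nev eigenvalues of A with
          maximal real part, using a block of smoothed random vectors."""
          rng = np.random.default_rng(seed)
          X = damped_jacobi(A, rng.standard_normal((A.shape[0], block)), sweeps=sweeps)
          Q, _ = np.linalg.qr(X)                  # orthonormal basis for the smoothed block
          theta = np.linalg.eigvals(Q.T @ A @ Q)  # Ritz values of the projected problem
          return theta[np.argsort(theta.real)[::-1]][:nev]

      # 1D Laplacian with Dirichlet boundaries; leading eigenvalues ~ -(k*pi)^2.
      n = 200
      h = 1.0 / (n + 1)
      A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / h**2
      print(leading_eigs(A))  # rough estimates near -9.9, -39.5, -88.8

    In the method described above, such coarse-level estimates serve only as a starting point and are subsequently refined level by level with local relaxation and the coarsest-level global change.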
